The New Fox Sports Score Bug is Awful
The iOS 7 of on-screen graphics
On Sunday, Fox Sports debuted an all-new, redesigned score bug during Super Bowl 59. A score bug is, as defined by Wikipedia, a “digital on-screen graphic… displayed… during a broadcast of a sporting event in order to display the current score and other statistics.” Fox’s score bug has been center-aligned for years to accommodate vertical videos posted to the internet, and I generally liked the network’s old design for myriad reasons. The new one breaks everything good about its predecessor, and I think the reasons for criticizing it are more complicated than the innate nature of humans to dislike change. For context, here are the two items in question — first new, second old:


For me, every score bug on television must meet the following criteria:
- It must have high contrast. Black text on a white background is preferred, but white text on black is also acceptable.
- The font must be in boldface with clear, sans-serif numerals.
- It must use color to differentiate teams rather than flags or logos, which are hard to discern from a distance or at odd angles.
- It must occupy as little vertical and horizontal space as possible while maintaining a large font size.
- It must clearly emphasize which team or player has possession of the ball at all times.
Fox’s old chyron accomplished most of these objectives well enough for my liking. The numbers were white on a black, gradient background, which was great — a remarkable change from tennis score bugs, which are tiny with bad contrast. The logos of the teams were surrounded by their colors, which made it easy to check the score without looking too hard. It was clear enough which team had possession by the white line that appeared above the score. And perhaps most importantly, the score bug was compact while retaining readability, which made it less distracting while providing ample utility. My only complaint was that the design looked too busy, almost like it was made for the 2010-to-2015 era of user interface design. I don’t think there was a doubt in any designer’s mind that the Fox chyron could do with a bold gut-and-redo.
So, on Sunday, the update came, and it was horrendous. From the moment I laid my eyes on it, it looked like it wasn’t rendered properly or that half the chyron was missing. The timer in the center with the translucent gray background is by far the laziest design I’ve seen on national television in recent history, and while I think the contrast between the text and background is acceptable, I at least think the corners should be rounded. And Fox practically gave up when designing the numbers, which are aggravating beyond belief. It’s not the size that bugs me; it’s that they have no background color at all. It might be that I’m especially persnickety about contrast, but the subtle gradient on the numerals isn’t enough for me. They should have a color background or, even better, a near-pitch-black surface like Apple TV+’s Friday Night Baseball score bug.
I actually think Apple’s score bug nails it, though I think it could do with more color to differentiate between teams. (Jason Snell at Six Colors has good images on his post about Friday Night Baseball from a few years ago.) The numerals are bold and clear, the graphic isn’t too large, and it uses varying amounts of transparency to guide the eyes to the most important information first. It perfectly exemplifies my biggest gripe with Fox’s new chyron: it’s laid out unnaturally for English readers. English is read left to right and top to bottom, and thus, the most important information in any graphic should be at the top left because that’s where our eyes are most inclined to look first. Because Fox’s new score bug is so large, it’s unnatural to begin at the center; I start reading from left to right. Suddenly, the score bug isn’t so glanceable anymore. Apple’s layout maximizes versatility by center-aligning information at the top.
The new Fox chyron has no information density or architecture whatsoever; it’s entirely unclear what someone should pay attention to at just a glance. The team names are highlighted in their colors, but that’s not the most important part of the bug: the score is. The score and teams have no continuity; they’re almost on separate lines due to the horizontal line gap. The only way to read the new graphic is from left to right: Kansas City, 0, 1:49 remaining, second quarter, 17, Philadelphia. A good score bug should place the scoring information first, at the top, hierarchically: Kansas City, 0, 17, Philadelphia. The current down is important in football, but nothing trumps the score, which must always reign supreme. The layout of the new score bug is too horizontally focused, which has the unfortunate side effect of making the graphic too large. It’s almost always easier to lay text out vertically than horizontally to preserve screen real estate — it’s just a more compact layout.
There are elements of the new design that I appreciate, like the letters representing the teams over the logos that casual viewers don’t seem to remember, or the highlight color behind the down number to instantly relay which team has possession, which is significantly more readable than the previous design. But mostly, I believe the numerous shortcomings outweigh the few strengths: the layout is awful, it’s lacking in contrast, and the minimalist design just doesn’t fit with the theme of the broadcast. On the third point: I contend the previous score bug was too flashy and carried too much 2010s aesthetic, but the new one is arguably worse. It reminds me of iOS 7, when Jony Ive, Apple’s ex-chief designer, flattened all of the life out of iOS and made the operating system scream monotony. Again, minimalism isn’t a bad thing: look at Apple’s Friday Night Baseball chyron for an example. But the hard 90-degree angles and boxy backgrounds aren’t elegant or tasteful and are too bland for my liking.
Perhaps I’m asking too much from a mediocre sports broadcaster, but it’s evident that the new design is a significant regression from the previous version. I don’t think Fox should eliminate the new design, however. Here are my proposals to improve the existing design, ranked in order of importance:
- Place the scores at the top and everything else one level below the main data. The old chyron accomplished this perfectly: use translucency and color to separate the two levels and establish a hierarchy.
- Use gradients instead of solid color blocks. The entire score bug should be one continuous piece, not floating tiles hovering over the field with hard, uninviting corners. Use a large enough rounded rectangle with gradients for the teams’ colors, being sure to keep the initialisms over the indiscernible logos.
- Retain transparency where it’s needed. The colors can be slightly translucent so long as they don’t impact contrast. Prefer solid-colored numerals over gray gradients, but use translucency in the background to make the score bug less conspicuous overall. The current version, again, has two floating tiles of color suspended in mid-air. It doesn’t look like a chyron — it looks out of place.
Interface design is difficult, and even Apple took years to perfect iOS’ design post-iOS 7. But the way Apple addressed iOS 7’s shortcomings was by slowly incorporating shadows, depth, and textures into its operating systems over the following years. Notably, this wasn’t a pivot back into skeuomorphism as much as it was an introduction of basic digital-first materials in software design. The Dynamic Island’s rounded corners and whimsical animations don’t necessarily model a real-world object, but they add character to the operating system after crossfades and harsh transitions took over in 2013. macOS 11 Big Sur re-introduced rounded corners after macOS 10.10 Yosemite removed them, creating a stoic, business-like design. iOS now uses three shades of certain system colors — primary, secondary, and tertiary — to designate importance.
Fox’s new score bug is an iOS 7-style pivot from the textured design of the previous graphic. That sets it up for a long career in the 2020s, but Fox needs to bring humanity back into the design, incorporating curves, hierarchy, depth, and texture into a more pleasing, sensible chyron. Describing a score bug like this might seem inappropriate for such a simple, almost unimportant element when the primary focus should be on the game, but it’s important to make information glanceable when so many people need to look at it for hours. The best designs are the ones that go unnoticed except for when they change, and Fox’s latest design is too distracting and attracts too much attention. A score bug should be subdued and easy on the eyes, and the new one is anything but.
Britain Reportedly Forced Apple to Add Advanced Data Protection Back Door
Joseph Menn, reporting exclusively for The Washington Post:
Security officials in the United Kingdom have demanded that Apple create a back door allowing them to retrieve all the content any Apple user worldwide has uploaded to the cloud, people familiar with the matter told The Washington Post.
The British government’s undisclosed order, issued last month, requires blanket capability to view fully encrypted material, not merely assistance in cracking a specific account, and has no known precedent in major democracies. Its application would mark a significant defeat for tech companies in their decades-long battle to avoid being wielded as government tools against their users, the people said, speaking under the condition of anonymity to discuss legally and politically sensitive issues.
Rather than break the security promises it made to its users everywhere, Apple is likely to stop offering encrypted storage in the U.K., the people said. Yet that concession would not fulfill the U.K. demand for backdoor access to the service in other countries, including the United States.
The office of the Home Secretary has served Apple with a document called a technical capability notice, ordering it to provide access under the sweeping U.K. Investigatory Powers Act of 2016, which authorizes law enforcement to compel assistance from companies when needed to collect evidence, the people said.
It’s already possible for governments around the world — including the United States, via the Federal Bureau of Investigation — to subpoena Apple and receive access to a user’s entire Apple account, including their photos, text messages, and iPhone backups, as long as the account doesn’t have Advanced Data Protection enabled. Advanced Data Protection, introduced in December 2022 after years of setbacks due to government pressure, effectively hands the user the encryption key to their Apple account; in the case of traditional Apple accounts, Apple stores a second copy of the encryption key on its servers. It’s a hassle to enable Advanced Data Protection, mainly because it requires safely storing a 28-character recovery key, which can be used to decrypt the data in the event a user loses access to all of their Apple devices, which usually store the keys. Most users don’t turn it on because if they lose that recovery key, Apple can no longer let them into their account.
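The whole distinction comes down to who holds a copy of the decryption key. Here’s a toy sketch of that difference (a deliberately simplified model for illustration, not Apple’s actual cryptography or APIs):

```python
import secrets

# Toy model of the key-custody difference between a standard cloud account
# and one with end-to-end encryption like Advanced Data Protection (ADP).
# This is illustrative only; Apple's real system is far more involved.

class CloudAccount:
    def __init__(self, advanced_data_protection: bool):
        self.key = secrets.token_bytes(32)  # data encryption key
        self.adp = advanced_data_protection
        # Standard accounts: the provider keeps a second copy of the key,
        # so it can reset access for the user (and respond to subpoenas).
        self.server_key_copy = None if self.adp else self.key
        # ADP accounts: only the user's devices and a recovery key can
        # restore access. (28 hex characters, mirroring Apple's format.)
        self.recovery_key = secrets.token_hex(14) if self.adp else None

    def provider_can_decrypt(self) -> bool:
        return self.server_key_copy is not None

    def user_can_recover(self, has_device: bool, recovery_key) -> bool:
        if not self.adp:
            return True  # the provider can always reset access
        return has_device or recovery_key == self.recovery_key

standard = CloudAccount(advanced_data_protection=False)
adp = CloudAccount(advanced_data_protection=True)

assert standard.provider_can_decrypt()  # subpoena-able
assert not adp.provider_can_decrypt()   # end-to-end encrypted
# Lose every device *and* the recovery key, and the data is gone:
assert not adp.user_can_recover(has_device=False, recovery_key=None)
```

The last assertion is the trade-off the article describes: with ADP, there is no customer-support escape hatch, which is exactly why most users leave it off.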
Still, though, privacy-minded users like myself choose to enable Advanced Data Protection for added security. I’ll never be anywhere without my iPhone, and I’ve used the same passcode on it for years, so it’s burned into my muscle memory. If I were to lose my iPhone and my house with all of my Apple products in it burned down on the same day, I’d probably have bigger problems than being locked out of my Apple account. What’s more likely is that some hacker gets access to my Apple account credentials or designs a social engineering ploy to cajole Apple Support to reset my password — Advanced Data Protection shields against both plausible scenarios. And, perhaps most importantly, the government can’t subpoena Apple for any of my data. It’s not like I have anything to hide, but I don’t want the government to ever gain access to my private information. With President Trump’s FBI, subpoenas into political antagonists are about to become much more common, and I’m protected against that threat with Advanced Data Protection.
The British government isn’t happy with its citizens living a private life away from the government’s eyes, though — or, perhaps even worse, any citizen in any country having any semblance of privacy. Since when do British laws apply outside Great Britain (and Northern Ireland)? I’m confident Apple won’t back down to the British, just like it stood up against the FBI’s incursion after the San Bernardino terrorist attack, but I’m unsure how it’ll deal with the demand to subpoena every account worldwide. What right does Britain have to enforce its law on another country’s soil? That’s like a U.S. police officer going to England and arresting a kid for drinking at 19. If the British are fighting an international criminal scheme, that’s great — work with the countries the suspects are in and obtain a warrant through their federal law enforcement. If the foreign nation can’t get access to a user’s data because it’s encrypted, so be it, but just because one country wants access to its citizens’ data doesn’t mean encryption should be banned from the planet.
Apple won’t give Britain a back door into Advanced Data Protection — that’s impossible without tossing a secret encryption key to the government right before locking a user’s account down. But even if nothing happens — as I suspect it’ll go down — this is a dangerous precedent for a Western democracy. If Britain gets even a sliver of what it wants, it opens up the floodgates for the regulation-thirsty European Union and fascist, Elon Musk-led, lawless U.S. kleptocracy. The Trump administration openly and gleefully defies court orders with a direct constitutional precedent — who is to say it wouldn’t immediately demand Apple unlock millions of Apple accounts owned by Democrats because the British also got their way in?
Apple isn’t even allowed to discuss this dictatorship-esque coercion by the British government, and if it weren’t for leakers within either Apple or the Home Office, the public would never know about the incursion. That’s genuinely frightening. At a time when people’s lives are in danger due to a rogue Western political administration wreaking havoc on a country that used to paint itself as the arbiter of democracy, Britain is sending a message across the pond that being China-like is acceptable. This puts the data of millions of Europeans and Americans at risk. It tests the limits of government power in an unnerving, despicable way. Encryption is a fundamental human right, and when Western democracies eliminate people’s right to free expression, citizens should fight back with force. (And enable Advanced Data Protection, regardless of where you live.)
On the Advent of ‘Timeline Apps’
Federico Viticci, writing at MacStories:
I think both Tapestry and the new Reeder are exquisitely designed apps, for different reasons. I know that Tapestry’s colorful and opinionated design doesn’t work for everyone; personally, I dig the different colors for each connected service, am a big fan of the ‘Mini’ layout, and appreciate the multiple font options available. Most of all, however, I love that Tapestry can be extended with custom connectors built with standard web technologies – JavaScript and JSON – so that anyone who produces anything on the web can be connected to Tapestry. (The fact that MacStories’ own JSON feed is a default recommended source in Tapestry is just icing on the cake.) And did you know that The Iconfactory also created a developer tool to make your own Tapestry connectors?
My problem with timeline apps is that I struggle to understand their pitch as alternatives to browsing Mastodon and Bluesky (supported by both Tapestry and Reeder) when they don’t support key functionalities of those services such as posting, replying, reposting, or marking items as favorites.
Maybe it’s just me, but when I’m using a social media app, I want to have access to its full feature set and be able to respond to people or interact with posts. I want to browse my custom Bluesky feeds or post a Mastodon poll if I want to. Instead, both Tapestry and Reeder act as glorified readers for those social timelines. And I understand that perhaps that’s exactly what some people want! But until these apps can tap into Mastodon and Bluesky (and/or their decentralized protocols) to support interactions in addition to reading, I’d rather just use the main social media apps (or clients like Ivory).1 To an extent, the same applies for Reddit: if neither of these apps allow me to browse an entire subreddit or sort its posts by different criteria, what’s the point?
I really enjoyed Viticci’s piece, a link post to an article from David Pierce at The Verge covering The Iconfactory’s new Tapestry app. The concept of “timeline apps” (thanks to Pierce for the phrase) has been floating around in my head for a few months now since the release of the new Reeder, a subscription product that combines Bluesky, Mastodon, RSS — really simple syndication — podcasts, and more into one timeline. The new Reeder was such a departure from the previous RSS-only version that it required me to look at it from the perspective of someone who was new to RSS and couldn’t quite grok the point of it when social media already serves as an excellent, oftentimes personalized link aggregator. The chronological timeline-style nature of RSS makes it a convoluted solution for the vast majority of people, so Reeder is a perfect middle ground between chronological timelines and social media algorithms.
I really wanted to try the new Reeder, and I even subscribed to it for a month to give it a shot, abandoning my beloved NetNewsWire for a few days to see what it was like. I found myself less confused after my flirt with the idea but disheartened simultaneously. I really like Reeder and Tapestry — they’re gorgeous apps designed by talented independent developers with a knack for good design. Yet, I just have no place for them in my life. I use RSS to read the news in a chronological, unsorted format where I can pick and choose what I want to read. If I ever want to see what everyone else is reading, I can go to Bluesky or Threads, which are amalgamations of everything people I follow are into. To check what’s trending — or if I’m in a pinch and really need what’s important — I check a site like Techmeme or Political Wire by Taegan Goddard. Timeline apps don’t fulfill either of those needs well enough for me. They look like social media but aren’t as personalized as Bluesky or Threads.
And, as Viticci writes, if I see a social media post, I’ll probably want to like it or reply. Timeline apps are read-only, which makes sense from an RSS standpoint, but it waters down social media for me. In my eyes, the news and social media are related but separate media sources, and I appreciate viewing them discretely in their own bespoke apps. There’s a reason I avoid following news sources on social media, with the exception of Techmeme and The Verge, because I typically check social media before RSS and need critical news on my timeline. Timeline apps are sub-par social media clients because they’re designed to bridge the gap between feeds and stories. They’re meant for an audience accustomed to feeds and stories in one app.
Yet I keep coming back to timeline apps because I find them delightful. I don’t want RSS and social media to be in one, but I do want an RSS reader with a smidgen more organization than only folders. Ultimately, I do enjoy social media and don’t begrudge my time on it, unlike some other RSS users, and replicating that experience with hard news would be an interesting concept. Tapestry nails the user interface of a lightweight “catch-up-and-leave” app so well, which makes sense coming from the company that made Twitterrific, the ultimate app to look at while waiting on the toaster. To me, RSS is a sit-down experience where every article is meant to be opened and read, whereas social media often turns into mindless scrolling. There’s nothing bad about mindless scrolling in moderation, and Tapestry understands this.
When an item is tapped, Tapestry doesn’t just open the full article, unlike a traditional RSS client. It displays the headline, a hero image, and the description provided by the RSS feed. For instance, a New York Times article will display the author-written blurb at the top of the page. (If a description isn’t provided, the app clips the article’s text after about 500 words.) If I want, I can tap the article one more time to open it in the in-app browser, just like social media, but if I choose to save it for later or disregard it entirely, there’s no pressure on me to indicate so, i.e., there’s no read/unread marker. Tapestry is just a timeline of links and shorthand clips of text. It’s not meant to be an RSS reader, but it’s so much more than social media. It’s uncannily reminiscent of Google Reader and the heyday of short, link blogs.
I love Tapestry and Reeder so much. Reading requires at least some attention, but social media scrolling doesn’t because it removes the pressure of having to do something with what someone has just read. I guess I’m reading the news on social media, but it doesn’t feel like I’m reading down the list of everything going wrong in the same way RSS does. Tapestry is an RSS reader that addresses what’s occasionally my biggest gripe with RSS: how boring it gets. I find it such an amazing app for wasting time.
That’s also exactly why I can never find a use for it: I need RSS for my job. I find stories to write about using RSS; I don’t use social media for that. Tapestry can never negate my need for a proper, NetNewsWire-like RSS solution, and it’s not good enough to replace any of my plethora of social media apps. Plainly, I have no use for it. No matter how much joy it gives me, I can’t find a place to squeeze it in. I realize this is a shameless first-world problem — and believe me, I feel shame in writing about it — but it’s a problem I’ve been trying to solve for a few months. It’s not Tapestry or Reeder’s fault — it’s my fault for having a rigid media consumption diet impossible to break away from. Does it work for me? Yes. But I also suffer from shiny object syndrome, and the fresh, hot bits are way too enticing for me to ignore.
Even if you don’t plan on using it, give Tapestry a shot. It’s free in the App Store.
Elon Musk’s ‘Department of Government Efficiency’ Operation is a Coup
A large assortment of New York Times journalists, reporting Monday in a piece titled “Inside Musk’s Aggressive Incursion Into the Federal Government”:
In Elon Musk’s first two weeks in government, his lieutenants gained access to closely held financial and data systems, casting aside career officials who warned that they were defying protocols. They moved swiftly to shutter specific programs — and even an entire agency that had come into Mr. Musk’s cross hairs. They bombarded federal employees with messages suggesting they were lazy and encouraging them to leave their jobs.
Empowered by President Trump, Mr. Musk is waging a largely unchecked war against the federal bureaucracy — one that has already had far-reaching consequences.
Mr. Musk’s aggressive incursions into at least half a dozen government agencies have challenged congressional authority and potentially breached civil service protections.
Top officials at the Treasury Department and the U.S. Agency for International Development who objected to the actions of his representatives were swiftly pushed aside. And Mr. Musk’s efforts to shut down U.S.A.I.D., a key source of foreign assistance, have reverberated around the globe.
The reporters continue:
Since Mr. Trump’s inauguration, Mr. Musk and his allies have taken over the United States Digital Service, now renamed United States DOGE Service, which was established in 2014 to fix the federal government’s online services.
They have commandeered the federal government’s human resources department, the Office of Personnel Management.
They have gained access to the Treasury’s payment system — a powerful tool to monitor and potentially limit government spending.
Mr. Musk has also taken a keen interest in the federal government’s real estate portfolio, managed by the General Services Administration, moving to terminate leases. Internally, G.S.A. leaders have started to discuss eliminating as much as 50 percent of the agency’s budget, according to people familiar with the conversations.
Perhaps most significant, Mr. Musk has sought to dismantle U.S.A.I.D., the government’s lead agency for humanitarian aid and development assistance. Mr. Trump has already frozen foreign aid spending, but Mr. Musk has gone further.
USAID is now dead, with Secretary of State Marco Rubio assuming the top role in the organization. Thousands of foreign aid programs are completely gone, especially in South Africa, where Musk is from. Musk works as a “special government employee,” according to Karoline Leavitt, the White House press secretary who doesn’t even know the White House’s official position on some of the most consequential executive actions. (“The binder is in my head,” she says, referring to the binder of files press secretaries typically carry around.) This “special” access essentially gives him access to everything in Trump’s Washington, which is completely psychotic. “Special government employees” are only part-time consultants who offer experience from their own fields. They don’t have the right to fire whoever they want for fun on a Saturday.
Over the weekend, Musk waltzed into the Treasury Department and demanded access to the entire payment network the federal government uses to issue grants and loans. The entire thing. Treasury Department officials, who are career workers and not political appointees, and thus can’t be fired by the president or any of his advisers, refused Musk’s demand. They were placed on administrative leave the next morning. The same went for USAID, which Musk called a “criminal organization” for some ridiculous reason, probably because it gives money to anti-apartheid efforts in South Africa, which Musk, a noted neo-Nazi, staunchly rejects. Either way, both departments are now under Musk’s full control.
It’s worth noting why this is specifically a coup and not just a deranged, rogue administration hellbent on destroying the United States from the inside. (It is, but that’s not relevant here.) The White House — also known as the executive branch — has absolutely zero discretion over federal spending. None, nil, null, naught, nothing, nada. If Congress passes a law that tells the president to spend money on something, that money must be spent that way. (Trump’s refusal to obey this core tenet of the American government caused his first impeachment in late 2019.) Neither the president nor anyone who works for him can disobey a law Congress has passed. He can veto a budget Congress passes, but he can’t violate the law. Appropriations are laws like any other statute passed by the legislature. Here’s Article I, Section 9, Clause 7 of the Constitution:
No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law; and a regular Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time.
That clause isn’t ambiguous in the slightest. The laws are made by Congress, and no money can be drawn from the Treasury without its approval. When the president wants to, say, suspend arms sales or foreign aid to a country on suspicion that it has broken the law or conditions placed on that aid, he may do so, but he must notify Congress if he pauses sales for 90 days or longer. This is the rule for everything in the federal government: While the executive branch enforces the law, it does not make the law. By turning off the USAID spigot, which was authorized by Congress months before Trump made his return to the Oval Office and Musk got his fingers on the federal government, the Trump administration is in blatant violation of not only the law Congress passed which appropriates funds to USAID, but also the foundational document of this government. This is entirely unconstitutional, and it’s being undertaken by an unelected billionaire bureaucrat who hasn’t even signed the ethics papers required to be a government employee.
Musk, who has no qualification or right to be in the government whatsoever, came in with his billions and started firing people for no reason. Nobody has the right to fire these people other than career officials themselves. No political appointee can fire a career worker due to a law called the Civil Service Reform Act, passed in 1978. Civil workers are in a different class from political appointees associated with the administration, and Musk has no right to fire them or place them on leave. This is a coup, plain and simple — an unelected, unauthorized, unwelcome psychopath with a boatload of money is waging war against the United States and misusing tax money collected by Congress to enrich himself.
Musk has total, unfettered access to the coffers of the U.S. government. He can shut the whole thing down, change how money is distributed, or even worse, turn off programs he doesn’t like and use the remaining money to pay his companies. He controls the Treasury Department, for heaven’s sake. This is a brazen attack on the fundamental sovereignty of the United States. An immigrant from South Africa has infiltrated our country’s money supply and is using it to enrich himself without the knowledge or approval of the president, who’s too busy waging a trade war with Canada and Mexico to be bothered with anything Musk is doing. The people in charge of our country’s most important assets are 19-year-old college freshmen who named themselves after male genitalia on LinkedIn. (But saying that on X results in an immediate account suspension, which is illegal.) Here’s a story from some reporters at Wired, titled “A 25-Year-Old With Elon Musk Ties Has Direct Access to the Federal Payment System”:
A 25-year-old engineer named Marko Elez, who previously worked for two Elon Musk companies, has direct access to Treasury Department systems responsible for nearly all payments made by the US government, three sources tell WIRED.
Two of those sources say that Elez’s privileges include the ability not just to read but to write code on two of the most sensitive systems in the US government: The Payment Automation Manager (PAM) and Secure Payment System (SPS) at the Bureau of the Fiscal Service (BFS). Housed on a top-secret mainframe, these systems control, on a granular level, government payments that in their totality amount to more than a fifth of the US economy.
And here’s another story about how Musk’s lieutenants at DOGE have access to people’s Social Security numbers from Caleb Ecarma and Judd Legum at Musk Watch:
Several of Elon Musk’s associates installed at the Office of Personnel Management (OPM) have received unprecedented access to federal human resources databases containing sensitive personal information for millions of federal employees. According to two members of OPM staff with direct knowledge, the Musk team running OPM has the ability to extract information from databases that store medical histories, personally identifiable information, workplace evaluations, and other private data. The staffers spoke on the condition of anonymity because they were not authorized to speak publicly and feared professional retaliation. Musk Watch also reviewed internal OPM correspondence confirming that expansive access to the database was provided to Musk associates.
The arrangement presents acute privacy and security risks, one of the OPM staffers said.
Among the government outsiders granted entry to the OPM databases is University of California Berkeley student Akash Bobba, a software engineer who graduated high school less than three years ago. He previously interned at Meta and Palantir, a technology firm chaired by Musk-ally and fellow billionaire Peter Thiel. Edward Coristine, another 2022 high school graduate and former software engineering intern at Musk’s Neuralink, has also been given access to the databases.
This is a complete hostile takeover of the federal government, much like Musk’s haphazard acquisition of Twitter back in 2022, except this time involving the most important collection of individuals in the world, bar none. This isn’t a social network — this is people’s livelihoods. Tens of millions of people rely on food stamps and Medicare to survive. Hundreds of thousands of businesses — including Musk’s own — lobbied to elect favorable members of Congress to pass appropriations laws that would benefit their companies. Millions of Americans voted election after election to ensure their voices were heard. This is how a democracy works. Trump, and Musk even more so, are tearing down the fabric of American democracy. We live in an autocracy now controlled by an unelected billionaire who broke into the federal government in violation of every single foundational document this country was built upon.
I fully realize I sound insane writing this. How could the first and strongest democracy in the world break so quickly in just a few weeks? I feel crazy. I feel like a crank who’s lost their mind. I cope with that feeling by writing about my thoughts in a not-totally-rambling way. I hope everyone reading this finds a sliver of sanity and clings to it for dear life.
Apple Scraps 1st Iteration of AR Glasses in Spite of Meta’s Orion Demo
Mark Gurman, reporting for Bloomberg:
Apple Inc. has canceled a project to build advanced augmented reality glasses that would pair with its devices, marking the latest setback in its effort to create a headset that appeals to typical consumers.
The company shuttered the program this week, according to people with knowledge of the move. The now-canceled product would have looked like normal glasses but include built-in displays and require a connection to a Mac, said the people, who asked not to be identified because the work wasn’t public. An Apple representative declined to comment.
The project had been seen as a potential way forward after the weak introduction of the Apple Vision Pro, a $3,499 model that was too cumbersome and pricey to catch on with consumers. The hope was to produce something that everyday users could embrace, but finding the right technology — at the right cost — has proven to be a challenge…
The decision to wind down work on the N107 product followed an attempt to revamp the design, according to the people. The company had initially wanted the glasses to pair with an iPhone, but it ran into problems over how much processing power the handset could provide. It also affected the iPhone’s battery life. So the company shifted to an approach that required linking up with a Mac computer, which has faster processors and bigger batteries.
I was initially puzzled by this report until I read the paragraph about how the device was intended to be connected to an iPhone, similar to how Apple Vision Pro connects to an external battery pack. That product has been rumored for years, and I remember briefly touching on it in an article before Apple Vision Pro launched in 2023. Apparently, Apple’s design crew decided to make it Mac-dependent later in the process, which Gurman cites as one of the key reasons the project was canceled. In that April 2023 article, I wrote:
This product feels like a stepping-stone to the future that Apple is actually working on and believes in, which is AR glasses. That product has real potential — potential where it’s the only product you’ll have to carry and potential for ambient computing. Imagine glasses with Apple’s own, in-house LLM built-in that can guide you throughout your day, wherever you are — essentially, an Apple Watch turned up to 11.
Until a few days ago, I was still under the assumption that Apple Vision Pro was a mere stepping stone for an eventual AR product that ran visionOS but was connected to either an iPhone or an external compute puck — and I was correct until Apple decided to can the project. I don’t think Apple will ever discontinue the Apple Vision Pro line of virtual reality headsets because AR glasses inherently can’t be immersive, and Apple has already invested time and money — though perhaps not enough — into creating 3D, 180-degree immersive experiences for visionOS. They won’t disappear, and there will be multiple generations of Apple Vision Pro.
But AR glasses could be what the iPad was to the iPhone. Initially, the iPad ran iOS, just like the iPhone, and the two products largely functioned the same. They were just meant for different circumstances: the iPad is a sit-on-the-couch kind of lounge computer, whereas the iPhone is fundamentally an on-the-go internet communicator. They’re not mutually exclusive purchases. I could see a future where people own both a pair of Apple AR glasses and an Apple Vision Pro for home use — they would serve separate markets but run the same operating system.
If Gurman is to be believed — and I’ve been burned by not trusting his reports before — all of this has gone with the wind. Apple is no longer working on any imminent AR product whatsoever and is instead focusing its extended reality efforts solely on a low-cost Apple Vision product. Gurman, in typical fashion, doesn’t explicitly say that, but it’s understood that the Mac- or iPhone-connected AR glasses are a predecessor to an independent set of eyewear, à la the Meta Ray-Ban specs. Nobody believes Apple Vision Pro will tote the battery pack forever; it’s just a temporary measure to reduce the headset’s weight until Apple can figure out how to build that battery into the headset itself.
Similarly, I don’t think the AR glasses would have forever required being tethered to a Mac. Gurman likens Apple’s now-defunct project to the Xreal One, a set of clunky glasses that mirrors a Windows computer desktop in AR, but I think Apple’s product would’ve just relied on the powerful M-series processors of modern Macs to handle visionOS processing. It’s practically impossible to fit an M2-like chip in the frame of slim glasses, so Apple would have to outsource that computing somewhere to run a full-blown operating system. But while the challenges Gurman outlines are mostly real — it’s unintuitive, requires the purchase of a separate device, etc. — I think it would’ve been a good first step to ship by 2027.
Last September, Meta demonstrated its AR glasses prototype, called Orion, to select members of the media. It, too, offloaded computing to a little compute puck tethered to the glasses and also required users to strap on a highly engineered wristband to detect hand and finger movements even when out of the cameras’ view. I brushed it off as impractical because that’s largely true, but more importantly, I said it didn’t matter what Meta showed in a highly controlled media environment because Apple was already at work on a real product that would ship soon. Now, I can’t make the same point. Apple is, once again, behind, and if it doesn’t ship something by the end of the decade — recall we’re already more than halfway through — its winning streak is over. Apple Vision Pro is undeniably an embarrassing flop, and it needs to catch up quickly.
I’m not saying Apple doesn’t have any plan to make AR glasses ever in its future — as indicated by Sunday’s Gurman report in which he said Apple “explored making smart glasses that would rival the digital Ray-Bans offered by Meta” — but the first stage in that journey has been canceled. Apple needs to move quickly, not cancel projects that were already in motion. The Mac-connected AR glasses would’ve been a great middle point and something to ship in just a few years. But now, Apple has no plans for real hardware and is instead focusing its efforts on conceptual software. That’s a troubling development.
I’m quite frustrated with Apple as of late. Apple Intelligence is an abysmal disaster, making headlines for ridiculous summaries of simple news stories. Apple Vision Pro, its product of the decade, is too heavy and has zero compelling content for users. The iPad’s software is still lackluster, with no rumored improvements. The company has no plans for the future other than a midrange slim version of the next iPhone, and its chief executive is cozying up to fascists while staring down a 10 percent tariff on all Apple products.
Here’s Apple’s situation as of February 2025: iPhone prices will increase by at least 10 percent in September, the last leg of Apple Intelligence in iOS 18 is still not even in beta, the full-blown large language model is set to come out in spring 2026, iOS users are lambasting Apple Intelligence summaries for being utterly laughable, there’s no plan to update Apple Vision Pro this year or make up for a lack of content, and all of this is happening in a pivotal time for world politics where the president of the United States is starting a trade war with our country’s allies. Who knows what happens to the Chips and Science Act in a few months? What happens to Taiwan Semiconductor Manufacturing Company’s Arizona plant?
The only Apple product I truly find myself excited about is Apple TV+. The second seasons of “Shrinking” and “Severance” are incredible, and Apple TV+ storytellers and filmmakers are being nominated for Emmy awards left and right. By contrast, every other product category Apple competes in is a disaster with more hopelessness on the way. I’m not saying Apple is dying, but it needs to sort out a strategy for the next four years, after which Tim Cook, its chief executive, must retire to hell. Apple products will become more expensive in the coming months, and the company is falling behind in an already bad technology market.
Apple’s three main goals for the Trump administration should be artificial intelligence, AR, and geopolitics. Currently, it’s finding itself falling behind in all three.
DeepSeek, a Chinese AI Company, Crashes the U.S. Tech Stock Market
Samantha Rubin, reporting for CNBC:
Nvidia lost close to $600 billion in market cap on Monday, the biggest drop for any company on a single day in U.S. history.
The chipmaker’s stock price plummeted 17% to close at $118.58. It was Nvidia’s worst day on the market since March 16, 2020, which was early in the Covid pandemic. After Nvidia surpassed Apple last week to become the most valuable publicly traded company, the stock’s drop Monday led a 3.1% slide in the tech-heavy Nasdaq.
The sell-off was sparked by concerns that Chinese artificial intelligence lab DeepSeek is presenting increased competition in the global AI battle. In late December, DeepSeek unveiled a free, open-source large language model that it said took only two months and less than $6 million to build, using reduced-capability chips from Nvidia called H800s.
Nvidia’s graphics processing units, or GPUs, dominate the market for AI data center chips in the U.S., with tech giants such as Alphabet, Meta, and Amazon spending billions of dollars on the processors to train and run their AI models.
DeepSeek’s mobile app launched earlier this month and quickly rose to the No. 1 spot on the App Store in the United States. As John Gruber writes at Daring Fireball, it’s telling that this metric led to one of the biggest tech market crashes this decade. Notably, though, Apple is not on the list of tech stocks that dropped on Monday; its stock is up by over 7 percent. I think the reason why is quite comical yet probably true: Wall Street doesn’t see Apple as an AI company. It wouldn’t be wrong — Apple Intelligence is a buggy beta mess making headlines for producing inaccurate summaries of important news notifications. I digress.
The panic over DeepSeek is largely psychotic on the tech “analyst” crowd’s part. Silicon Valley tech bros always panic about this kind of thing. As Casey Newton writes for his Platformer newsletter, the two main camps in the AI wars — accelerationists (e/acc) and effective altruists (doomers) — each have something to gain from causing turmoil. Venture capitalists often throw billions of dollars into various other industries that would benefit from a full-on trade war (or armed conflict) with China, and AI doomers are in it to sell solutions to halt the production of new AI. So when the stock market crashes, it’s probably not because everyone is about to die or because AI just had its Sputnik moment.
I’m minimally concerned about DeepSeek’s impact on the American AI industry. The company’s latest model, R1, is free, open-source software and outperforms ChatGPT-4o, OpenAI’s finest. But that’s not a reason to worry, per se, because competition is always good and only pushes more innovation in capitalist countries. This isn’t a war yet, and I highly doubt it will turn into one anytime soon because China is significantly resource-limited. Nvidia’s stock went for a ride on Monday because Nvidia chips aren’t allowed anywhere near China per the Biden administration’s rules on chip exports — President Trump hasn’t rolled them back despite Nvidia asking him to — and investors who don’t know a thing about how neural processors work thought China did something incredible.
China, in fact, didn’t do anything worthy of awe. It did what it’s best at: propagandizing Americans. This stock market crash was Beijing’s plan all along. It worked for years post-ChatGPT to build a great large language model that would run on its own cheap chips and wow foreigners, causing them to panic-dump Nvidia stock long enough for the market to die and a new president to lift export restrictions in an attempt to salvage the economy. This is still a possibility because Trump is an oligarch-loving moron who knows less about AI chips than a second grader and would do anything to please billionaire donors, but the goal isn’t to outdo the United States in AI just yet. ChatGPT o3-mini, OpenAI’s next model rumored to be the best yet, isn’t even out until February at the earliest.
DeepSeek doesn’t have any juice. It can’t answer simple questions about President Xi Jinping of China or the Tiananmen Square massacre, nor does it even allow sign-ups anymore because the Chinese state propagandists funding the project don’t know how to keep up with the massive influx of new users. DeepSeek’s models aren’t AI models — they’re state-funded media made to grab attention and headlines. (I guess it’s working.) DeepSeek doesn’t have to worry about pleasing investors or paying employees because the Chinese state takes care of the bill. It’s not a real company doing genuine business. I’m unsurprised Wall Street hasn’t caught on because only the world’s delinquents belong there, but it’s painfully obvious: no model of this caliber can run for free. Nothing is free except when it’s subsidized by one of the world’s largest spreaders of propaganda.
I’m sure the stock market will bounce back in a few days. But rich oligarchs, fearmongers, and Chinese propagandists will continue to have an uncanny amount of influence on our politics in the coming years. This is just Act 1 of that narrative.
OpenAI Launches Operator, a New AI Agent That Can Control a Web Browser
OpenAI, in a blog post last week:
Today we’re releasing Operator, an agent that can go to the web to perform tasks for you. Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling. It is currently a research preview, meaning it has limitations and will evolve based on user feedback. Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it.
Operator can be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes. The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses.
To ensure a safe and iterative rollout, we are starting small. Starting today, Operator is available to Pro users in the U.S. at operator.chatgpt.com . This research preview allows us to learn from our users and the broader ecosystem, refining and improving as we go. Our plan is to expand to Plus, Team, and Enterprise users and integrate these capabilities into ChatGPT in the future.
One thing I noticed while watching OpenAI’s live stream announcing Operator is that the company truly thinks this is the next paradigm of artificial intelligence. I agree, to a certain extent, since the emphasis seems to be less on the large language models powering some of the optical character recognition and reasoning required to make decisions — i.e., where to click, how to click, and where to enter text, essentially how to use a computer — and more on the synchrony between the vision and reasoning aspects of the model. Operator needs to understand not only how a computer works but also the relationship between concepts, and that’s only possible with a natively multimodal model. It was obvious this was where ChatGPT 4o was headed when it was announced last spring.
To achieve true multimodality, OpenAI developed a new model that works with 4o: Computer-Using Agent. (Fire whoever named this.) I’m at least a little surprised o1 isn’t involved at all in this project, as I’d think the LLM would require advanced reasoning capabilities to make decisions. But while o1 literally writes its thoughts out to arrive at an answer, CUA reasons through reinforcement learning — a technique in machine learning that trains a computer on how to do something by correcting its mistakes along the way and giving it rewards. It seems primitive — like training a dog by giving it treats — but it works, since humans also tend to perform better when given rewards. It’s reflected in the training data: when people compliment each other, the resulting outcome is almost always more favorable.
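As an illustration of that reward-driven loop (a generic reinforcement-learning toy, not OpenAI's actual training setup), here is a minimal bandit learner that improves purely by trying actions and receiving rewards:

```python
import random

def train_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Learn the value of each action by trial, error, and reward:
    the core reinforcement-learning loop in its simplest form."""
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)   # estimated payoff of each action
    counts = [0] * len(reward_probs)
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            action = rng.randrange(len(reward_probs))
        else:
            action = max(range(len(values)), key=values.__getitem__)
        # The environment hands out a "treat" with some probability.
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # Nudge the estimate toward the observed reward (incremental mean).
        values[action] += (reward - values[action]) / counts[action]
    return values

# With payoffs of 20%, 50%, and 80%, the agent learns that the third
# action is the one worth taking.
learned = train_bandit([0.2, 0.5, 0.8])
```

No mistakes are ever labeled explicitly; the agent simply drifts toward whatever got rewarded, which is the point of the dog-treat analogy.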
Instead of thinking through a problem by writing detailed explanations for each step of the process (o1’s approach), CUA does it natively since it’s been pre-trained on how to use a computer. The LLM does the work of reasoning through what a user means and how to get there, and the reinforcement-trained parts take over the rest. It’s a clever mechanism that perfectly articulates why AI scaling will never stop: LLMs aren’t the be-all and end-all of AI. LLMs are extremely proficient at manipulating prose because they’ve been trained on so much of it. It’s like how traditional computers are fantastic calculators — they think in numbers and thus can compute even massive ones with ease. LLMs think in words (tokens, for the pedants) and produce them best. But when they’re given numbers — or worse, images — they fail spectacularly, because they can’t feed a picture or a calculation into a black box and get another one back the way they do with language. They need to convert that picture into words they can understand first.
Words make up a substantial portion of humans’ communication and work. Writing emails, sending text messages, producing reports and slides, and reading the news all involve words and, for the most part, only words. But humans also work with their hands and use their eyes to help. That’s why graphical user interfaces came to be — not everyone knows how a command line works, but everyone can grok the basic idea that files are in folders. GUIs are a digital metaphor for the real world, and the world doesn’t use words — it’s heavily reliant on understanding the visual relationship between physical objects through our five senses. LLMs are atrocious at this, but vision models have potential.
Using a computer, a GUI, requires proficiency in both visual and text processing, and that’s where CUA shines. But OpenAI’s spiel about how Operator is the best thing since sliced bread ignores that using a literal GUI computer made for people isn’t the best way to do most things online. GUIs were made because they’re the easiest for humans — “GUIs are a metaphor for the real world” — to use, but they’re nowhere near the most efficient way for computers to interact. Application programming interfaces are the way computers speak to other, unrelated machines, and they’re the best way to craft agentic experiences. If Operator could make an API call to Trivago, the travel-booking website, instead of navigating to its website and clicking buttons as a person would, it could probably set up a reservation in seconds. It would still require AI to choose what API calls to make based on a user’s request — weights! — but it wouldn’t require the technological prowess Operator possesses.
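To make the contrast concrete, here is a sketch of what that structured request might look like; the field names and request shape are hypothetical, since no such public Trivago API is confirmed:

```python
from dataclasses import dataclass

# Hypothetical request shape: this is only a sketch of why one structured
# call beats GUI automation for agents, not a real Trivago endpoint.
@dataclass
class BookingQuery:
    city: str
    check_in: str   # ISO 8601 dates, e.g. "2025-03-01"
    check_out: str
    guests: int

def to_api_params(query: BookingQuery) -> dict:
    """One structured payload replaces every click, scroll, and form fill
    an Operator-style agent would otherwise perform in a browser."""
    return {
        "destination": query.city,
        "check_in": query.check_in,
        "check_out": query.check_out,
        "guests": query.guests,
    }

params = to_api_params(BookingQuery("Lisbon", "2025-03-01", "2025-03-04", 2))
```

The model's job then shrinks to filling these few fields from the user's request, rather than visually parsing and operating a webpage built for human eyes.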
But, alas, Trivago doesn’t offer a public API, and neither do hundreds of thousands of internet services humans access online. So we’re stuck with Operator doing computer-like things in a human-centric world. The idea reminds me of humanoid robots, an imperfect solution to completing real-world tasks. Ford doesn’t employ thousands of humanoid robots to build cars on an assembly line — it builds a robot for each part of the car-making process. Operator is a humanoid robot working an assembly line, which makes me believe its existence will be short-lived. To do most things on the web, Operator needs to pass Captcha tests designed to keep robots off the internet. It passes them with ease, which is impressive on its own. But why are we implementing Captchas just for robots to pass them? Robots passing Captchas aren’t the future, and neither is Operator, which is a (crucial) stepping-stone to something much, much bigger.
Maybe that explains the ridiculous, comical $200-a-month price for entry, which is too high for even me to be interested.
What’s Old is Thin Again
Wes Davis, reporting from Samsung Galaxy Unpacked for The Verge:
Samsung just teased the Galaxy S25 Edge — the new ultra-slim entry into the Galaxy S25 lineup. The phone isn’t out yet, and Samsung hasn’t provided any details, but now we know it’s real. And we have pictures.
Like pretty much every phone, it’s a thin silver slab. It’s got two cameras on the back, rather than the three cameras you’d get with other S25 phones. The Edge is rumored to measure just 6.4mm thick, but my colleagues Allison Johnson and Vjeran Pavic, who are on the ground at Galaxy Unpacked and took the below photos, weren’t able to actually hold or measure the device to confirm.
We’re trying to get closer so we can show perspective, but the place is mobbed with people. There’s a lot of excitement about this phone. By comparison, though, the regular Galaxy S25 is 7.2mm thick. So, it’s… even thinner.
So apparently, 2025’s overarching phone theme is thinness, which reminds me of 2014 when Apple debuted the easily-bendable-to-the-point-of-ridicule iPhone 6 Plus. News outlets (Bloomberg) have been pretty relentless in saying Samsung “beat Apple to the market” with the Galaxy S25 Edge, but the phone isn’t even out yet. Apple is rumored to announce a slim iPhone later this year at its annual September event, and Samsung provided no release date for its thin phone. If I had to guess, that’s because it doesn’t exist.
Each of The Verge’s shots in the article shows the phone with a beige, swirly (figure eight?) wallpaper and nothing else. There is no software interface or any other identifiable features, probably because it doesn’t even have a processor to run software on. From what I can tell, it’s just a plastic replica with a screen attached and connected to some computer in the back. The S25 Edge isn’t real — it’s just a publicity stunt to “beat Apple at its own game.” Research and development on the product has presumably started, but Samsung doesn’t have a single functioning prototype of it yet. Typical for Samsung.
Samsung always follows in Apple’s footsteps, almost to its own chagrin. The company rarely ever has new features that haven’t been blatantly copied off of some other company’s flagship product. So this time, it wanted the news cycle for itself — in other words, the marketing department took the reins for a bit. There’s always been daylight between the Samsung engineering teams — who make the second-most popular smartphones in the world — and the marketing team, whose focus is solely on making a fool of itself every September. When Samsung ridiculed Apple for removing the power adapter from iPhone 12’s box, it did the same just a few months later with the Galaxy S21 line of phones. When it teased Apple for removing the headphone jack in 2016, it followed right along just a few years later. It’s constant.
Samsung’s marketing team is slowly coming around to realizing the world has caught on. I can’t believe it took so long after Samsung literally copied the orange accent of the Apple Watch Ultra on its high-end Galaxy Watch Ultra last year. So this time, thin is in, and Samsung quickly whipped up a 3D-printed block of metallic-looking plastic to get ahead of the curve. It’s so unsurprising and yet so shockingly blatant — how the world’s No. 1 smartphone manufacturer can, time and time again, get away with copying Apple to such a high degree. I can’t tell if it’s overconfidence, malicious intent, or both.
About the name: I would bet every dollar to my name that Samsung’s marketing executives mulled over calling it the S25 Air, mirroring Apple’s rumored name for its thin phone, but (rightfully) decided against it for trademark reasons. In haste, they came up with “Galaxy S25 Edge,” borrowing the “edge” moniker from the “waterfall edge”-style S7. “Edge” used to mean something — users would be able to swipe from the side of the phone up toward the main display to invoke a control panel of sorts, similar to the new Control Center in iOS 18, for easy access to certain apps. Samsung ditched the Edge a few years ago because people hated it, and it doesn’t look like the S25 Edge has one either. But, alas, the phone is still called the Edge. Never change, Samsung.
Apple Modifies Notification Summaries in iOS 18.3; Now Enabled by Default
Chance Miller, reporting last week for 9to5Mac:
Apple released iOS 18.3 beta 3 to developers this afternoon. The update includes a handful of changes to the notification summaries feature of Apple Intelligence.
The changes come after complaints from news outlets such as the BBC. Two weeks ago, Apple promised that a future software update would “further clarify when the text being displayed is summarization provided by Apple Intelligence.”
Here are the changes included in iOS 18.3 for Apple Intelligence notification summaries:
- When you enable notification summaries, iOS 18.3 will make it clearer that the feature – like all Apple Intelligence features – is a beta.
- You can now disable notification summaries for an app directly from the Lock Screen or Notification Center by swiping, tapping “Options,” then choosing the “Turn Off Summaries” option.
- On the Lock Screen, notification summaries now use italicized text to better distinguish them from normal notifications.
- In the Settings app, Apple now warns users that notification summaries “may contain errors.”
Regarding that note about Apple Intelligence being a beta, here are Apple’s official iOS 18.3 release notes:
For users new or upgrading to iOS 18.3, Apple Intelligence will be enabled automatically during iPhone onboarding. Users will have access to Apple Intelligence features after setting up their devices. To disable Apple Intelligence, users will need to navigate to the Apple Intelligence & Siri Settings pane and turn off the Apple Intelligence toggle. This will disable Apple Intelligence features on their device.
So, in iOS 18.3, Apple Intelligence is no longer in beta.1 But I don’t think the distinction really matters much at all because Apple’s marketing wouldn’t lead anyone to believe Apple Intelligence is anything but a well-built, reliable piece of software. Here on Earth, the truth is far from Apple’s rosy picture painted on billboards across America. Beta or not, Apple Intelligence’s notification summaries are comically unreliable, factually incorrect, and straight-up grammatically awkward (see the headline of this post for an example).
The British Broadcasting Corporation complained to Apple over the holidays because Apple Intelligence incorrectly summarized a BBC headline about Luigi Mangione, the suspect in the UnitedHealthcare chief executive’s killing. The software falsely claimed Mangione had shot himself, and it only displayed a small glyph to the right of the blurb indicating that it had been written by artificial intelligence; the BBC app’s logo, however, was prominently displayed next to it, leading readers to believe that the fabricated summary was really from the BBC.
Apple’s response to the debacle was that Apple Intelligence was in beta, but by making it an opt-out feature — i.e., enabling it by default for the millions of iPhone 16 users in supported countries — Apple removed that (debatable) cover it could hide behind. Apple Intelligence isn’t in beta, and it hasn’t been for months — slapping a “Beta” label on it in Settings doesn’t change the fact that it’s heavily advertised when setting up a new compatible iPhone. Removing it further negates any possible excuse for Apple Intelligence summaries not being completely accurate.
It’s not like large language models are bad at summaries. In fact, they’re fantastic at them because LLMs are trained to predict the next most logical word in a sentence. When given a snippet of text, they boil it down to some weights, find what other weights correspond to the numbers originally given, and spit out a summary. This is what LLMs are best at. As an experiment, I tried running some botched Apple Intelligence summaries through ChatGPT — both the less-expensive, faster model and the latest 4o one — just to see how a reputable model would do, and ChatGPT aced the test. Its summaries were reliable, short, and grammatically correct.
I’d love to look at the prompt Apple is feeding its so-called foundation models before adding the notification’s content. I presume it’s in some organized data format, not plain text, but that should be fine for a model specifically trained on thousands of summaries. Even low-quality models fare well in summarization tests because this isn’t too difficult of a task for an LLM. I believe Apple’s models — no matter how low-quality they may be to run quickly enough so as not to create a delay between when a notification is sent from a server and when it’s displayed on a user’s device — aren’t what cause Apple Intelligence’s downright disturbing summaries.
The model’s context alters its ability to summarize a notification significantly. For instance, this is how I’ve been prompting ChatGPT to create notification summaries:
Your job is to summarize notifications. A user has received multiple breaking news notifications from The New York Times app. The first one is from 12:56 p.m. and reads, “Eighteen states sued to block an executive order that seeks to deny citizenship to babies born to unauthorized immigrants in the United States.” The latest one is from 4:29 p.m. and reads, “Pete Hegseth’s former sister-in-law made a sworn statement to senators that the secretary of defense nominee was abusive toward his second wife.” Summarize these notifications, with the most importance given to the newest notification, in a maximum of 20 words.
ChatGPT responded with this:
Defense nominee accused of abuse; 18 states challenge executive order denying citizenship to children of unauthorized immigrants.
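A prompt like the one above can be assembled mechanically from the raw notifications; here is a rough sketch of such a helper (the function and its phrasing are my own, not Apple's or OpenAI's):

```python
def build_summary_prompt(app_name, notifications, word_limit=20):
    """Build a summarization prompt from (time, text) pairs, oldest first,
    asking the model to weight the newest notification most heavily."""
    parts = [
        "Your job is to summarize notifications. "
        f"A user has received multiple breaking news notifications from {app_name}."
    ]
    for i, (when, text) in enumerate(notifications):
        if i == 0:
            label = "The first one"
        elif i == len(notifications) - 1:
            label = "The latest one"
        else:
            label = "The next one"
        parts.append(f'{label} is from {when} and reads, "{text}"')
    parts.append(
        "Summarize these notifications, with the most importance given to "
        f"the newest notification, in a maximum of {word_limit} words."
    )
    return " ".join(parts)

prompt = build_summary_prompt(
    "The New York Times app",
    [("12:56 p.m.", "Eighteen states sued to block an executive order."),
     ("4:29 p.m.", "Pete Hegseth's former sister-in-law made a sworn statement to senators.")],
)
```

The important part is that the timestamps and the recency instruction travel with the text; strip those, and the model has no basis for weighting one notification over another.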
I wish I could see what Apple Intelligence would’ve cooked up, but I can’t since The New York Times is a news app, and Apple Intelligence summaries are now disabled for them (temporarily, according to Apple) in iOS 18.3. (This is yet another update to address the BBC’s concerns.) Either way, after months of using Apple Intelligence on all my Apple devices, I’m certain it wouldn’t do even half as well as ChatGPT.
Apple Intelligence struggles with two main categories of notifications: short ones that don’t need summarizing and threads of long, detail-heavy notifications. When presented with a short notification, Apple Intelligence, like any other LLM, just makes up information to fill its character limit. (You can see this in an example Miller posted on Bluesky.) When the software is given dozens of notifications from different times with plentiful details, however, it doesn’t understand the contextual difference between a notification sent two hours ago and one sent a minute prior.
This is most noticeable in delivery notifications, where the status of an order changes with each notification. Apple Intelligence doesn’t know how to process this, and its insistence on using semicolons to separate notifications into distinct parts creates nonsensical, useless summaries. For instance, three notifications telling a user that their order is about to arrive, that it’s here, and that they should tip after delivery turn into one sloppy mess: “Order on the way; delivered; rate and tip.”
LLMs speak English well, and with a smidgen of context, iOS could do a much better job. I — a human who writes for a living — would discard the “order on the way” message entirely and summarize the notifications by writing: “Your order was delivered at [time]. Rate and tip.” There’s no need for semicolons, but because summaries don’t display when each individual notification was sent (tapping the summary expands them all but dismisses it), a timestamp could be helpful. If given the time, context, app, and notification title, Apple Intelligence could do this in just a few seconds.
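To make the idea concrete, here is a minimal sketch of the preprocessing step described above: attach each notification’s timestamp and tell the model to weight recency, so stale status updates like “order on the way” get discounted. This is purely illustrative — the function name, prompt wording, and app name are my own assumptions, not Apple’s actual pipeline or any real API.

```python
from datetime import datetime

def build_summary_prompt(app_name, notifications):
    """Assemble a context-rich summarization prompt from (timestamp, text) pairs.

    Later notifications supersede earlier ones, so the prompt instructs the
    model to prioritize the newest one -- the fix for the
    "Order on the way; delivered; rate and tip" failure mode.
    """
    # Sort chronologically so "newest" is unambiguous to the model.
    ordered = sorted(notifications, key=lambda n: n[0])
    lines = [f"- [{ts.strftime('%H:%M')}] {text}" for ts, text in ordered]
    return (
        f"Summarize these notifications from the {app_name} app in at most "
        "20 words. Give the most weight to the newest notification; earlier "
        "ones may be outdated status updates.\n" + "\n".join(lines)
    )

# Hypothetical delivery-app thread matching the example in the text.
prompt = build_summary_prompt(
    "DoorDash",
    [
        (datetime(2025, 1, 20, 18, 41), "Your order is on the way."),
        (datetime(2025, 1, 20, 18, 57), "Your order has been delivered."),
        (datetime(2025, 1, 20, 19, 2), "Enjoyed your delivery? Rate and tip."),
    ],
)
print(prompt)
```

With timestamps in hand, a model has everything it needs to produce “Your order was delivered at 18:57. Rate and tip.” instead of a semicolon-chained mess.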
For now, Apple Intelligence summaries aren’t even remotely ready for prime time. I understand the frustration within the company — it needs to iterate to get ahead of OpenAI and Google, and it needs to do so quickly — but shipping incorrect notifications to millions of people is a terrible way of achieving strategic goals. People’s iPhones are lying to them, and Apple can’t even accept minimal fault for its faulty software. The italicized text doesn’t make it clear that a summary is generated with AI — it just looks like sloppy, out-of-place design. Does Apple use italics in any other part of the software? Perhaps that’s why they were implemented here, but they look awful and relay little to no information to anyone who doesn’t already know that italics mean Apple Intelligence.
Instead, I recommend Apple replace the app icon with an Apple Intelligence logo and shrink the app’s icon into the lower-left corner, almost like iMessage notifications, where the Messages app’s icon is displayed in the corner of a contact’s profile picture. Ultimately, the content displayed on the screen is from Apple Intelligence, not whatever app sent the notification, and that should be obvious. If Apple doesn’t like putting its name on these summaries, perhaps it should reflect on why it’s so hesitant. Is it not confident in its software?
One more frustration: Apple Intelligence must stop summarizing spam text notifications. I got one about a toll I allegedly forgot to pay from a random iCloud email address, and Apple Intelligence perfectly summarized it — threat and all. People have asked me previously how I expect AI to detect a scam message, which is an insane question. ChatGPT has the world’s knowledge compacted into one text generation machine, and to think an LLM can’t use that knowledge to detect a scam and choose not to summarize it is ridiculous.
People have an inherent trust in Apple’s products. If Apple summarizes a notification incorrectly — or even worse, marks a scam email as a “priority” in the Mail app — people are likely to believe that. “Well, Apple said it’s real, so it must be.” We’ve been teaching people for decades to check if an email or text is really from Apple, Google, the bank, etc., and these summaries are from Apple. Why shouldn’t users trust them? I brought up this same point when Google told its users to put glue on their pizzas last year: If a company has built its reputation around being an arbiter of facts, why is it suddenly acceptable to forgo the truth in favor of shoddy technology?
-
I’m catching flak for this since Apple Intelligence is still labeled as a beta in iOS 18.3. Here’s the dictionary definition for the phrase “beta test”: “a trial of… software… in the final stages of its development carried out by a party unconnected with its development.” Apple Intelligence is a now-shipping-by-default feature of iOS. It’s not a trial by any definition of the word “trial.” People aren’t trying anything; they cannot make the choice to opt in. No matter what Apple calls it, Apple Intelligence is no longer in beta. ↩︎
TikTok’s Temporary State of Limbo
Elizabeth Schulze, Devin Dwyer, and Steven Portnoy, reporting Thursday evening for ABC News:
The Biden administration doesn’t plan to take action that forces TikTok to immediately go dark for U.S. users on Sunday, an administration official told ABC News.
TikTok could still proactively choose to shut itself down that day — a move intended to send a clear message to the 170 million people it says use the app each month about the wide-ranging impact of the ban.
But the Biden administration is now signaling it won’t enforce the law that goes into effect one day before the president leaves office.
The TikTok and ByteDance ban law is set to go into effect on January 19, just a day before President-elect Donald Trump’s inauguration, so the decision not to enforce it for one day appears to be a way for President Biden to deflect blame onto the new administration. The president-elect submitted a friend-of-the-court brief to the Supreme Court a week ago asking the court to issue a stay on the law before the Trump administration takes control, but it’s unclear if the high court will capitulate to Trump’s request — the court’s website says decisions are expected to be issued Friday at 10 a.m., so it might become clear then.
But based on oral arguments last week, the situation doesn’t look good for TikTok. Before Biden’s plan was reported Thursday, I was entirely certain TikTok would be unavailable in the United States for at least Sunday due to a memorandum from the company stating it would shut down operations preemptively a day before the ban is set to take place, including for existing users. (The law only states Apple and Google must remove adversary-owned apps from their app stores; it gives no directions to TikTok directly.) Now, TikTok seems to be in a temporary, weekend-long state of limbo. The company could stick to its plan and take the app offline on Sunday regardless of Biden’s intentions because it doesn’t want to break a law written by Congress, or it could scrap the idea and place its hopes and dreams in Trump’s hands.
I wrote last April, when the law was passed, that I found the probability of TikTok being banned “still thoroughly unlikely” because I thought Biden would win the election. I maintained that prediction (about TikTok, anyway) internally through the election campaign, but now that Trump is the next president, I’m really unsure. Trump is a very unpredictable politician with no clear sense of direction or policy, and he could suddenly choose to enforce the law from Day 1 to act tough on China. His amicus brief could just be an attempt to dupe China into thinking it has a friendly man on the inside, or he could be entirely serious after attributing part of his electoral success to TikTok. All bets are off in Trump’s second term, and I reckon TikTok is fully conscious of that.
By defying a law passed by Congress on the strength of nothing more than verbal promises from an outgoing president — and an incoming rabble-rouser — TikTok would be taking an extraordinary risk in a country whose government has never been kind to it. That’s why my personal take is that TikTok will voluntarily summon some scare screens this weekend, encouraging users to lambast their lawmakers and disregarding Biden’s vague, politically motivated promise. That prediction could change in mere hours based on what Trump and TikTok say in a game of press releases, but I think it’s sensible for now. TikTok was betting on the Supreme Court giving it a reprieve up until last week, when oral arguments seemed to indicate the justices were firmly on the government’s side, so now its strategy — from what I can tell — appears to be to work out some deal with Trump.
As I wrote about Meta’s week of chaos, the only way to do business in America under Trump is to bend the knee and kiss the ring. Shou Chew, TikTok’s chief executive, appears to be doing just that — he’s scheduled to be seated in a position of honor alongside Elon Musk and Mark Zuckerberg, two other social media executives vying for Trump’s blessing. Earlier last year, I firmly believed TikTok’s fate lay in the courts; now, the company’s bets are all on Trump 2.0.
I would love for my April prediction to be proven correct — that TikTok never really gets banned. But in my defense, it was made at a very different time in American politics. Biden still hadn’t dropped out of the race, First Amendment lawyers all believed TikTok had a case in front of the Supreme Court, and Democrats still had a chance to control both houses of Congress. Anything could’ve happened on the campaign trail, and the law could’ve been moot right after November. It’s still my firm belief that if Vice President Kamala Harris won the election, she would’ve gotten Biden to issue an extension for TikTok’s divestment and then probably killed the law in springtime budget negotiations. But, alas, that future never came true, and chances are, TikTok will choose to voluntarily take itself offline in just a few days.
But on that last point, Hank Green, a famous YouTuber and TikTok creator, (correctly) wondered on Bluesky earlier Thursday why TikTok would, of its own volition, throw its creators under the bus when it could still run the app for the hundreds of millions of Americans who already have TikTok installed from before the ban. The answer is straightforward: TikTok is a psychological operation from the Chinese government to wreak havoc in American politics. TikTok wants its users to get riled up and effectively play defense for the Chinese Communist Party since none of the hundreds of millions of U.S. TikTok users have to register as foreign lobbyists. It wants to actively encourage its users to make life hell for American politicians. It’s a brilliant strategy. Here’s what I wrote about this information war in April:
Naturally, if TikTok vanishes in a year — a prospect that I think is still thoroughly unlikely — Americans will solely place the blame on their government, not on TikTok or China. And that point of contention between Americans and their government is exactly the reason why China doesn’t want to divest TikTok. The Chinese government wants power and strength; it wants to change the way Americans perceive it across the Pacific. This bill just gave China a brand-new, effective strategy. Nice work, Washington — you’ve been outsmarted by Beijing again.
Because the U.S. government is so comically useless that it can’t even write a national data privacy law, China won yet another part of this communication war. The biggest threat to the United States is not China, Russia, North Korea, or Iran — it’s the half of this country that refuses to participate in any governance whatsoever for its belief in strictly reactionary politics. Millions of Americans are falling prey to literal Chinese propaganda on Red Note (Mandarin Chinese: Xiaohongshu) — a Chinese-sanctioned version of TikTok where fan cams of Chinese police officers beating up civilians abound and the search term “Tiananmen Square” is banned — because the U.S. government doesn’t understand how to write laws its citizens are interested in obeying.
The surge in traffic to Red Note can’t just be attributed to Western tankies being some of the most imbecilic human specimens on the planet. The United States, the stalwart of capitalism around the globe, is equally responsible.
Mark Zuckerberg’s Week of Being an Insecure Opportunist
Meta’s virtue signaler-in-chief has lots to say
Mark Zuckerberg, Meta’s founder, posted a long thread on Meta’s Twitter copycat, Threads, about updates to Meta’s content moderation policy, beginning a busy week for Meta employees and users alike. Here are my thoughts on what he said.
It’s time to get back to our roots around free expression and giving people voice on our platforms.
Great heavens.
1/ Replace fact-checkers with Community Notes, starting in the US.
As many others have said, I have never seen Meta fact-check posts that truly deserved fact-checking. It put a label on my thread saying Trump would win the election after the failed assassination attempt in Butler, Pennsylvania, but I’ve never seen a fact check implemented where it mattered. Community Notes, on the other hand, is phenomenal — albeit a stolen idea from Twitter’s Birdwatch, now X’s Community Notes. But the Zuckerberg of four years ago wouldn’t have decided to scrap fact-checking entirely — his instinct would’ve instead been to double down and improve Meta’s machine learning to tag bad posts automatically. Meta is a technology company, and Zuckerberg has historically solved even its biggest social issues with more technology. Going full natural-selection, “every man for himself” mode rings alarm bells.
Meta’s platforms suffer from severe misinformation, though probably not worse than the cesspool that is X. Facebook is inundated with some of the worst racism, sexism, misogyny, and hateful speech that consistently uses fake, fabricated information as “evidence” for its claims. President Biden’s administration admonished Meta — then Facebook — in 2021 for spreading vaccine misinformation; the president said the company was “killing people.” Twitter proactively removed most vaccine misinformation in 2021, but Meta sat on its hands until the Biden administration rang it up and asked it to take the content down, as it interfered with a crucial component of the government’s pandemic response. (More on this later.)
2/ Simplify our content policies and remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse.
It’s hard to tell what Zuckerberg means from just this post alone, but Casey Newton at Platformer describes the changes well:
For example, the new policy now allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
So in addition to being able to call gay people insane on Facebook, you can now also say that gay people don’t belong in the military, or that trans people shouldn’t be able to use the bathroom of their choice, or blame COVID-19 on Chinese people, according to this round-up in Wired. (You can also now call women household objects and property, per CNN.) The company also (why not?) removed a sentence from its policy explaining that hateful speech can “promote offline violence.”
So, “out of touch with mainstream discourse” directly translates to being allowed to say “women are household objects.” Here’s an experiment for Zuckerberg, who has a wife and three daughters: Go to the middle of Fifth Avenue and shout, “Women are household slaves!” He’ll be punched to death, and that’ll be the end of his tenure as the world’s second-most annoying billionaire. But on Facebook, such speech is sanctioned by the platform owner — you might even be promoted for it because Zuckerberg seems keen on bringing more “masculine energy” to his company. That’s not “mainstream discourse”; it’s flat-out misogyny.
This is where it became apparent to me that Zuckerberg’s new speech policy — which, according to The New York Times, he whipped up in weeks without consulting his staff after a retreat to Mar-a-Lago, President-elect Donald Trump’s home — is meant to be awful. It was engineered to be racist, sexist, and homophobic. It wasn’t created in the interest of free speech; it’s a capitulation to Trump and his supporters. The relationship between the president-elect and Zuckerberg has been tenuous, to put it lightly, but the new content policy is designed to repair it.
Trump has threatened Zuckerberg with jail time on numerous occasions for donating millions of dollars to a non-profit voting initiative in 2020 to help people cast ballots during the pandemic. (Republicans have called the program “Zuckerbucks” and have ripped into it on every possible occasion.) Facebook deplatformed him after his coup attempt on January 6, 2021, after he spread misinformation about the election results that year, and that enraged Trump, who vowed to go after “Big Tech” companies in his second term. Trump now has the power to ruin Meta’s business, and Zuckerberg wants to be on his good side after noticing how Elon Musk did the same after his acquisition of Twitter. The “Make America Great Again” crowd values transphobia and homophobia like no other virtue, so the best way to virtue signal1 to the incoming administration is to stand behind the systemic hatred of vulnerable people.
I wouldn’t consider Zuckerberg a right-winger; I just think he’s a nasty, good-for-nothing grifter. He’s an opportunist at heart, as perfectly illustrated by Tim Sweeney, Epic Games’ chief executive, in perhaps the best the-worst-person-you-know-made-a-great-point post I’ve ever encountered:
After years of pretending to be Democrats, Big Tech leaders are now pretending to be Republicans, in hopes of currying favor with the new administration. Beware of the scummy monopoly campaign to vilify competition law as they rip off consumers and crush competitors.
The second Washington flips to Democrats, Zuckerberg will be back on the “Zuckerbucks” train once again, standing up for democracy and human rights in name only. In truth, he only has one initiative: to make the most money possible. The Biden administration has made accomplishing that goal very difficult for poor Zuckerberg, and it hasn’t stood up for American companies after the European Union’s lawfare against Big Tech, so the latest changes to Meta’s content moderation are meant to curry favor with violent criminals in the Trump administration — including Trump himself, a violent criminal. So, the changes aren’t about adapting to social acceptability; rather, they conform to MAGA’s most consistent viewpoint: that all gay people are subhuman and women are objects.
3/ Change how we enforce our policies to remove the vast majority of censorship mistakes by focusing our filters on tackling illegal and high-severity violations and requiring higher confidence for our filters to take action.
Word salad, noun: “a confused or unintelligible mixture of seemingly random words and phrases.”
4/ Bring back civic content. We’re getting feedback that people want to see this content again, so we’ll phase it back into Facebook, Instagram and Threads while working to keep the communities friendly and positive.
During campaign season, Adam Mosseri, Instagram’s chief executive and head of Threads, said politics would explicitly never be promoted again on Meta’s platforms because it was inherently divisive. Threads was founded with the goal of de-emphasizing so-called “hard news” in text-based social media, much to the chagrin of its users who, for years at this point, have been begging Meta to flip the switch and stop down-ranking links and news. But now that the election is over and the new administration will begin to highlight its propaganda, Zuckerberg has had a change of heart.
Again, Zuckerberg is an opportunist: If he can position Facebook — and Threads, but to a lesser extent — as another MAGA-friendly news outlet, alongside the likes of Truth Social and X, chances are the new administration will start to give Meta free passes along the way. During Trump’s first term, Twitter was the place to know about what was happening in Washington. Trump’s team never gave information to the “mainstream media,” as it’s known in alt-right circles, instead opting for the Twitter firehose of relatively little editorialization. If Trump tweeted something, Trump tweeted it, and that was it; case closed. Zuckerberg wants to capitalize on Trump’s affinity for text-based social media, and the re-introduction of politics (i.e., “civic content”) aims to appeal to this affinity. If he plays his cards right, Trump might throw Zuckerberg a bone, choosing to give Meta some of his precious content.
5/ Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
Meta has had fact-checkers in Texas for years, but Texas is as Republican as California is Democratic, so I don’t think the “concern” makes even a modicum of sense. Again, this is a capitulation to Trump’s camp, which perceives “woke California liberals” as out of touch with America and biased. In reality, there’s no proof that they’re any more biased than Republicans from Texas. Additionally, unless Meta is outsourcing content moderation to cattle fields in West Texas, cities in the state are as liberal — or even more liberal, as pointed out by John Gruber at Daring Fireball — as California, so this entire plan is moot. For all we know, it probably doesn’t exist at all.
I say that because reporting from Wired on Thursday claims sources in the company say “the number of employees that will have to relocate is limited.” The report also says that Meta has content moderators outside of Texas and California, too, like in Washington and New York, making it clear as day that this is just more bluster from Zuckerberg to appease the hard-core anti-California MAGA crowd.
6/ Work with President Trump to push back against foreign governments going after American companies to censor more. The US has the strongest constitutional protections for free expression in the world and the best way to defend against the trend of government overreach on censorship is with the support of the US government.
He’s not the president yet, but the last part of the final sentence makes Zuckerberg’s intentions throughout the whole thread strikingly obvious: “with the support of the U.S. government.” This entire thread is a love letter to the president-elect, who, in four days, has the power to bankrupt Meta in a matter of weeks. He controls the Federal Communications Commission, the Federal Bureau of Investigation, the Federal Trade Commission, and the Justice Department — he could just take Meta off the internet and call it a day. He could throw Zuckerberg in prison. There aren’t any checks and balances in Trump’s second term, so to do business in Trump’s America, Zuckerberg needs his blessing.
After his word salad thread on Threads, Zuckerberg did what any smooth-brained MAGA grifter would do: join Joe Rogan, the popular podcaster, on his show to discuss the changes. Adorned with a gold necklace and a terrible curly haircut, Zuckerberg bashed diversity, equity, and inclusion programs — which Meta would go on to gut entirely — defended his policy that allows Meta users to call women household objects and bully gay people and gay people only, and lamented that his company had too much “feminine energy.” And he bashed Biden administration officials for “cursing” at Meta employees to remove vaccine misinformation, but that’s the usual for Zuckerberg these days. The Rogan interview — much like Joel Kaplan, Meta’s new policy chief, going on Fox & Friends to advertise the new policy — was a premeditated move to promote the idea that hateful speech is now sanctioned on Meta platforms to the people who would be the most intrigued: misogynistic, manosphere-frequenting Generation Z and Millennial men.
The Rogan interview — which I, a Generation Z man, chose not to watch for my own sanity — is a fascinating look at Zuckerberg’s inner psyche. Here is Elizabeth Lopatto, writing for The Verge:
On the Rogan show, Zuckerberg went further in describing the fact-checking program he’d implemented: “It’s something out of like 1984.” He says the fact-checkers were “too biased,” though he doesn’t say exactly how…
Well, Zuckerberg’s out of the business of reality now. I am sympathetic to the difficulties social media platforms faced in trying to moderate during covid — where rapidly-changing information about the pandemic was difficult to keep up with and conspiracy theories ran amok. I’m just not convinced it happened the way Zuckerberg describes. Zuckerberg whines about being pushed by the Biden administration to fact-check claims: “These people from the Biden administration would call up our team, and, like, scream at them, and curse,” Zuckerberg says.
“Did you record any of these phone calls?” Rogan asks.
“I don’t know,” Zuckerberg says. “I don’t think we were.”
But the biggest lie of all is a lie of omission: Zuckerberg doesn’t mention the relentless pressure conservatives have placed on the company for years — which has now clearly paid off. Zuckerberg is particularly full of shit here because Republican Rep. Jim Jordan released Zuckerberg’s internal communications which document this!
In his letter to Jordan’s committee, Zuckerberg writes, “Ultimately it was our decision whether or not to take content down.” “Like I said to our teams at the time, I feel strongly that we should not compromise our content standards due to pressure from any Administration in either direction – and we’re ready to push back if something like this happens again.”
“Ultimately it was our decision whether or not to take content down.” So, by Zuckerberg’s own admission, it was never the Biden administration that forced Meta to remove content — Meta did so of its own volition after prompting from the administration. This was backed up by the Supreme Court in Murthy v. Missouri, where the justices said last June that the government simply requested offending content be removed. Murthy v. Missouri was tried in front of the Supreme Court by qualified legal professionals, and Zuckerberg, for a sizable portion of the Rogan interview, lied through his teeth about its decision. This has already been decided by the courts! It is not a point of contention that the Biden administration did not force Meta to remove content; doing so would be a violation of Meta’s First Amendment rights.
Back to Zuckerberg’s psyche: This sly admission, like many others in the interview, is a peek into Zuckerberg’s blether. His nonsense thread is a love letter to the Trump administration written just the way Trump would: with no factual merit, long-winded rants about free speech and over-moderation, and no substantive remedies. I always like to say that if someone tells blatantly obvious lies, it’s safe to assume even the less conspicuous claims are also fibs. That, much like it does to Trump, applies perfectly to Zuckerberg — a crude, narcissistic businessman.
As I wrote earlier, Zuckerberg got his great idea after observing how Musk, the owner of X, got into Trump’s inner circle. Musk and Trump are notoriously not friends; Trump a few years ago posted about how he could have gotten Musk to “drop to your knees and beg.” Nevertheless, Musk is one of Trump’s key lieutenants in the transition, giving Zuckerberg hope that he, too, can get out of the “we’ll-throw-him-in-prison” zone. Tim Cook — Apple’s chief executive who donated $1 million to Trump’s inaugural committee and is set to attend the event January 20 — got his way with Trump in a similar fashion, posing with the then-president at a factory in Austin, Texas, where Mac Pro units were being assembled in 2019. (Those old enough to remember “Tim Apple” will recall the business-oriented bromance between Trump and Cook.) Cook is doing it again this year, making it harder for Zuckerberg to fit in amongst his biggest competition. His solution: Get Trump to hate Apple. Here is Chance Miller, reporting for 9to5Mac:
Zuckerberg has long been an outspoken critic of App Store policies and Apple’s privacy protections. In this interview with Rogan, the Meta CEO claimed that the 15-30% fees Apple charges for the App Store are a way for the company to mask slowing iPhone sales. According to Zuckerberg, Apple hasn’t “really invented anything great in a while” and is just “sitting” on the iPhone.
Zuckerberg also took issue with AirPods and the fact that Apple wouldn’t give Meta the same access to the iPhone for its Meta Ray-Ban glasses.
Zuckerberg, however, said he’s “optimistic” that Apple will “get beat by someone” sooner rather than later because “they’ve been off their game in terms of not releasing innovative things.”
Miller’s piece includes a litany of great quotes from the interview, including Zuckerberg’s seemingly never-ending aspersions about Apple Vision Pro and iMessage’s blue bubbles. In response to the article, Zuckerberg posted this gold-mine foaming-at-the-mouth reply on Threads:
The real issue is how they block developers from accessing iPhone functionality they give their own sub-par products. It would be great for people if Ray-Ban Meta glasses could connect to your phone as easily as airpods, but they won’t allow that and it makes the experience worse for everyone. They’ve blocked so many things like this over the years. Eventually it will catch up to them.
Wrong, wrong, wrong. Again, never put it past a liar to lie incessantly at every opportunity. As I wrote in my article about Meta’s interoperability requests under the European Union’s Digital Markets Act, Apple already has a developer tool for this called AccessorySetupKit, with the only catch being that the tool doesn’t allow developers to snoop on users’ connected Bluetooth devices and Wi-Fi networks, which wouldn’t be so great for Meta’s bottom line. So, for offering a tool that doesn’t allow Meta to abuse its monopoly over smart glasses and social networks to harm consumers, Apple gets hit with the “sub-par products” line. Consider that Apple’s biggest software competitor is Google, maker of Android, and even Google never calls Apple’s products sub-par. For a businessman, calling a competitor’s product “sub-par” is just a sign of weakness.
But this weakness isn’t coincidental. Apple is facing one of the biggest antitrust lawsuits in its history, and Trump — along with Pam Bondi, his nominee for attorney general — has the power to halt it instantly the moment he takes office. If Zuckerberg can get on Trump’s good side and paint Apple as a greedy, anti-American corporation in the next few days before the transition, he hopes it can outweigh Cook’s influence on the house of cards just long enough for the case to go to trial.
And besides, Meta hasn’t invented anything other than Facebook itself two decades ago. Its largest platforms — Instagram, WhatsApp, and Meta Quest — were all acquisitions; its new text-based social media app, Threads, is a blatant one-for-one copy of Twitter’s 16-year-old idea; its large language model trails behind ChatGPT; its content moderation ideas are stolen straight from X’s playbook; and its chat apps use Signal’s encryption protocol. Meta is not an innovator and never has been one — every accusation is a confession. But, again, none of this logic is at the heart of Zuckerberg’s case or is really even relevant to analyze the brazen changes coming to Meta’s platforms.
The Rogan interview — along with the major policy changes on Meta platforms announced just about a week before Trump’s inauguration — was a strategic, calculated public relations maneuver from Zuckerberg and his tight-knit team of close advisers. He and his company have a lot to gain — and lose — from a second Trump administration, and so does his competition. But Zuckerberg, along with the wide range of tech leaders from Shou Chew of TikTok to Jensen Huang of Nvidia, understands that the best way to remain at the top for just long enough is to take down the competition and play a little game of “The Apprentice.”
In the end, all of this will be over in about a year, tops. In the Trump orbit, nothing ever lasts for too long. It really is a delicate house of cards, formed with bonds of bigotry and corporate greed. While Zuckerberg may be on Trump’s good side leading up to the inauguration, he might be bested by Musk’s X or Chew’s TikTok, both of which are in desperation mode. Only one can win: If TikTok does, Zuckerberg is out of the tournament; if Zuckerberg wins, Musk makes the embarrassing walk back to the failure that plagued the first X.com, and this country is in for a hell of a ride. Make America sane again.
Solar, Monitors, and Chatbots: The Best of the CES Show Floor
The interesting stuff is hiding between the booths

On Tuesday, doors to the show floor opened at the Consumer Electronics Show in Las Vegas, letting journalists and technology vendors alike explore the innovations of companies small and large. Over Tuesday and Wednesday, I tried to find as many hidden gems as I could, and I have thoughts about them all — everything from solar umbrellas to fancy monitors to new prototype electric vehicles. While Monday, as I wrote earlier, was filled with monotony, I enjoyed learning about the small gadgets scattered throughout the massive Las Vegas Convention Center. While many of them may never go on sale, that is mostly the point of CES — spontaneity, concepts, and intrigue.
Here are some of my favorite gadgets from the show floor over my last two days covering the conference.
Razer’s Project Arielle Gaming Chair

Razer on Tuesday showcased its latest gaming-focused prototype: a temperature-controlled chair. Razer is known for wacky, interesting concepts, such as the modular desk it unveiled a few conferences ago, but its latest is a product I didn’t know I needed in my life. Project Arielle is a standard-issue mesh gaming chair — specifically, Razer’s Fujin Pro — equipped with a heating and cooling fan system placed at the rear, near the spine. The fan pumps either hot or cool air through tubes that travel through the seat cushion and terminate at holes in the cushion, controlling the seat’s temperature.
The concept has multiple fan speeds and, in typical Razer fashion, is adorned with colorful LED lights. The prototype functions similarly to perforated car seats found in luxury vehicles, such as early Tesla Model S and X models, but connects to a wall outlet for power; it does not have a battery, meaning that if the cable is disconnected, the temperature control will no longer function.
I think the idea is quite humorous, but it does have some real-life applications in very warm or cold climates. It’s less a gaming product than a luxurious, over-engineered seating apparatus. Because of how over-engineered it is and how difficult it would likely be to manufacture reliably, chances are it will never see the light of day as a purchasable product. But concepts like these make CES exciting and interesting to cover.
GeForce Now Support Coming to Apple Vision Pro

Nvidia, after its jam-packed keynote on Monday night, announced in a press release that its GeForce Now game streaming platform would begin supporting Apple Vision Pro through Safari. The company said the website would begin working when an update comes “later this month,” but it is unclear how it will function since GeForce Now runs as a progressive web app, which Apple doesn’t support on visionOS. I assume the Apple Vision Pro-specific version of the website omits the PWA step, which would require some form of collaboration with Apple to ensure everything works properly.
As I have written many times before, Nvidia and Apple have a strained relationship dating back to the failed graphics processors in 2008-era MacBook Pros. But it seems like the two companies are getting along better now, since Nvidia heavily features Apple Vision Pro in its keynotes and works with Apple on enterprise features for visionOS. I’m glad to see this progression and hope it continues, as much of the groundbreaking technology best experienced on an Apple Vision Pro is created using Nvidia processors. Still, it’s a shame there isn’t a visionOS-native GeForce Now app to alleviate the pain of web apps. Apple’s new App Store rules permit game streaming services to do business on the App Store, so it isn’t a bureaucratic issue on Apple’s side that prevents a native app.
Technics’ Magnetic Fluid Drivers

Technics, Panasonic’s audio brand, announced on Tuesday a new version of its wireless earbuds with an interesting twist: drivers containing an oil-like magnetic fluid between the diaphragm and the voice coil to improve bass and limit distortion. According to the company, the fluid’s magnetic particles create an “ultra-low binaural frequency,” producing bass without distortion.
This is the kind of nerdery that catches my eye at CES: Most earbuds with small drivers typically have to prioritize volume over fidelity to compensate for the minuscule apparatus that makes the noise. As volume increases, the driver reaches its capacity — the maximum or minimum frequency it can produce — more quickly. The magnetic fluid drivers aim to extend this threshold down to 3 hertz from the typical 20 hertz, thereby producing better bass with low distortion even at high volume levels.
It’s only a matter of time before reviewers evaluate Technics’ claims — the earbuds go on sale this week for $300, $50 less than Apple’s AirPods Pro, the gold standard for truly wireless earbuds. They support Google’s Fast Pair protocol for auto-switching and easy pairing, à la AirPods, have voice boost features like Voice Focus AI to improve call quality, and customize active noise-cancellation for each ear. But these features are standard for flagship earbuds — it’s the driver fluid that makes them compelling.
Movano’s Health-Focused AI Chatbot

Movano, the little-known smart ring maker, announced on Tuesday a new artificial intelligence chatbot trained specifically on medical journals to provide correct, appropriate answers to medical questions. Movano claims the chatbot, EvieAI, is trained only on 100,000 peer-reviewed journal articles written by medical professionals and cross-checks information with accredited medical institutions like Mayo Clinic before producing a response. The company says the chatbot answers medical queries with an astonishing 99 percent accuracy, but it did not give a demonstration to members of the press.
My first instinct upon reading Movano’s press release was that WebMD, the easy-to-understand medical answers website, has finally met its first real AI competition. I still believe that to be the case, but chances are many people are more likely to trust a website with a byline over an AI-generated answer. And all it takes is one flub for EvieAI to be entirely wiped off the market and for Movano to never be trusted with AI again because the stakes are so high in medicine. I can see the tool being helpful for summaries and those “Click for Help!” chat pop-ups on some medical websites, but I still don’t think it should be trusted.
I do think AI chatbots will eventually advance to the point of reliability, but the lack of trustworthiness isn’t due to a shortage of reliable information on the internet — it’s because chatbots don’t know what they’re saying. This is an inherent limitation of large language models, and the only way to solve it is by building a helper bot that fact-checks the main language model. Even ChatGPT isn’t that sophisticated yet, so I doubt EvieAI is. Fine-tuning the scope of the training data does give the chatbot less information to make mistakes with, but ultimately, all the model knows how to do is break words down into tokens, do some pattern matching, and convert the tokens back into prose. Narrowing the pool of possible tokens reduces the likelihood that bad ones are generated, but the model is still a black box.
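That token pipeline can be sketched in a few lines. This is a deliberately toy illustration — the vocabulary and the word-level splitting are invented for the example; real models use learned subword tokenizers, with billions of parameters of pattern matching sitting between the encode and decode steps:

```python
# Toy illustration (not a real LLM): a language model only ever sees
# integer token ids, never meaning. Vocabulary is made up for the example.
vocab = {"the": 0, "heart": 1, "rate": 2, "is": 3, "elevated": 4}
inverse = {i: w for w, i in vocab.items()}

def encode(text: str) -> list[int]:
    """Break words down into integer tokens -- all the model ever sees."""
    return [vocab[word] for word in text.split()]

def decode(tokens: list[int]) -> str:
    """Convert tokens back into prose."""
    return " ".join(inverse[t] for t in tokens)

tokens = encode("the heart rate is elevated")
print(tokens)          # [0, 1, 2, 3, 4]
print(decode(tokens))  # the heart rate is elevated
```

Everything interesting — and everything unreliable — happens between those two functions, where the model predicts statistically likely next tokens with no built-in notion of truth.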
Honda Zero

Honda on Tuesday announced two more concept vehicles to join its Honda Zero lineup of fully electric autonomous cars, first unveiled last year at CES. The two models follow in the footsteps of last year’s concepts, except Honda is more bullish on selling them, with the company stating it will begin production of the two vehicles “by 2026.” (It did not offer a concrete release timeline.)
Honda’s two new models, the Honda 0 SUV and Honda 0 Saloon, feature an unusual, Cybertruck-esque design with boxy edges, flush door handles, and no side mirrors. The Honda 0 Saloon is almost reminiscent of a Lamborghini Aventador, with a sloping hood, but appears like it’s straight from the future. Neither vehicle looks street legal, and no other specifications were provided about them or their predecessors from last year.
Honda, however, did provide some details about the cars’ operating system, which it calls Asimo OS, named after the company’s 2000s-era humanoid robot. Honda was vague about details but said Asimo will allow for personalization, Level 3 automated driving, and an AI assistant that learns each driver’s habits. Honda plans to achieve Level 3 autonomy — which allows a driver to take their hands and feet off the wheel and pedals — by partnering with Helm AI as well as investing more in its own AI development to teach the system how to drive in a wide variety of conditions. The company said Level 3 driving would come to all Honda Zero models at an “affordable cost.”
I never trust this vague vaporware at CES because more often than not, it never ships. Neither of the vehicles — not the ones announced a year ago, nor the ones from this year — looks ready for a drive, and Honda gave no details on what it would do next to develop the line further. As I wrote on Monday, CES is an elaborate creative writing exercise for the world’s tech marketing executives, and Honda Zero is a shining example of that ethos.
BMW’s New AR-Focused iDrive

BMW, known for its luxury “ultimate driving machines,” announced an all-new version of its iDrive infotainment system centered around an augmented reality-powered heads-up display. Eliminating the typical instrument cluster, the company opted to project important driving information on the windshield itself, communicating directions and controls via an AR projection on the road. The typical infotainment screens still remain below the windshield, accessible for all passengers, but driver-specific information is now overlaid atop the road to limit distractions.
The new system is scheduled to appear later this year in a sport utility vehicle built on BMW’s Neue Klasse architecture, which the company first announced at CES 2023. But the choice to digitize previously analog controls in a vehicle beloved by many for being tactile and sporty is certainly a bold design move — and I’m not sure I like it. The dashboard now looks too empty for my liking, missing the buttons and dials expected on a high-end vehicle. Truthfully, it looks like a Tesla, built with less luxurious materials and with no design taste. As Luke Miani, an Apple YouTuber, put it on the social media website X, “Screens kill luxury.”
I also think that while the AR directions are handy, the overall experience is more irritating and distracting than typical gauges. The speedometer should always be slightly below the windshield so that it is viewable in the periphery without occupying too much space in a driver’s field of view. The new system looks claustrophobic, almost like it has too much going on in too little space. I’ll be interested to see how it looks in a real vehicle later in the year, but for now, count me out.
Delta’s New Inflight Entertainment Screens

Delta Air Lines, at a flashy 100th-anniversary keynote at the Las Vegas Sphere on Tuesday evening, announced updates to its seat-back entertainment and personalization features. The company said it would begin retrofitting existing planes with new 4K high-dynamic-range displays and a new operating system, bringing a “cloud-based in-flight entertainment system” to fliers.
Delta also announced a partnership with YouTube, bringing ad-free viewing to all SkyMiles members aboard. The company announced no other details, but it’s expected that the in-flight system will include the YouTube app in retrofitted planes. The new system also supports Bluetooth, has an “advanced recommendation engine,” and lets users enable a Do Not Disturb mode that signals to flight attendants that they don’t want to be interrupted.
Delta said the new planes would begin arriving later this year but had no word on updates to Wi-Fi, including Starlink, which its competitor United Airlines announced late last year would be coming to its entire fleet within a few years. I still believe Starlink internet is more important than any updates to seat-back entertainment screens, as most people usually opt to view their own content on personal devices.
Anker’s Solar Umbrella

Anker announced and showcased on the show floor this week a beach umbrella made of solar panels. The umbrella, called the Solix Solar Beach Umbrella, uses a new type of perovskite solar cell that is up to twice as efficient as the standard silicon-based cells found in most modern solar panels, according to Anker. Perovskite cells can be optimized to absorb more blue light, which explains how Anker is achieving unprecedented efficiency.
The Solix Solar Beach Umbrella connects to the company’s EverFrost 2 Electric Cooler, which also comes equipped with outlets to charge other devices using the solar power generated by the umbrella. The umbrella charges the cooler’s two 288-watt-hour batteries at 100 watts, which can then power devices at up to 60 watts through the USB-C ports. Anker plans to ship the cooler in February and the umbrella in the summer, with the former starting at $700 and the latter’s price yet to be determined.
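Some quick back-of-the-envelope math on Anker’s stated figures — assuming, generously, that the umbrella sustains its full 100-watt output, which real-world sun angles and cloud cover rarely allow:

```python
# Back-of-the-envelope math from Anker's stated specs. Assumes a sustained
# 100 W from the umbrella, which is a best-case simplification.
battery_wh = 288 * 2   # two 288 Wh batteries in the EverFrost 2 cooler
charge_w = 100         # umbrella-to-cooler charging rate
output_w = 60          # maximum USB-C output to connected devices

hours_to_full = battery_wh / charge_w
print(f"{hours_to_full:.1f} h to charge both batteries from empty")  # 5.8 h

hours_of_runtime = battery_wh / output_w
print(f"{hours_of_runtime:.1f} h powering a 60 W device")  # 9.6 h
```

In other words, a full beach day of sun roughly fills the cooler’s batteries, which is what makes the claimed perovskite efficiency matter.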
I’ve never seen a perovskite solar panel before, so the umbrella caught my eye for its efficiency. Typically, solar-powered outdoor gear isn’t worthwhile because it doesn’t generate as much power as connected devices use — it’s better suited to long-term installations, like a home, where the batteries can charge during the day while nobody is drawing power. But the perovskite cells change the equation and make Anker’s product much more compelling for long beach days or even camping trips, since the umbrella can act as practically a miniature solar farm to charge the company’s batteries, even in low-light conditions.
LG UltraFine 6K

LG announced over the weekend and showcased on the show floor a 6K-resolution, 32-inch monitor to compete with Apple’s Pro Display XDR. The product should have tight integration with macOS, similar to LG’s other UltraFine displays, which are even sold at Apple Stores alongside the Studio Display. Due to its resolution, the monitor has a perfect Retina pixel density, just like Apple’s first-party options, making it an appealing display for Mac designers and programmers.
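The “perfect Retina” claim is easy to sanity-check. Assuming the panel matches the Pro Display XDR’s 6016×3384 resolution — an assumption on my part, since LG hasn’t published a full spec sheet — the math lands right at the roughly 220-pixels-per-inch density macOS targets for 2x scaling:

```python
import math

# Rough pixel-density check. The 6016x3384 resolution is borrowed from
# Apple's Pro Display XDR as an assumption; LG's actual figure may differ.
h_px, v_px, diagonal_in = 6016, 3384, 32

diagonal_px = math.hypot(h_px, v_px)  # pixels along the diagonal
ppi = diagonal_px / diagonal_in
print(f"{ppi:.0f} ppi")  # ~216 ppi, in Retina 2x territory
```

At that density, macOS can render interface elements at exactly double resolution with no fractional scaling blur — the same reason the 5K 27-inch Studio Display works so well.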
The display is an LCD, however, and is the first to use Thunderbolt 5, which Apple’s latest M4-series MacBook Pros support. I assume the LCD panel — which is bound to be color-accurate, like LG’s other displays — will drastically lower the cost, putting it around $2,500, similar to Dell’s uglier but similarly specced offering. LG offered no other specifications, including a release date.
I assume this monitor will be a hit, since it would be only the third 6K, 32-inch monitor on the market — perfect for Mac customers who want perfect Retina scaling. The Pro Display XDR isn’t expected to be refreshed anytime soon, and some people want a larger-than-27-inch option, leaving Dell’s as the only alternative, which is less than optimal due to its design and lack of macOS integration. LG’s UltraFine displays, by comparison, turn on the moment a Mac laptop is connected or a key is pressed, just like an Apple-made display. LG’s latest monitor also looks eerily similar to the Pro Display XDR, leading me to believe it’s intended for the Mac. This is one of the most personally exciting announcements of CES this year.
Sony Honda Mobility’s Afeela

Sony first announced the Afeela electric vehicle in collaboration with Honda at CES 2023 but offered no details on pricing, availability, or specifications for two years while teasing the car’s supposed self-driving functionality and infotainment system. Now, that has changed: the venture announced final pricing for two trims as well as availability for the first units.
On Tuesday, Sony made the Afeela 1 available for reservation. The regular trim is $90,000, and the premium one is $103,000, with three years of self-driving functionality included in the price. (How generous.) Reservations are $200 and fully refundable, but interestingly, they are limited to residents of California, which Sony says is because of the state’s “robust” EV market. The rest of the contiguous United States also has a robust EV market, and the vehicles are assembled in Ohio, which leads me to believe the limit exists because Sony can’t produce enough vehicles for the whole country.
But I think that’s the least of the company’s problems. The $103,000 version is the first to ship, with availability scheduled for sometime in 2026; the more affordable $90,000 trim is scheduled for 2027. This realistically means early adopters will opt for the more expensive trim, which is genuinely expensive. $100,000 can buy some amazing cars already on the market, and the Afeela has nothing to offer for the price. It is rated for only 300 miles of range, and the company provided no horsepower or acceleration numbers. It’s also unclear if the car has a Tesla charging port for use with the Supercharger network or if it’s stuck with a traditional combined charging system — commonly known as CCS — connector.
Sony provided no timeline for when the vehicle would come to the rest of the United States, which leads me to believe that the entire venture is a pump-and-dump scheme of sorts: sell under 100 vehicles only in California in 2026, cancel the 2027 version, and shut down the project by the end of the decade. That way, Sony and Honda both lose nothing, and nobody buys a car that doesn’t work. The entire deal seems incredibly unscrupulous to me, especially given that the company is opening two “delivery hubs” in Fremont and Torrance, California, where interested customers will be able to take test drives. The whole thing seems like a proof of concept rather than a full-fledged vehicle.
If I were a betting man, I would say that the Afeela will never become a true competitor in the EV market — ever.
The CES show floor was certainly more exciting than the press conferences from Monday, but there’s still a lot to be uncovered. That’s not a bad thing, or even unexpected, but it’s something to be cautious of when following the news out of CES closely. I stand by my opinion that this year’s show is one of the most boring in recent years, but that doesn’t mean everything was bad.
AI, GPUs, and TVs: A Diary From CES 2025 Day 1
Maybe CES has hit rock bottom, after all

On the first day of the Consumer Electronics Show in Las Vegas, I completed my usual routine: I tuned into the big-name press conferences, took notes, caught up on social media reactions, and repeated until the news ran out and the sun set over the valley. CES hasn’t been about consumer technology as much as it has been about vibes, thoughts, and marketing for a while, but that is the inherent appeal of the show as it stands. In a fragmented, messy media environment, it is hard to get a sense of what the people who make the technology think will stick.
People often correlate marketing with greed: that companies only market products that are best for them, not us. That is a true but incomplete assertion, because it assumes marketing executives are of low intelligence. Spending an enormous amount to advertise congestive heart failure doesn’t make it any more appealing, because people generally do not want their hearts to fail. That might be a humorous and irrelevant example, but neither marketing executives nor consumers are stupid. The colloquial expression “You can’t polish a turd” expresses this succinctly. If something is being marketed heavily, it is almost certain that it is viewed positively amongst the target audience.
CES isn’t about heart failure or marketing strategies; it is about generative artificial intelligence for the second year in a row. The AI boom hasn’t died, and I don’t think it ever will, because it’s popular amongst the marketing crowd. It is OK to quibble about the popularity of generative AI — in fact, it’s healthy. But you can’t polish a turd, and money doesn’t grow on trees: if generative AI had never stuck, technology’s biggest week wouldn’t be enveloped by it the way it has been. “Big Tech” firms know better than to waste a free week in front of the media.
As I began looking over my notes, I tried to find a theme I could build a lede around. But it quickly struck me that if a $3,000 supercomputer from a processor company was the most intriguing product I saw at the world’s largest technology trade show, perhaps CES has lost its fastball. These days, CES is emotionally cumbersome to cover because of just how much it has dwindled in recent history. There is an adage in the tech journalism sphere that nothing at CES is real and that it’s all a marketing mirage for the media. But now, the problem is that CES is almost too genuine: a trade show once known for surprise and delight has turned into a sea of monotony.
Last year, at CES 2024, generative AI was relatively new, and that made it genuinely exciting. It’s fair to temper the rosiness with a brief reminder that the number of times “AI” was uttered during each keynote was nauseating, but it isn’t like this year was any different. Silicon isn’t exciting anymore, and all the industry decided to offer for 2025 was silicon. Intel, Advanced Micro Devices, Nvidia — they’re all the same, ultimately. I bet any “analyst” reading that last sentence is now suffering from an aneurysm because it’s a gross oversimplification of the entire silicon industry, but it’s true. Silicon suffers from the same stagnation smartphones did four years ago. New neural processing cores and ray tracing have never been the bread and butter of CES.
Similarly, every smart home product felt like beating a dead horse. Matter promised to be a smart home standard that made most accessories platform-agnostic, meaning they could be used with Google Home, Apple’s HomeKit, and Amazon’s Alexa all at once. (It’s not to be confused with Thread, which is a mesh networking connectivity protocol, not a standard.) But with the influx of Matter products in recent history, it isn’t the lack of adoption that bothers me, but reliability. The platform agnosticism was only rolled out about a year ago and still is unreliable, with Jennifer Pattison Tuohy, a smart home reporter for The Verge, calling it “completely broken” in late 2023. Since then, Matter has improved, but variably.
And CES, for better or worse, reliably seems to have more televisions than any other trade show by far. But this year, the main attraction wasn’t new display panels or considerable improvements to picture quality, but Microsoft Copilot in LG and Samsung TVs. Again, it’s hard not to believe the industry is headed in the wrong direction. CES in its prime existed to showcase the gadgets nobody would ever buy — think rollable phones and see-through televisions. But the economics of maximizing profit per dollar spent on constructing fancy exhibitions seem to have watered down the spontaneity that once brought reporters to CES. Marketing executives aren’t stupid, but as the day went by, I kept wishing they were.
Still, I worked through the pain and my misgivings about the show to compile a list of some of my favorite finds from the first day of what I feel will become a grueling three days of press conferences going over incremental product updates. The resulting chronicle is one of incremental updates, somewhat surprising numbers, and a story of marketing and consumerism hiding between the lines.
Intel

Anyone with even a slight modicum of knowledge about the current state of the silicon industry knows Intel is in hot water. It spun off its foundry business due to dwindling profits, abruptly fired its technically minded chief executive over those dwindling profits, and has been consistently behind in every market for years. Its chief competitor, Advanced Micro Devices, is running laps around it in nearly every important benchmark; Nvidia makes its graphics processing units look like toys; and it lost its most important business partner, Apple, four years ago. Intel, by any objective measurement, is doing awfully, both morale-wise and economically. After its CES 2025 announcements — and the subsequent ones from AMD and Nvidia — its stock price fell to its lowest since the firing of Pat Gelsinger, its prior chief executive.
Yet, the company is still making moves, though perhaps in the wrong direction. On Monday, it announced a line of processors called Arrow Lake, meant to be the successor to its Raptor Lake series, announced at CES last year. The Arrow Lake processors Intel announced Monday are meant for gaming laptops from the likes of Asus, not Copilot+ productivity-oriented PCs. (Lunar Lake, Intel’s bespoke AI chip, will still be used in the latter category for the foreseeable future.)
Intel claims Arrow Lake’s gaming variants offer 5 percent better single-threaded performance and 20 percent better multithreaded performance than its Raptor Lake processors from last CES, and Arrow Lake models will ship with Nvidia’s 50-series graphics cards, adding to the performance increases. Other, non-gaming-focused laptops will use the H-model processors, whose single-threaded performance Intel claims will be up to 15 percent better. Other variants, like the U-series for ultra-low power consumption, were also announced.
The 200HX series, used in gaming laptops, won’t ship in products until late in the first quarter of the year, the company says, while the 200H and 200U chips have already begun production and will be in laptops in just a few weeks.
I say Intel’s announcements are heading in the wrong direction because they don’t follow the pattern of every other hardware maker at CES. If anything, Intel should’ve one-upped its competition by announcing a successor to Lunar Lake, its AI chip line, to compete with AMD and Nvidia, which stuffed their announcements chock-full of AI hype just hours after Intel’s keynote address. That isn’t to say Intel’s presentation was entirely full of duds; the company also announced that Panther Lake, its series of 1.8-nanometer processors built on its 18A process, will ship in the second half of 2025. But when Intel is reassuring analysts it’s not leaving the discrete GPU market and advertising a 4 percent year-over-year increase in the PC market, it’s hard to have any confidence in the company. Intel is directionless, and that became even more apparent at CES.
AMD and Dell

AMD’s keynote, similar to Intel’s, was off. For one, it didn’t bring out Dr. Lisa Su, its charismatic chief executive, to deliver the address. And it didn’t announce Radeon DNA 4, its next-generation GPU platform that powers the Radeon RX 9070, its latest GPU, onstage either, leaving it for a press release. Detractors online believe this is due to Nvidia’s announcements, while others think the lack of interesting announcements was due to Dr. Su’s absence. Instead, the CES presentation focused on its latest flagship processor, mobile chips, and new partnership with Dell.
The company announced the 9950X3D, its highest-end processor, with 16 cores on Zen 5, its latest architecture. AMD claims it’s “the world’s best processor for gamers and creators,” with an 8 percent performance boost in games over the last-generation 7950X3D and a 15 percent increase in content creation tasks, such as video editing. But perhaps the most ambitious claim is that the processor is 10 percent faster than Intel’s latest, the Core Ultra 285K. These claims are yet to be tested, as the processor — along with its lower-end counterpart, the 12-core 9900X3D — will be available in March, but they seem respectable at first glance.
AMD spent most of its time, however, announcing its new lineup of mobile processors, called the Ryzen AI Max series. Both the Ryzen AI Max and AI Max Plus have AMD’s most powerful graphics, with up to 16 CPU cores — just like the 9950X3D, but in mobile form — 40 RDNA 3.5 compute units, and 256 gigabytes per second of memory bandwidth. AMD says the AI Max Plus beats Apple’s mid-range M4 Pro processor, announced late last year, though probably with worse heat management and power consumption. Both Ryzen AI Max chips consume up to 120 watts of power at their peak, but AMD isn’t giving any details on thermal performance, as it most likely varies drastically between laptop models. The processors are Copilot+ PC-compliant and begin shipping in the first quarter of 2025, with the first computers coming from Asus and a new HP Copilot+ mini PC, similar to Apple’s Mac mini.
Perhaps AMD’s strangest announcement at its press conference was its new partnership with Dell, a company that has historically shipped Intel and Nvidia processors in its ever-popular laptops. To accompany the news, Dell announced it would overhaul its naming structure, ditching the XPS, Latitude, and Inspiron names for three new variants: Dell, Dell Pro, and Dell Pro Max. The names are a one-to-one rip-off of Apple’s iPhone naming scheme, but it didn’t stop there — in addition to the three variants, each one has three specification tiers: Base, Premium, and Plus. This results in some extraordinary product names, like Dell Pro Max Plus, Dell Premium, and Dell Pro Base.

The internet has been ablaze with comedy for the past day, but seriously, these names are atrocious. Not only could Dell’s product marketing team not ideate a new branding strategy, but it chose to copy Apple’s worst naming scheme and then make it worse. Proponents of the new names say they make more sense than “Dell XPS,” where XPS originally stood for “Extreme Performance System,” but the new names just don’t logically connect. Dell Pro Base is a better product than Dell Premium, for instance. It’s a completely unintuitive, embarrassing system, destroying decades of brand familiarity with one misstep. Truth be told, it embodies the fundamental problem with CES.
Qualcomm

Qualcomm, Intel’s biggest foe, launched a new Copilot+-capable processor meant to power cheaper so-called “AI PCs” below $600. The processor, called Snapdragon X, has eight cores and a neural processing unit that performs 45 trillion operations per second, or TOPS. The processor joins the rest of Snapdragon’s Arm-based computer processor lineup; it’s now composed of the Snapdragon X, Snapdragon X Plus, and Snapdragon X Elite. The company says the processor will begin shipping in various devices from HP, Lenovo, Acer, Asus, and Dell in the first half of 2025.
The Snapdragon X will make Copilot+ PCs the cheapest they’ve ever been, though Windows on Arm is still shaky, with many popular apps broken entirely or running in compatibility mode. Still, the chip will shake up the budget laptop business, putting Intel and AMD on their toes to develop cheaper Copilot+-capable processors. Currently, the only such chips based on the x86 instruction set — the one used by Intel and AMD — are flagship-class and cost-prohibitive, which isn’t ideal for schools or corporate buyers.
The processor is built on Taiwan Semiconductor Manufacturing Company’s 4-nm process node, bringing “two times longer battery life than the competition,” according to Qualcomm. I haven’t seen any laptops at CES with the Snapdragon X chip yet, but I assume they’re coming in the next few months.
Samsung

Samsung on Monday re-announced much of what it said last year at CES: AI, AI, AI. The company is bullish on AI in the smart home, emphasizing local AI processing and connectivity between various Samsung products, including SmartThings — its smart home platform — and Galaxy devices. The story is much the same as last year, but the difference lies in semantics: While last year’s craze was about the technology itself and generative experiences, Samsung this time seems more focused on customer satisfaction, much like Apple. Whether that vision will pan out into reality is to be determined, but it sounds appropriate for the current climate of AI skepticism.
Samsung calls the initiative “Home AI” — because, of course, everything deserves a brand name — and it painted a half-futuristic, half-dystopian vision of the smart home. For one, Samsung didn’t mention Matter in the AI portion of its presentation. It did eventually, in a separate, more smart home-oriented section of the keynote, but the omission seems to allude to the fact that Matter is flaky and unprepared for generative AI. Many of the things Samsung wants to do require a deep tie-in between hardware and software. For example, one presenter gave a scenario where a Galaxy Watch sensed a person couldn’t fall asleep and automatically set the thermostat to a lower temperature. That’s more than just the smart home: It’s a services tie-in. Dystopian, yet also eerily futuristic.
Samsung also emphasized personalization in its vibes-heavy and announcement-scant conference but put the ideas in terms of AI because CES is a creative writing exercise for the world’s tech marketing professionals. (See the beginning of this article.) Voice recognition and user interface personalization stood out as key objectives of the Home AI initiative — a presenter showcased an instance where a user, with high-contrast mode enabled on their smartphone, spoke to their dryer, which recognized their voice and automatically activated its own high-contrast accessibility settings. Whether that fits the new-age definition of “AI” is debatable, but it’s a perfect example of the Home AI initiative.
In a similar vein, Samsung finally announced a release date for its Ballie AI robot, which for years has promised a personalized AI future in the form of an adorable spherical floor robot with a built-in projector and speakers. Ballie was first demonstrated five years ago at CES 2020, but Samsung updated it at 2023’s show before even releasing the first generation. Now, Ballie is powered by generative AI — because of course it is — but retains much of the same feature set. Think of it as a friendlier, smaller version of Amazon’s Astro, a 2021-era robot that ran Alexa and cost an eye-watering $1,600. Ballie, like Astro, has a camera for home security but runs on SmartThings, allowing users to toggle other parts of their smart home via the robot. Ballie is shipping in the first half of the year, according to Samsung, but the company provided no concrete release date, price, or specifications.
Samsung also announced the successor to the company’s popular The Frame television: The Frame Pro. The Frame, for years, has been regarded as one of the most aesthetically pleasing televisions — not for its picture quality, but for how it looks when turned off. The Frame can cycle through art and images and comes in a variety of finishes to complement a space, almost as if it’s an art installation rather than a TV. But The Frame has been plagued by software issues, has mediocre image fidelity — it only has a quantum dot LED panel, whereas most other TVs in its price range have organic LED displays — and doesn’t get as bright as other LED TVs Samsung sells because of its anti-reflective coating, which helps display art more naturally.

The Frame Pro, by contrast, aims to address some of these issues. It now features a nerfed mini-LED display, which provides a boost in contrast and brightness because it splits the display panel into multiple local dimming zones. This way, one part of the television can be lit while the other parts stay completely dark. The catch is that The Frame Pro’s display isn’t a true mini-LED panel, where the zones are spread throughout the display. (Every MacBook Pro post-2021 has a mini-LED display; to test it, go to a dark room, open a dark background with a white dot in the center of the screen, and observe the visible blooming behind that dot. That’s mini-LED’s dimming zones in action.) Instead, The Frame Pro’s dimming zones sit at the bottom of the screen, controlling brightness in vertical strips rather than in a grid pattern across the display.
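To make the zone layouts concrete, here is a toy Python sketch (my own illustration, not Samsung’s actual algorithm) of how local dimming might set backlight levels: each zone follows the brightest pixel it covers, so coarse vertical strips end up lighting far more of the screen than a full grid does for the same single bright dot.

```python
def zone_backlight(frame, zone_rows, zone_cols):
    """Split `frame` (a 2D list of luminance values in [0, 1]) into a
    zone_rows x zone_cols grid; each zone's backlight level follows the
    brightest pixel it covers."""
    h, w = len(frame), len(frame[0])
    rh, cw = h // zone_rows, w // zone_cols
    return [[max(frame[r][c]
                 for r in range(zr * rh, (zr + 1) * rh)
                 for c in range(zc * cw, (zc + 1) * cw))
             for zc in range(zone_cols)]
            for zr in range(zone_rows)]

# A dark frame with a single bright dot, like the white-dot test above.
frame = [[0.0] * 8 for _ in range(8)]
frame[4][4] = 1.0

grid = zone_backlight(frame, 4, 4)    # true mini-LED: a 4x4 grid of zones
strips = zone_backlight(frame, 1, 4)  # bottom-edge style: 4 vertical strips

def lit_fraction(zones):
    return sum(v > 0 for row in zones for v in row) / (len(zones) * len(zones[0]))

print(lit_fraction(grid))    # 0.0625 -> only 1 of 16 zones lights up
print(lit_fraction(strips))  # 0.25   -> a full quarter-width strip lights up
```

The grid keeps the glow confined to a small square around the dot; the strip layout has to light a quarter of the screen to show the same pixel, which is why blooming is worse the coarser the zones get.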
I am sure this will provide some tangible difference, knowing how bad the original Frame’s picture quality is compared to other high-end televisions, but I don’t think it will fully alleviate the pain of the matte display, which causes considerable color distortion and results in a washed-out picture. The Frame Pro also has a 144-hertz refresh rate, but because of Samsung’s abominable refusal to support Dolby Vision, it only has HDR10+, Samsung’s homegrown high dynamic range standard. Modern set-top boxes like the Apple TV support it, but content is scarce and not nearly as well-mastered as Dolby Vision. Really, The Frame Pro is still a compromise, and without a price, I’m unsure if the new features make it a better value over the equally compromised Frame.
Samsung’s announcements, while repetitive, were a breath of fresh air after a packed morning full of processor updates. But none of its new products — unlike some other CES presenters’ — have release dates, prices, or even concrete feature sets. The entire address was one large, lofty, vibes-based presentation. I guess that fits the CES theme.
LG and Microsoft Copilot

LG began its announcements on Sunday, launching its 2025 television lineup infused with AI. But contrary to tradition, the AI wasn’t image-focused: There were modest improvements to AI Picture Pro and AI Sound Pro, but the news mostly centered on Microsoft Copilot coming to webOS, with LG going as far as to reprogram the remote’s microphone button to launch the AI assistant. A chatbot is built into the operating system, too, and the remote is now dubbed the “AI Remote.” (It’s worth noting that Samsung is adding Copilot to its TVs as well, though much less conspicuously.)
LG hasn’t detailed the Copilot integration yet — it didn’t even include a screenshot in its press release. All the company has said is that the functionality is coming to the latest version of webOS with the new line of TVs, with no release date. It’s unclear what Microsoft’s OpenAI-powered chatbot would do; LG says its own bot will take the lead for most queries, with Copilot used to look up additional information. I’m skeptical about what “information” refers to, but that’s par for the course at CES.
It all circles back around to my lede, nearly 3,500 words ago: CES is an elaborate marketing exercise; sometimes it delivers hits, other times duds. But there’s clearly some kind of pent-up demand for such a product — so much so that both Samsung and LG partnered with Microsoft, which has never created anything remotely close to television software in its corporate history, to integrate an AI chatbot within webOS and Tizen. It really is unclear what that pent-up demand entails, but what makes this year’s CES so odd is that the companies presenting don’t seem eager to showcase their latest technology freely. Intel, AMD, and Samsung have all disappointed with their announcements this year.
Either way, color me hesitant to welcome Copilot on my TV anytime soon.
TCL

TCL kept its announcements to a minimum at CES this year, launching a new Android phone called the TCL 60 XE that can switch between a full-color and e-ink-like display with just the flick of a switch at the back of the device. The feature is called Max Ink Mode, and it uses TCL’s Nxtpaper display technology to toggle between the two modes. Nxtpaper isn’t an e-ink display, but it mimics the functionality of e-ink through a standard LCD. The LCD has a reflective layer that eliminates backlight glare and diffuses light, thereby faking the matte, dull e-ink look without rearranging pigment particles using electricity. Because Nxtpaper is just a special LCD, it still operates like a normal screen until the switch is flipped, which changes the appearance of Android.
The TCL 60 XE, otherwise, is a typical Android budget phone, with a 50-megapixel rear camera, 6.8-inch display, and “all-day battery life.” No other specifications were given, but the product is promised to begin shipping in Canada by May and in the United States later this year. (It is exclusive to North America.)
TCL also announced a new projector, called the Playcube, an adorable cube-shaped modular device. No other details were provided, probably because it is just a concept. But the Nxtpaper 11 Plus, the company’s next-generation tablet, did get more specifications: It features an 11.5-inch display built on Nxtpaper 4.0 with a 120-hertz refresh rate. Nxtpaper 4.0, according to TCL, uses improved diffusion layers to offer better sharpness and brightness. However, TCL’s press release included no pricing or release date.
TCL is always a vendor I enjoy hearing from at CES, mostly because it doesn’t have the bandwidth to put on its own extravagant events. While typical for the company, Max Ink Mode really was intriguing to see. TCL, however, didn’t introduce its full TV line at CES this year, which is atypical for a company that always seems to offer the largest screens at some of the lowest prices. It did preview one mini-LED model, but provided no other specifications or pricing.
Matter and the Smart Home
CES typically brings a plethora of smart home devices, and in recent years, it has become a breeding ground for Matter and Thread appliances. But as I said earlier, Matter continues to be an unreliable standard for the most important smart home accessories, with frequent bugs and connectivity issues plaguing the experience. Still, this CES has been heavy on hardware and less focused on the Matter protocol itself, unlike the last few years. Here are some of the gadgets and announcements I found most intriguing.
Ecobee launched a cheaper smart thermostat to join its lineup of what I think are the best HomeKit-compatible thermostats, alongside the Matter-enabled second-generation Nest Learning Thermostat. The new model, which costs $130, has all the smart features of the premium models but lacks a few bells and whistles, such as the air quality sensor. It can be paired with Ecobee’s SmartSensors, sold separately, but doesn’t support Matter, something Ecobee promised back in 2023. (It still supports Google Home, Amazon Alexa, HomeKit, and Samsung SmartThings, so take Matter’s omission with a grain of salt.) I think it’s the best smart thermostat for beginners just getting acquainted with a smart home.
HDMI 2.2 brings 4K resolution at 480 hertz with 96 gigabits per second of bandwidth. The new specification, developed by the HDMI Forum (its certified cables carry the “Ultra96” label), also includes a latency indication protocol that allows connected devices to communicate with each other and compensate for lag. The HDMI Forum intends it mainly for audio receivers and says it performs better than HDMI-CEC, which enables the same cross-device communication in the current HDMI 2.1 specification. HDMI 2.2 cables will begin shipping later this year.
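The 96-gigabit figure checks out against the raw video math. Here is a quick back-of-the-envelope calculation (my arithmetic, not the HDMI Forum’s) that ignores blanking intervals, link encoding overhead, and compression:

```python
# Uncompressed 4K at 480 Hz with 8-bit-per-channel RGB (24 bits per pixel).
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 480, 24
gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(round(gbps, 1))  # 95.6 -> right at the 96 Gbps ceiling
```

In practice, blanking and encoding overhead push the real requirement higher, which is presumably why higher refresh rates at 4K lean on compression.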
Schlage, the renowned door lock maker, announced a new ultra-wideband-powered smart lock with a twist. While some smart locks use Bluetooth Low Energy and near-field communication to communicate — such as Schlage’s own Encode Plus lock, which works with Apple’s home key — Schlage’s latest, the Sense Pro, uses the ultra-wideband chip in certain smartphones to detect when a user is nearing their door lock and automatically unlock it for them. This is possible due to ultra-wideband precision; the technology is used in Apple’s Precision Finding feature, proving its reliability. I don’t think pulling out my phone and holding it against my door is very cumbersome, but this could potentially be useful when my hands are full. The company says the Sense Pro will be available in the spring.
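As a hypothetical sketch of the logic (my invention; Schlage hasn’t published how the Sense Pro decides), a UWB lock would want to act only when the phone is both close and approaching, so that someone standing just inside the door doesn’t trigger an unlock:

```python
def should_unlock(distances_m, threshold_m=1.0):
    """distances_m: recent UWB range readings in meters, oldest first.
    Unlock only if the phone is approaching and within the threshold."""
    if len(distances_m) < 2:
        return False
    approaching = distances_m[-1] < distances_m[0]
    return approaching and distances_m[-1] <= threshold_m

print(should_unlock([3.2, 2.1, 0.8]))  # True: walking up to the door
print(should_unlock([0.8, 0.9, 1.4]))  # False: walking away from it
```

Bluetooth ranging via signal strength is far too noisy for this kind of rule; UWB’s centimeter-level distance estimates are what make a simple threshold-plus-direction check plausible.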

Aqara is launching a 7-inch wall-mounted tablet and home hub combo it calls the Panel Hub S1 in addition to the Touchscreen Dial V1 and Touchscreen Switch S100, three unintuitive names for products that aim to act as souped-up light switches. The devices can be installed in lieu of light switches to control smart home devices connected via a home’s local Thread and Matter networks. This is the promise of Matter: interoperability so that any device can tie into a smart home ecosystem without connecting to one of the big three platforms. Each device features a touchscreen, but the Panel Hub S1 has the largest. It reminds me of Apple’s rumored HomePod with a screen, except perhaps much cheaper. The Dial V1 has a scroll wheel to control devices, and the Touchscreen Switch occupies the space of one switch with a screen for more details. All three products are shipping in the first quarter of the year.

Google announced Gemini is coming to third-party TVs via Google TV, the company’s smart TV software that certain TV manufacturers like Hisense pre-install on their devices. Gemini previously was confined to the Google TV Streamer, Google’s latest set-top box that replaced the Chromecast to much chagrin last year, but now the company is bringing it to all Google TV-enabled televisions. I think this makes more sense than Copilot because Google TV in and of itself is a streaming platform with its own recommendation engine, so Gemini could answer questions about certain items or recommend what to watch.
The Star of the Show: Nvidia

Nvidia’s Monday evening presentation was perhaps the most exciting, hotly anticipated event of the day. The keynote attracted attention like nothing I’ve seen in recent CES history, with nearly 100,000 people tuning in to the live stream and 14,000 attending in Las Vegas — 2,000 above the arena’s capacity. Nvidia, after the launch of ChatGPT and its subsequent competitors, quickly rose to become the world’s most valuable technology company thanks to the GPUs it sells for AI training. At CES, the company announced its latest gaming GPU line, the RTX 50-series, as well as other AI-focused processors.
The RTX 50-series GPUs are powered by Nvidia’s Blackwell processor architecture. The new highest-end card, the RTX 5090, can perform up to 4,000 trillion AI operations per second, delivers 380 teraflops of ray tracing performance (one teraflop is 10 to the 12th power floating-point operations per second), and offers 1.8 terabytes per second of memory bandwidth. The company claims the 5090 is two times faster than its predecessor, the RTX 4090, in gaming tasks thanks to so-called tensor cores — components of the card reserved for AI processing — and the next generation of Nvidia’s deep learning super sampling, or DLSS, AI-powered upscaling.
But perhaps the most awe-inspiring part of the keynote was when Jensen Huang, Nvidia’s chief executive, said the RTX 5070 — currently the lowest-end card in the lineup — matches the RTX 4090’s performance in most tasks. For context, the 4090 is currently the most performant consumer graphics processor in the world and takes up an enormous amount of volume in a computer case, but if Nvidia is to be believed, the smallest, cheapest card in its new flagship lineup now matches its performance. That’s bananas.
Nvidia announced pricing for the new cards, too: $2,000 for the RTX 5090, $1,000 for the 5080, $750 for the 5070 Ti — a slightly upgraded version of the 5070 — and a mind-boggling $550 for the 5070. The highest-end 4090 from last year cost $1,500, meaning new buyers can save $1,000 and get an equally performant card. This feat even led Huang to claim that his company’s processors are defying Moore’s law, the observation that the number of transistors in a processor doubles roughly every two years. I am unsure if such a bold claim is true, but either way, Nvidia’s latest processors are incredible, and Huang said many times during the keynote that none of it would be possible without AI, which now does the heavy lifting in upscaling.
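Running the rough numbers on these claims (prices as given above; the equal-performance premise is Nvidia’s and hasn’t been independently verified):

```python
# If the $550 RTX 5070 really matches the $1,500 RTX 4090, performance
# per dollar jumped ~2.7x in one generation. A Moore's law cadence of
# doubling every two years would predict roughly 2x over the same span.
rtx_4090_price, rtx_5070_price = 1500, 550
perf_per_dollar_gain = rtx_4090_price / rtx_5070_price
moores_law_gain = 2 ** (2 / 2)  # one doubling over a two-year generation

print(round(perf_per_dollar_gain, 2))          # 2.73
print(perf_per_dollar_gain > moores_law_gain)  # True: ahead of the curve
```

Of course, Moore’s law is about transistor counts, not retail pricing, so this is Huang’s framing more than a rigorous comparison; still, it shows why the claim sounds plausible on stage.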
The company also announced a plethora of large language and video models designed to generate synthetic training data for new, smaller models. The language models, based on Meta’s Llama 3.1 and called the “Llama Nemotron” foundation models, are fine-tuned for enterprise use and for generating training data. Nvidia calls the video model Cosmos and says it is the first AI model that “understands the real world,” including textures, light, gravity, and object permanence. (Nvidia Cosmos was trained on 20 million hours of video to achieve this, but I wonder where the video came from.) Both models aim to help Nvidia achieve infinite AI scaling by feeding smaller models data generated by the advanced ones. For instance, Huang said Nvidia Cosmos could simulate “millions of hours on the road” with “just a few thousand miles” of real driving to feed a self-driving computer, because not every scenario can be staged in the real world.
This formed the overarching theme of Nvidia’s presentation: scrape the entirety of human knowledge and use it to generate more. But I have always thought of this strategy as AI inbreeding, as crude as that may sound. If the quality of training data is poor, the output will be too, and the vicious cycle continues until the result is nonsensical. Each pass through a model adds distortion — it’s like children playing a game of telephone. But Huang says this is the reason AI has no wall; whether he and his company should be believed, only time will tell. While Nvidia Cosmos and the Nemotron LLMs are available for public use — and open-source on GitHub — they are aimed at enterprise customers running Nvidia processors to develop their own models.
To create these models, Nvidia needed a lot of compute power, so it built a new supercomputer architecture called Grace Blackwell, powered by “the most powerful chip in the world,” according to Nvidia. The processor, which has 130 trillion transistors, is not intended for purchase, but Nvidia scaled the Grace Blackwell architecture down to a Mac mini-sized, $3,000 supercomputer available to consumers. The computer, called Project Digits, is the “world’s smallest AI supercomputer,” according to Nvidia, and is capable of running 200 billion-parameter models. It is powered by the GB10 Superchip and features 128 GB of unified memory, 20 Arm cores, and up to 4 TB of storage, together achieving one petaflop of performance.
The announcement of Project Digits and Grace Blackwell was probably the most exciting part of Monday at CES. The promise of a personal supercomputer has always been elusive, and this time, it genuinely appears as if it will be available soon. Nvidia says Project Digits will be available for purchase in May, and the RTX 50-series in the first half of 2025.
The first day of CES is always packed, but this year’s conference felt off. Much of it felt like a rehashing of last year’s show. Perhaps that’s just me, but the vibes are underwhelming.
About Meta’s Outrageous Apple DMA Interoperability Requests
Don’t pretend this is about choice
Foo Yun Chee, reporting for Reuters:
Apple on Wednesday hit out at Meta Platforms, saying its numerous requests to access the iPhone maker’s software tools for its devices could impact users’ privacy and security, underscoring the intense rivalry between the two tech giants.
Under the European Union’s landmark Digital Markets Act that took effect last year, Apple must allow rivals and app developers to inter-operate with its own services or risk a fine of as much as 10% of its global annual turnover.
Meta has made 15 interoperability requests thus far, more than any other company, for potentially far-reaching access to Apple’s technology stack, the latter said in a report.
“In many cases, Meta is seeking to alter functionality in a way that raises concerns about the privacy and security of users, and that appears to be completely unrelated to the actual use of Meta external devices, such as Meta smart glasses and Meta Quests,” Apple said.
Meta hasn’t released these interoperability requests itself, leaving the onus on Apple to truthfully represent Meta’s interests, but Andrew Bosworth, Meta’s chief technology officer, alluded to what they might be about on Threads:
If you paid for an iPhone you should be annoyed that Apple won’t give you the power to decide what accessories you use with it! You paid a lot of money for that computer and it could be doing so much more for you but they handicap it to preference their own accessories (which are not always the best!). All we are asking for is the opportunity for consumers to choose how best to use their own devices.
It’s obvious that Meta wants its iOS apps to interact with Meta Quests and glasses (“accessories”) better and more intuitively. But let’s look at the list of features Meta asked for through interoperability requests, as written in Apple’s white paper titled “It’s getting personal”1 as a response to the European Commission, the European Union’s executive agency:
- AirPlay
- App Intents
- Apple Notification Center Service, which is used to allow connected Bluetooth Low Energy devices to receive and display notifications from a user’s iPhone
- CarPlay
- “Connectivity to all of a user’s Apple devices”
- Continuity Camera
- “Devices connected with Bluetooth”
- iPhone Mirroring
- “Messaging”
- “Wi-Fi networks and properties”
Apple puts the list quite bluntly in the white paper:
If Apple were to have to grant all of these requests, Facebook, Instagram, and WhatsApp could enable Meta to read on a user’s device all of their messages and emails, see every phone call they make or receive, track every app that they use, scan all of their photos, look at their files and calendar events, log all of their passwords, and more. This is data that Apple itself has chosen not to access in order to provide the strongest possible protection to users.
Third-party developers can accomplish most of what they want from these iOS features with the application programming interfaces Apple already provides. They can use AirPlay to cast content from their apps to nearby supported televisions, use App Intents to power widgets and shortcuts, use ANCS to display notifications from a user’s iPhone on a connected device, make apps for CarPlay, use Continuity Camera in their own Mac apps, view devices connected via Bluetooth, send messages with the system share sheet via the UIActivityViewController API, and view details of nearby Wi-Fi networks. All of this is already available within iOS, with ample developer and design documentation.
For instance, if Meta wanted to create an easy way to set up a new pair of Meta Ray-Ban glasses, it could use the new-in-iOS-18 API called AccessorySetupKit, demonstrated at this year’s Worldwide Developers Conference to display a native sheet with quick access to Bluetooth, near-field communication, and Wi-Fi. There’s no need to get access to a user’s connected Bluetooth devices or Wi-Fi networks — it’s all done with one privacy-preserving API. As Apple puts it in its developer documentation:
Use the AccessorySetupKit framework to simplify discovery and configuration of Bluetooth or Wi-Fi accessories. This allows the person using your app to use these devices without granting overly-broad Bluetooth or Wi-Fi access.
From this Apple-presented feature interoperability list, I can’t think of much Meta would want that isn’t already available. The only features I can reasonably understand are iPhone Mirroring and Continuity Camera, but those are Apple features made for Apple products. Meta could absolutely build a Continuity Camera-like app that beamed a low-latency video feed from a connected iPhone to a Meta Quest headset, as Camo did for Apple Vision Pro. That’s a third-party app made with the APIs Apple provides today, and it works flawlessly. Similarly, a third-party iPhone Mirroring app called Bezel on visionOS and macOS works like a charm and has for years before Apple natively supported controlling an iPhone via a Mac. These apps aren’t new and work using Apple’s existing APIs.
Meta’s interoperability requests are designed as power grabs, much like the DMA is for the European Commission. At first, it’s confusing to laypeople why Meta and Apple feud so often, but the answer isn’t so complicated: Meta (née Facebook) missed the mobile revolution when it happened in 2009, was caught flat-footed when social media blew up on the smartphone, and suddenly found itself making most of its money on another company’s platform. Mark Zuckerberg, Meta’s founder, isn’t one to play anything but a home game, so instead of working with Apple, he actively worked against it for the last decade. Facebook changed its name to Meta in 2021 to emphasize its “metaverse” project — now an artifact of the past replaced by artificial intelligence — because it didn’t want to play on another company’s turf anymore.
Now, Meta as an organization has a gargantuan task: to transition from a decade-long away game to a home game. This transition perfectly coincided with the launch of App Tracking Transparency and Apple Vision Pro, two thorns in Meta’s side that further complicate what’s already a daunting feat. If Meta wants to play its own game — to have its cake and eat it too — it needs to make its own hardware and software, and to transition from Apple hardware and software to its own, it needs Apple’s cooperation and favor, which it has never curried in its existence. Meta knows there’s no chance these interoperability requests will ever be approved, and it knows the DMA isn’t on its side, but it’s filing them anyway to elicit this response from Apple. I’m honestly surprised Meta decided to slyly provide a cheap-shot statement to Reuters instead of cooking up its own blog post written by Zuckerberg himself to turn this into an all-out war.
The default response from any company ready to pick a fight with Apple is always that Cupertino cites privacy as a means to justify anticompetitive behavior. Apple has had enough of this, as evidenced by this passage in its white paper:
But the end result could be that companies like Meta — which has been fined by regulators time and again for privacy violations — gains unfettered access to users’ devices and their most personal data.
Scathingly bitter. Grammatically incorrect (“companies like Meta… gains”) — the team writing this really could’ve used Apple Intelligence’s Proofread feature — but scathing.
Anyone who has talked to a layperson about Meta’s products in the last few years knows that they’re all concerned about Meta snooping on their lives. “Why are my ads so strangely specific? I just searched that up.” “I hear Meta doesn’t care about my privacy.” “Instagram is listening to my conversations through my microphone.” Generally, however, most people think of Apple as privacy-conscious, so much so that they store their secrets in Apple Notes, knowing that nobody will ever be able to read them. No amount of marketing or conditioning can achieve this — Meta is indisputably known as a sleazy company whereas Apple is trusted and coveted. (This is also why it’s an even bigger deal when Apple Intelligence summarizes and prioritizes scam text messages and emails.)
Meta, Spotify, and Epic Games — Apple’s three largest antitrust antagonists — love to talk a big game about how dissatisfied people supposedly are with how much control Apple exerts over their phones, but I’ve only ever heard the opposite from real people. When I explain that Apple blocks camera and microphone access for all apps while the device is asleep, they breathe a sigh of relief. Apple’s got my back. Nobody but the nerdiest of nerds on the internet ever complains that their iPhone is too locked down — most people are more wary of spam, scams, and snooping. For the vast majority of iPhone users, the primary concern isn’t that their phone is too locked down, but that it isn’t locked down enough.
Meta has never built a reputation for caring about people’s privacy, so it never understood how important that is to end users. Most people aren’t hackers living in “The Matrix” — they just don’t want to feel like they’re passing through a war zone of privacy-invading bombs whenever they check Instagram. There is and always will be a good argument for reducing Apple’s control over iOS, but whatever Meta’s advocating for here isn’t that argument. Where I’m willing to cede some ground is when it comes to apps Apple purposefully disallows due to their payment structure or content. I think Xbox Game Pass should be on the iPhone, and so should clipboard managers and terminals. If Apple doesn’t want to host these apps, let registered developers sign them without downloading third-party app signing tools. This is uncontroversial — what isn’t is giving a corporation known for disregarding privacy as even a concept unfettered access to people’s personal information.
The issue isn’t choice as Meta apologists proclaim it to be, evidenced by Meta’s very anticompetitive, anti-choice smear campaign in 2021 against App Tracking Transparency. “Let us show permission prompts” is a nonsense request from a company that took out full-page newspaper ads just a few years ago against the very idea of permission prompts. Meta isn’t serious about protecting privacy or letting people choose to share their information with Zuckerberg’s data coffers, but it is serious about turning iOS into an “open” web that benefits the interests of multi-billion dollar corporations. No person with a functioning brain would believe Meta — whose founder said it needed to “inflict pain on Apple” — is now interested in developing features with Apple via interoperability requests. The fact that the European Union even entertains this circus is baffling to me.
-
A question on Bluesky from Jane Manchun Wong, one of the best security researchers, led me on a quest to find where this white paper came from. I found it via Nick Heer on Mastodon, who told me it came from Bloomberg. I have no idea who Apple sent it to originally, but it isn’t posted on its newsroom or developer blog, which is odd. ↩︎
A 20-Inch iPad is Completely Unnecessary
Mark Gurman, reporting for Bloomberg:
Apple designers are developing something akin to a giant iPad that unfolds into the size of two iPad Pros side-by-side. The Cupertino, California-based company has been honing the product for a couple of years now and is aiming to bring something to market around 2028, I’m told…
It’s not yet clear what operating system the Apple computer will run, but my guess is that it will be iPadOS or a variant of it. I don’t believe it will be a true iPad-Mac hybrid, but the device will have elements of both. By the time 2028 rolls around, iPadOS should be advanced enough to run macOS apps, but it also makes sense to support iPad accessories like the Apple Pencil.
It is my impression that much of Apple’s current work on foldable screen technology is focused on this higher-end device, but it’s also been exploring the idea of a foldable iPhone. In that area, Apple is the only major smartphone provider without a foldable option: Samsung, Alphabet Inc.’s Google, and Chinese brands like Huawei Technologies Co. all have their own versions. But I wouldn’t anticipate a foldable iPhone before 2026 at the earliest.
Two 11-inch iPad Pros side by side wouldn’t actually yield a 22-inch diagonal — diagonals don’t add linearly, and the combined panel would measure closer to 16 inches — while two 13-inch iPad Pros side by side work out to roughly 19 inches, in line with Gurman’s claim that the device will be closer to 20 inches in size. Either way, a 20-inch device is almost unfathomably massive: just ask anyone with a 16-inch laptop. Even Apple’s large MacBook Pros are too unwieldy for my taste, and 20 inches is too large for any productive use. Here’s my line of thought: Try to think of something that can’t be done with a 13-inch iPad Pro but that could be on one 7 inches larger — it’s impossible. The only real use case I can think of is drawing and other art, but drawing pads larger than 20 inches are usually laid out on large art tables or easels. A 20-inch iPad wouldn’t even be able to fully expand on an airplane tray table, where people are more likely to want a small, foldable, portable device.
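The geometry is worth checking, because diagonals don’t simply add. Treating each iPad as a roughly 4:3 panel doubled along its short edge (an approximation; the real panels are slightly narrower):

```python
import math

def combined_diagonal(diagonal, aspect=(4, 3)):
    """Diagonal of two portrait tablets placed side by side."""
    long_side, short_side = aspect
    hyp = math.hypot(long_side, short_side)
    height = diagonal * long_side / hyp   # portrait height
    width = diagonal * short_side / hyp   # portrait width
    return math.hypot(2 * width, height)  # doubled width, same height

print(round(combined_diagonal(11), 1))  # 15.9 -> two 11-inch panels
print(round(combined_diagonal(13), 1))  # 18.7 -> two 13-inch panels,
                                        # near the reported "closer to 20 inches"
```

So the “two iPad Pros side-by-side” description only reaches the 20-inch ballpark if Gurman means the 13-inch models.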
Rumors of a large foldable iPad have been floating around for years now, but the expectation was always that it would work as a Mac laptop, with the bottom portion of the tablet functioning as a keyboard when positioned like a laptop. That also didn’t make much sense to me, but Gurman’s idea that the device would only run iPadOS is perhaps even more perplexing. Even if we (remarkably) assume iPadOS becomes the productivity operating system of champions in a few years, a 20-inch iPad seems over-engineered. iPad apps can only make use of so much space because, ultimately, they’re sized-up iPhone apps with some desktop-class interface elements. Again, there’s nothing someone can’t do with a 13-inch iPad Pro that suddenly would become possible with a larger model.
So that brings the conversation to a head: What apps will this folding iPad run? Gurman writes that the answer is Mac apps, and the first time I read his passage, I audibly giggled. That’s nonsense. I’m supposed to believe Apple’s operating system teams are working on a way to run AppKit code on the iPad without optimizing the Mac’s user interface idioms for a touchscreen? How does that even remotely make sense? I’m cautious about discounting Gurman’s reporting; whenever I have, I’ve been proven wrong. In Gurman we trust. But the way Gurman writes this sentence — specifically his use of the word “should” — leads me to believe this is some speculation on his part.
Apple knows Mac apps can’t run on iPadOS — it knows this so well that it disables touchscreen support in Sidecar, the Mac mirroring feature introduced on the iPad a few years ago. The only way to interact with a Mac from an iPad in Sidecar is via the Apple Pencil, because that’s a precise tool akin to a mouse cursor. Conversely, iPad apps can run on the Mac because it’s only a minor inconvenience to move the mouse cursor a few more pixels than usual to hit iPad-sized touch targets. On the iPad, running Mac apps is an impossibility; on the Mac, running iPad apps is a mere inconvenience. Apple can build a way to run Universal-compiled Mac apps on the iPad — it successfully jury-rigged a way to run UIKit apps on Intel Macs with Mac Catalyst, née Project Marzipan — but it cannot automatically resize UI touch targets to fit a 20-inch iPad. The problem doesn’t lie in iPadOS’ lack of technological advancement.
Alternatively, Gurman is wrong about what OS this product runs. This could mean one of two things: it runs an entirely new OS, or it runs macOS. I think neither of these options is likely; Gurman is probably right that it’ll run iPadOS, knowing Apple. I don’t have evidence to support that conclusion, but from years of studying Cupertinoese, it’s just the Apple thing to do. If it ain’t broke, don’t fix it. I just don’t think this new flavor of iPadOS will run Mac apps or be enticing at all to customers. Mull over that thought for a bit: When have iPadOS’ limitations ever stemmed from hardware? Since the 2018 iPad Pro redesign, never. Twenty inches, 30 inches, however many inches — it doesn’t solve the problem, and it won’t sell more iPads. Even if Apple added full-blown AppKit Mac app support to the iPad — which will never happen, mark my words — the best way to experience Mac apps at close to 20 inches is a 16-inch MacBook Pro or, to sacrifice portability for size, a Studio Display.
So all we’re left asking is whether this really is a folding Mac laptop, and I call that entire thought chain nonsense. It’s time to put that rumor to rest. Apple makes the best laptops in the world, with premium tactile trackpads, great keyboards, and beautiful, large screens. Why would it trade all of that for a touchscreen? Pause the thought train: I’m not pompously proclaiming Apple won’t make a 20-inch foldable. I think it will, and I think it’ll be a 20-inch iPad running the same boring, useless flavor of iPadOS we have today. But it’s not going to run macOS, a hybrid between macOS and iPadOS, or even Mac apps on iPad software. This is the larger iPad “Studio” that’s been rumored intermittently for years, and frankly, it has no purpose.
The good news is, there’s a new Magic Mouse in the works. I’m told that Apple’s design team has been prototyping versions of the accessory in recent months, aiming to devise something that better fits the modern era… Apple is looking to create something that’s more relevant, while also fixing longstanding complaints — yes, including the charging port issue.
Innovation.
Google Launches the Terribly Named Gemini 2.0 Flash LLM
Abner Li, reporting for 9to5Google:
Just over a year after version 1.0, Google today announced Gemini 2.0 as its “new AI model for the agentic era.”
CEO Sundar Pichai summarizes it as such: “If Gemini 1.0 was about organizing and understanding information, Gemini 2.0 is about making it much more useful.” For Google, agents are systems that get something done on your behalf by being able to reason, plan, and have memory.
The first model available is Gemini 2.0 Flash, which notably “outperforms 1.5 Pro on key benchmarks” — across code, factuality, math, reasoning, and more — at twice the speed.
It supports multimodal output like “natively generated images mixed with text” for “conversational, multi-turn editing,” and multilingual audio that developers can customize (voices, languages, and accents). Finally, it can natively call tools like Google Search (for more factual answers) and code execution.
To even begin to understand this article, it’s important to recall the Gemini model hierarchy:
- The highest-end model is presumably, for now, still Gemini 1.0 Ultra. There isn’t a Gemini 1.5 version of this model — 1.5 was introduced in February — but it’s still the most powerful one according to Google’s blog post from then. The catch is that I can’t find a place to use it; it’s not available with a Gemini Advanced subscription or the application programming interface.
- Gemini 2.0 Flash is the latest experimental model, and it outperforms all other publicly available Gemini models, according to Google. It doesn’t require a subscription for now.
- Gemini 1.5 Pro was the second-best model, behind only 1.0 Ultra, until Wednesday morning. It’s available to Gemini Advanced users.
- Gemini 1.5 Flash is the free Gemini model used in Google’s artificial intelligence search overviews.
- Gemini Nano is used on-device on Pixel devices.
I assume a Gemini 2.0 Pro model will come in January when 2.0 Flash comes out of beta, but Google could always call it something different. Either way, Gemini 2.0 is markedly better than the previous versions of Gemini, which underperformed the competition by a long shot. GPT-4o and Claude 3.5 Haiku continue to be the best models for most tasks, including writing both code and prose, but Gemini 2.0 is better at knowledge questions than Claude because it has access to the web. Truth be told, the large language model rankings I posted Tuesday night are pretty messy after the launch of Google’s latest model: I still think Claude is better than Gemini, but not by much and only in some cases. Neither is as good as ChatGPT, though, which is the most reliable and accurate.
No subscription is necessary to use 2.0 Flash, but whenever 2.0 Pro comes out — presumably requiring a subscription — I feel it’ll fare better than Claude’s 3.5 Sonnet, the higher-end model that sometimes does worse than the free version. I subscribed anyway, but I don’t know if I’ll continue paying, because Gemini doesn’t have a Mac app — not even a bad web app like Claude.1 Still, I’m forcing myself to use it over Claude, which I’ve used for free as a backup to my paid ChatGPT subscription whenever OpenAI inevitably fails me. Gemini does have an iOS app, though, and I think it’s better than Claude’s. (I admittedly don’t use any chatbot but ChatGPT on iOS.) The real reason I paid for Gemini Advanced is Deep Research:
First previewed at the end of Made by Google 2024 in August, you ask Gemini a research question and it will create a multi-step plan. You will be able to revise that plan, like adding more aspects to look into.
Once approved and “Start research” is clicked, Gemini will be “searching [the web], finding interesting pieces of information and then starting a new search based on what it’s learned. It repeats this process multiple times.” Throughout the process, Gemini “continuously refines its analysis.”
I admittedly don’t do a lot of deep research in my life, but I think this will be a much better version of Perplexity, which I begrudgingly use after its chief executive discounted the work of journalists on the web. (Typical Silicon Valley grifters.) It’s interesting to see Google use Gemini 1.5 Pro for this agentic work after touting 2.0 Flash as a “new AI model for the agentic era.” Why not introduce the new feature with the new model? Typical Google. Qualms aside, I like it, and I’ll try to use it whenever I can over regular Google Search, which continues to decline significantly in quality. It really does feel like Google is internally snatching people from the Search department and moving them over to Gemini.
Project Mariner is the last main initiative Google announced on Wednesday, and it reminds me of Anthropic’s demonstration a few months ago:
Meanwhile, Project Mariner is an agent that can browse and navigate (type, scroll, or click) the web to perform a broader task specified by the user. Specifically, it can “understand and reason across information in your browser screen, including pixels and web elements like text, code, images and forms.”
This is vaporware at its finest. A general rule of thumb when assessing Google products: whenever the company prepends “Project” to anything, it’ll never ship. Nor do I want it to ship, because the best way to interact with third-party tools is not by clicking around on a computer but by using APIs. Google has a bunch of private APIs born from deals with the most important web-based companies, like Expedia, Amazon, and Uber — if any company has the leverage to build an agentic version of Gemini, it’s Google, which basically owns the web and most of its traffic. Nobody needs fancy mouse cursors — that’s an idea for The Browser Company.
1. I’ve created a Safari web app for it on my Mac, and even that is better than Anthropic’s garbage. ↩︎
You’re Next, Qualcomm
Mark Gurman, leaking the timeline for Apple’s custom modems at Bloomberg:
Apple Inc. is preparing to finally bring one of its most ambitious projects to market: a series of cellular modem chips that will replace components from longtime partner — and adversary — Qualcomm Inc.
More than half a decade in the making, Apple’s in-house modem system will debut next spring, according to people familiar with the matter. The technology is slated to be part of the iPhone SE, the company’s entry-level smartphone, which will be updated next year for the first time since 2022…
For now, the modem won’t be used in Apple’s higher-end products. It’s set to come to a new mid-tier iPhone later next year, code-named D23, that features a far-thinner design than current models. The chip will also start rolling out as early as 2025 in Apple’s lower-end iPads…
In 2026, Apple looks to get closer to Qualcomm’s capabilities with its second-generation modem, which will start appearing in higher-end products. This chip, Ganymede, is expected to go into the iPhone 18 line that year, as well as upscale iPads by 2027…
In 2027, Apple aims to roll out its third modem, code-named Prometheus. The company hopes to top Qualcomm with that component’s performance and artificial intelligence features by that point. It will also build in support for next-generation satellite networks.
In the middle of this timeline — which, alas, isn’t written in a nice bulleted or ordered list as Axios would do, but in Bloomberg’s house style — Gurman slips in this very Bloomberg detail:
Qualcomm has long been preparing for Apple to switch away from its modems, but the company still receives more than 20% of its revenue from the iPhone maker, according to data compiled by Bloomberg. Its stock fell as much as 2% to a session low after Bloomberg News reported on Apple’s plans Friday. It closed at $159.51 in New York trading, down less than 1%.
I’ve attributed most of Intel’s post-2020 slump to the loss of Apple as a partner. People like to claim Apple wasn’t an important or large customer because the number of Mac units Apple sells each year pales in comparison to Intel’s other clients, but the number of end-user units is irrelevant. It’s undoubtedly true that Apple paid Intel lots of money and was one of its most important customers. Apple was always reliable: it wanted the latest Intel processors in Macs each year, and it wanted them quickly. When Intel was behind or underwater, it could always have confidence that Apple would be a reliable, recurring source of income. In 2020, that changed, and now the company is doing so poorly that it fired Pat Gelsinger, its chief executive since 2021, as a vote of no confidence, so to speak.
It’s not wrong to argue that the primary reason for Intel’s latest downfall is that it never developed processors for smartphones, ceding that ground to Qualcomm and Apple, but I have a feeling Intel would’ve been fine if it still had Apple as a partner. It lamented the loss of Apple — sourly1 — because it realized just how much losing such a reliable buyer would hurt. Partners come and go all the time, but if Intel felt it wouldn’t hurt after Apple’s departure, it wouldn’t cook up attack ads featuring Justin Long, who famously played the Mac in Apple’s clever “Get a Mac” marketing campaign. That was a move born out of sheer desperation; Intel has been desperate since 2021.
Now, back to Qualcomm. Before this story, I was under the assumption that Qualcomm made the vast majority of its revenue from its mobile processor business — the popular Snapdragon chip line. That business is a major part of Qualcomm’s revenue, but not the vast majority of it. Either way, I severely underestimated how much it would hurt Qualcomm to lose Apple as a partner. Qualcomm makes more than 20 percent of its total revenue from just one company, one trading partner. Because of that, I think I’m ready to make a rather bold prediction: 2026 will be to Qualcomm what 2020 was to Intel. Once Apple starts shipping its own modems in the standard and Pro-model iPhones, it’s game over for Qualcomm. Apple wasn’t Intel’s biggest customer, but it was strategically the most important, and I feel the same is true for Qualcomm.
But clearly, building modems is much harder than designing Arm-based microprocessors, as evidenced by how long it’s taken Apple to build its own. Apple has been trying to compete with Qualcomm since the two companies got into a spat back in 2018, when a Chinese court ruled Apple infringed on Qualcomm’s patents. Whereas Intel and Apple were historically friends, the same can’t be said for Qualcomm — the two companies have been in fierce competition since that kerfuffle, and it’s going to come to a head in just a few months when Apple launches its first modem, ideally without much fanfare. If the next-generation iPhone SE is just as reliable as previous models, Apple has a winner, and Qualcomm will inevitably sweat.
To make matters even worse, Qualcomm is currently embroiled in a lawsuit with Arm, which licenses its designs to Qualcomm, which then modifies them and has them fabricated by Taiwan Semiconductor Manufacturing Co. Arm has already canceled Qualcomm’s license to produce chips with Arm designs, and if it wins in court this month, that cancellation will be set in stone. The reaction to this problem has mostly been tame — tamer than I believe it should be — because the industry is sure Arm is shooting itself in the foot by making an enemy of arguably its most important customer, but this is bad for Qualcomm, too. It’ll probably switch over to the RISC-V (pronounced “risk-five”) instruction set, but that’s a drastic change. Add the loss of Apple’s business to the mix, and the company is in deep trouble.
It’s possible Qualcomm weathers the impending storm better than Intel did, because Qualcomm is arguably in a much better financial position. Qualcomm’s chips aren’t behind — they’re competitive with the very best iPhone-grade Apple silicon, and they’re popular amongst flagship Android manufacturers. The same couldn’t be said for Intel back in 2020, which was slipping on its latest processors and faced fierce competition from Advanced Micro Devices. But the relatively recent talk of Qualcomm potentially buying Intel seems almost nonsensical after Gurman’s Friday report, and the chip design market seems more volatile than it has ever been in recent history.
Also from Gurman today:
Apple Inc.’s effort to build its own modem technology will set the stage for a range of new devices, starting with slimmer iPhones and potentially leading to cellular-connected Macs and headsets.
According to this report, Apple’s main concern in bringing cellular connectivity to the Mac is space, and that’s addressed by its own modems. Initially, this struck me as unbelievable, since Mac laptops ought to have tons of room inside for a tiny modem that fits even in the Apple Watch, but perhaps an iPhone-caliber modem isn’t powerful enough to handle the networking requirements of a Mac? I’m really unsure, but part of me still believes it’s feasible to stuff a Qualcomm modem into a MacBook Pro, at least. In any event, I’m a fan of this development, even as someone who doesn’t use their Mac outside, in the wild, very often. When I do, however, I typically rely on iPhone tethering, and that’s just a mess of data caps and slow speeds. I’d love it if I could tack a cheap add-on onto my existing iPhone cellular plan for a reasonable amount of data on my Mac each month.
I understand the appeal of a cellular-connected Apple Vision Pro less, but if it works, it works. Either way, Qualcomm is screwed since not only is it not receiving the mountain of reliable cash that comes with an iPhone deal, but it’s also not able to profit from Apple’s new cellular ventures.
The Browser Company Had Something Great — Then They Blew It
Jess Weatherbed, reporting for The Verge:
The Browser Company CEO Josh Miller teased in October that it was launching a more AI-centric product, which a new video reveals is Dia, a web browser built to simplify everyday internet tasks using AI tools. It’s set to launch in early 2025.
According to the teaser, Dia has familiar AI-powered features like “write the next line,” — which fetches facts from the internet, as demonstrated by pulling in the original iPhone’s launch specs — “give me an idea,” and “summarize a tab.” It also understands the entire web browser window, allowing it to copy a list of Amazon links from open tabs and insert them into an email via written prompt directions.
“AI won’t exist as an app. Or a button,” a message on the Dia website reads. “We believe it’ll be an entirely new environment — built on top of a web browser.” It also directs visitors to a list of open job roles that The Browser Company is recruiting to fill.
The name “Dia” says most of what’s noteworthy here: The Browser Company’s next product isn’t a browser at all. It’s an agentic, large language model-powered experience that happens to load webpages on the side. Sure, it’s a Chromium shell, but the primary interaction isn’t meant to be clicking around on hypertext-rendered parts of the web — rather, The Browser Company envisions people asking the digital assistant to browse for them. It’s wacky, but The Browser Company has already been heading in this direction for months now, beginning with the mobile version of Arc, its flagship product. Now, it wants to ditch Arc, which served as a fundamental rethinking of how the web worked when it first launched a year ago.
The Browser Company’s whole pitch is that, for the most part, our lives depend on the web. That isn’t a fallacy — it’s true. Most people do their email, write their documents, read the news, and use social media all in the browser on their computer. While the app mentality remains overwhelmingly popular and intuitive on mobile devices, the browser is the platform on the desktop. Readers of this website might disagree with that, but by and large, for most people, the web is computing. I don’t disagree with The Browser Company’s idea that the web needs to be thoroughly rethought, and I also think artificial intelligence should play a role in that rethinking.
ChatGPT, or LLM-powered assistants generally, shouldn’t be confined to a browser tab or even a Mac app — they should be intertwined with every other task one does on their computer. If this sounds like an operating system, that’s because The Browser Company thinks the web is basically its own OS, and it’s hard to argue with that conclusion. Most websites these days perfectly fit the definition of an “app,” so much so that some of the biggest desktop apps are just websites with fancy Electron wrappers. For a while, Arc had been building on this novel rethinking of the web, and while some have begrudged it, I mostly thought it was innovative. Arc’s Browse for Me feature, AI tab reorganization, and desktop tab layouts were novel, exciting, and beautiful. The Browser Company had something special — and that’s coming from someone who doesn’t typically use Chromium browsers.
Then, Miller, The Browser Company’s chief executive, completely pivoted. Arc would go into maintenance mode, and major security issues were found weeks later. It wasn’t good for the company, which once had a real thing going. I listened to his podcast to understand the team’s thought process and to get an idea of where Arc was headed, and I came to the conclusion that a much simpler version of Arc, perhaps juiced with AI, would come to market in a few months. The Browser Company had a problem: Arc was too innovative. So here’s what I envisioned: two products, one free and one paid, for different segments of the market. Arc would become paid and continue to revolutionize the web, whereas “Arc 2.0,” as Miller called it, would become the mass-market, easy-to-understand competitor to Chrome. It’s just what the browser market needed.
That vision was wrong.
Now, Arc and the stunningly clever ideas it brought are dead, replaced by a useless, flavorless ChatGPT wrapper. Take this striking example: Miller asked “Dia” to round up a list of Amazon links and send them in an email to his wife. The “intelligence” began its email with, “Hope you’re doing well.” Who speaks to their spouse like that? This isn’t a browser anymore — it’s AI slop. I understand that the video The Browser Company published demonstrates a prototype, but writing emails isn’t the job of a browser. Search should be Dia’s main goal, and the ad didn’t even discuss it in any enticing way. Instead, it demonstrated AI doing things, something I will never trust a robot with. Booking reservations, creating calendar events, writing emails — sure, this is busywork, but it’s important busywork. Scrolling through Google’s 10 blue links is busywork that’s actually in need of abstraction.
This hard pivot from innovative ideas and designs to run-of-the-mill AI nonsense serves as a rude awakening: seemingly no start-up can succeed without ruining its product with AI in the process. Again, I don’t think it’s the AI’s fault — it’s just that there’s no vision here other than venture capital money. A browser should stick to browsing the web well, and Dia isn’t a browser. There’s no place for a product like this.
What’s the Deal With the iPhone 17 Lineup?
Chance Miller, reporting for 9to5Mac on a semi-detailed leak from The Information about Apple’s rumored ultra-slim iPhone 17, supposedly coming next year:
A new report from The Information today once again highlights Apple’s work on an ultra-thin “iPhone 17 Air” set to launch next year. According to the report, iPhone 17 Air prototypes are between 5 and 6 millimeters thick, a dramatic reduction compared to the iPhone 16 at 7.8 mm…
The Information cites multiple sources who say that Apple engineers are “finding it hard to fit the battery and thermal materials into the device.” An earlier supply chain report also detailed Apple’s struggles with battery technology for the iPhone 17 Air…
Additionally, the report says that the iPhone 17 Air will only have a single earpiece speaker because of its ultra-thin design. Current iPhone models have a second speaker at the bottom.
My initial presumption months ago was that the device was just being misreported as an ultra-slim iPhone and is instead a vertically folding one, but that has no chance of being right this late into the rumor cycle. So this is an ultra-thin iPhone, and it looks like it’ll take the place of iPhone 16 Plus — which took iPhone 13 mini’s slot a year earlier. Apple seems to be having a hard time selling this mid-tier iPhone: both the iPhone mini and iPhone Plus are sales flops because most people buy the base-model iPhone or step up to an iPhone Pro or Pro Max. The only catch is the price: If rumors are to be believed, this will be the new most expensive iPhone model next year, which means it wouldn’t be the spiritual successor to the iPhone mini and iPhone Plus but a new class of iPhone entirely. That makes the proposition a lot more confusing.
The whole saga reminds me of an ill-fated Apple product: the 2015 MacBook, lovingly referred to as the MacBook Adorable. It cost more than the MacBook Air at the time yet was a considerably worse product: it had only an underpowered Intel Core M processor and a single port for both data and charging, and it shipped with terrible battery life. The MacBook Adorable was a fundamentally flawed product, thermally throttling during even the most basic computing tasks, and it was discontinued years later. The MacBook Adorable was a proof of concept — a Jony Ive-ism — and not an actual computer, and I’m afraid Apple is going for Round 2 with this iPhone 17 Slim, or whatever it’s called. It’s more expensive than the base-model iPhone but is rumored to ship with no millimeter-wave 5G, one speaker, an inferior Apple-made modem, a lower-end processor, and only one camera. Even the base-model iPhone ships with two cameras: an ultra-wide and a main sensor.
Granted, if the iPhone Slim costs $900, we’d have a marginally different story. It still wouldn’t be good to sell a worse phone for more money, but it’d at least make sense. The iPhone Slim would be an offering within the low-end iPhone class, separate from the Pro models, almost like the Apple Watch Ultra, which is updated less frequently than the regular Apple Watch models and is thus worse in some respects, yet is more expensive. But pricing it above the Pro Max while offering significantly fewer features just doesn’t jibe with the rest of the iPhone lineup, which I currently think is no less than perfect. Think about it: Right now, customers can choose between two price points and two screen sizes. It’s a perfect, Steve Jobs-esque 2-by-2 grid: cheap little, cheap big, expensive little, and expensive big. Throw in the iPhone SE and some older models at discounted prices, and the iPhone lineup is the simplest and best it can be.
But throw the iPhone Slim into the mix, and suddenly, it gets more convoluted. If it’s priced at $900 — what the iPhone 16 Plus costs now — then it’d make more sense to save $100 and get a better device. In other words, it slots into the current lineup imperfectly, and nobody will buy it. Conversely, if it’s positioned above the Pro phones, say at $1,200, it becomes an entirely new class of its own, separate from the base-model iPhones — a class nobody wants because it’s inferior to every other iPhone model. The only selling point of the iPhone Slim is how thin it is — and really, 5 to 6 millimeters is thin. But is being thin seriously a selling point? If being small, or being large and cheap, weren’t selling points for the mid-range iPhone, I don’t see how being thin yet more expensive is one, either. The whole proposition of the phone makes no sense to me, especially after watching the hard fall of the MacBook Adorable. Part of my brain still wants to think this is some sort of foldable iPhone — either that, or it’s some permutation of the iPhone SE.1
Also peculiar from this report, Wayne Ma and Qianer Liu:
Apple’s other iPhone models will also undergo significant design changes next year. For instance, they’ll all switch to aluminum frames from stainless steel and titanium, one of the people said.
The back of the Pro and Pro Max models will feature a new part-aluminum, part-glass design. The top of the back will comprise a larger rectangular camera bump made of aluminum rather than traditional 3D glass. The bottom half will remain glass to accommodate wireless charging, two people said.
The Information is a reliable source with a proven track record; when AppleTrack was a website, it had The Information at a whopping 100 percent rumor accuracy. Yet I find this rumor incredibly hard to believe. Apple has shipped premium materials — either stainless steel or titanium — on the expensive models since the iPhone X to separate them from the base-model iPhones. The basic design of the iPhone — to the chagrin of some people — has remained unchanged since the iPhone X: an all-glass back with premium metallic sides. Now, the two reporters say next year’s iPhone will be “part aluminum, part glass,” a description that’s weirdly reminiscent of the Pixel 9 Pro. Why would Apple make a hard cut from aluminum to glass? And why would it be aluminum in the first place, when one of Apple’s main Pro iPhone selling points is its “pro design”? It doesn’t make even a modicum of sense to me how this design would look. A split metal-glass back is uncanny and nothing like what Apple would make. For now, I’m chalking this up to a weird prototype that was never meant to see the light of day.
1. I haven’t written about the next-generation iPhone SE much, mostly because there isn’t much to write home about, but I think it’ll be a good phone, even with a price bump. It’ll compete well with the Pixel 9a and Nothing Phone (2). I don’t think it needs the Dynamic Island or even an ultra-wide camera for anything under $500, so long as it uses the A18 processor and ships with premium materials. The iPhone 14’s design isn’t that long in the tooth, either. ↩︎
Gurman: LLM-Powered Siri Slated for April 2026 Release
Mark Gurman, reporting for Bloomberg:
Apple Inc. is racing to develop a more conversational version of its Siri digital assistant, aiming to catch up with OpenAI’s ChatGPT and other voice services, according to people with knowledge of the matter.
The new Siri, details of which haven’t been reported, uses more advanced large language models, or LLMs, to allow for back-and-forth conversations, said the people, who asked not to be identified because the effort hasn’t been announced. The system also can handle more sophisticated requests in a quicker fashion, they said…
The new voice assistant, which will eventually be added to Apple Intelligence, is dubbed “LLM Siri” by those working on it. LLMs — a building block of generative AI — gorge on massive amounts of data in order to identify patterns and answer questions.
Apple has been testing the upgraded software on iPhones, iPads, and Macs as a separate app, but the technology will ultimately replace the Siri interface that users rely on today. The company is planning to announce the overhaul as soon as 2025 as part of the upcoming iOS 19 and macOS 16 software updates, which are internally named Luck and Cheer, the people said.
To summarize this report, Siri will gain what ChatGPT has had since fall 2023 — a conversational, LLM-powered voice experience. People, including me, initially compared it to ChatGPT’s launch in November 2022, but that isn’t an apples-to-apples comparison, since ChatGPT didn’t ship with a voice mode until a year later. Either way, Apple is effectively two and a half years late, and when this conversational Siri ships, presumably as part of next year’s Apple Intelligence updates, GPT-5 will probably be old news. ChatGPT’s voice mode, right now, can search the internet and deliver responses in near real time, and I’ve been using it for all my general knowledge questions. It’s even easy to access with a shortcut — how I do it — or a Lock Screen or Control Center control.
Meanwhile, the beta version of Siri that relies on ChatGPT is also competitive, though it’s harder to use: most of the time, Siri tries to answer by itself, so queries must be prefaced with “Ask ChatGPT” — at which point it’d be a better use of time to just tap one button and launch ChatGPT’s own app — and the ChatGPT feature isn’t conversational. The other day, I asked, “Where is DeepSeek from?” and Siri answered the question by itself. I then followed up with, “Who is it made by?” and Siri went to ChatGPT for an answer but came back with, “I don’t know what you’re referring to by ‘it.’ Could you provide the name of the product or service you’re wondering about?” Clearly, the iOS 18.2 version of Siri is way too confident in its own answers and doesn’t know how to prompt ChatGPT effectively. The best voice assistant on the iPhone is ChatGPT’s voice mode via a shortcut or a Lock Screen control.
Personally, I think Apple should just stop building conversational LLMs of its own. It’s never going to be good at them, as evidenced by the fact that Siri’s ChatGPT integration is so haphazard that it can’t even relay basic questions. A few weeks ago, when Vice President Kamala Harris was scheduled to be on “Saturday Night Live,” I asked Siri when the show would start. Siri responded by telling me when “SNL” first began airing: October 11, 1975. I had to rephrase my question as “Ask ChatGPT when ‘SNL’ is on tonight,” and only then did it use ChatGPT to give me a real-time answer, including sources at the bottom. Other times, Siri was good at handing queries off to ChatGPT, but it should be much more liberal about doing so — I should never have to prefix any of my questions with “Ask ChatGPT.” The point is, if Apple really wanted to build a conversational version of Siri, it could use its (free) partner, ChatGPT, or even work with OpenAI to build a custom version of GPT-4o just for Siri. OpenAI is eager to make money, and Apple could easily build a competitive version of Siri by the end of the year with the tools it’s shipping in the iOS beta right now.
I’ll say it now, and if it ages poorly, so be it: Apple’s LLMs will never be half as good as even the worst offerings from Google or OpenAI. What I’ve learned from using Apple Intelligence over the past few months is that Apple is not a talented machine learning company. It’s barely adequate. Apple Intelligence notification summaries are genuinely terrible at reading tone and understanding the nuances of human communication — they make for funny social media posts, but they’re just not that useful. I now have them turned off for most apps since I don’t trust them to summarize news alerts or weather notifications; they’re really only useful for email and text messages. And about that: I read most of my email in Mimestream, which can’t take advantage of Apple Intelligence even if it wanted to, because there aren’t any open application programming interfaces for developers to bring Apple Intelligence to their apps. Visual Intelligence is lackluster, Writing Tools are less capable than ChatGPT and aren’t available in many apps on the Mac, and don’t even get me started on Genmoji, which is almost too kneecapped to do anything useful.
Apple Intelligence, for now, is a failure. That could change come spring 2025, when Apple is rumored to complete the rollout, but who knows how much ChatGPT will improve over the next six months. It isn’t just that April 2026 is too late for an LLM-powered Siri — it’s that the product won’t be any good even then. Apple doesn’t have a proven track record in artificial intelligence, and it’s struggling to build one.