The Google Graveyard Expands

Google, writing in an email to me Monday morning:

Hi Eshu,

Thank you for being a Google One member where you enjoy extra storage, family sharing, and more. We’re writing to let you know about some updates coming to your Google One subscription starting on May 15. These changes are designed to streamline your benefits while ensuring you have a valuable subscription experience…

Phasing out two benefits: With a focus on providing the most in-demand features and benefits, we’re discontinuing free shipping for select print orders from Google Photos (in Canada, the UK, US, and EU) starting on May 15 and VPN by Google One later this year.

My subscription price is still the same, so I guess it’s time to add another product to the Google Graveyard. Speaking of Google One and the Google Graveyard, here is Abner Li, reporting for 9to5Google:

Google is now “discontinuing the VPN feature as [they] found people simply weren’t using it.” The company tells 9to5Google that the deprecation will let the team “refocus” and “support more in-demand features with Google One.”

Earlier this year, Google One hit 100 million subscribers and CEO Sundar Pichai teased it as a future growth area driven by AI. Today’s change follows this week’s news about AI editing tools in Google Photos going free in the coming months and no longer requiring a subscription save for unlimited Magic Editor usage.

Google One included a virtual private network service, which the company launched in 2020 for reasons that were never entirely clear, and now that’s gone too. And again, users’ subscription prices are staying the same. Anyone who subscribed to Google One, especially on a yearly plan, is being prematurely cut off because Google doesn’t care about supporting products.

Again, add it to the Google Graveyard.

Paywalls Are Unnecessary

Richard Stengel, writing for The Atlantic:

How many times has it happened? You’re on your computer, searching for a particular article, a hard-to-find fact, or a story you vaguely remember, and just when you seem to have discovered the exact right thing, a paywall descends. “$1 for Six Months.” “Save 40% on Year 1.” “Here’s Your Premium Digital Offer.” “Already a subscriber?” Hmm, no.

Now you’re faced with that old dilemma: to pay or not to pay. (Yes, you may face this very dilemma reading this story in The Atlantic.) And it’s not even that simple. It’s a monthly or yearly subscription—“Cancel at any time.” Is this article or story or fact important enough for you to pay?

Or do you tell yourself—as the overwhelming number of people do—that you’ll just keep searching and see if you can find it somewhere else for free?

According to the Reuters Institute for the Study of Journalism, more than 75 percent of America’s leading newspapers, magazines, and journals are behind online paywalls. And how do American news consumers react to that? Almost 80 percent of Americans steer around those paywalls and seek out a free option.

Paywalls create a two-tiered system: credible, fact-based information for people who are willing to pay for it, and murkier, less-reliable information for everyone else. Simply put, paywalls get in the way of informing the public, which is the mission of journalism. And they get in the way of the public being informed, which is the foundation of democracy. It is a terrible time for the press to be failing at reaching people, during an election in which democracy is on the line. There’s a simple, temporary solution: Publications should suspend their paywalls for all 2024 election coverage and all information that is beneficial to voters. Democracy does not die in darkness—it dies behind paywalls.

I’d go one step further than Stengel did: I think paywalls are almost entirely unnecessary for regular news websites. There should be an exception for small firms or publications that mainly focus on analytical coverage or opinion pieces, such as The New Yorker or The Atlantic, even though there is irony in an article about paywalls being hidden behind one, as Stengel rightly points out. I say this because analysis isn’t information per se in the way hard news and reporting are. People interested in analysis or a columnist’s views on a topic are far more likely to pay for that information — it’s hard news that needs to stay balanced and free.

Paywalls are a relic from a time when newspapers were bought from newsstands daily and read in coffee shops across the country. Every day, people would buy a newspaper for a quarter, read it, then leave it for the next person to read — and so on, every day. Newspaper publishers saw an opportunity to sell subscriptions to these papers to frequent readers, who perhaps could save some money by buying the newspaper at a discounted monthly or yearly rate as opposed to buying a new paper every day. When the online era took over and as people bought fewer physical newspapers, publishing houses moved to sell those subscriptions online.

As inflation rose, however, newspaper subscriptions became more expensive. Now, The New York Times costs $25 a month because the physical copy costs $4 per paper ($4 a day for 30 days is $120 a month, so the digital version is still cheaper, obviously). In parallel, news became more affordable thanks to online advertising and free journalism, leading many to drop their subscriptions and consume free news from digital-only outlets, like The Verge or NBC News. The result is a divide between free and paid journalism, and in the quality of information that emanates from each.

This is the core of Stengel’s piece: the online media landscape rewards clickbait because free articles spread so quickly: everyone can read them, so they metastasize across the internet. But this abundance of free, semi-factual journalism is warping the public’s perception of the news media because good journalism just isn’t being sold and marketed properly. Online media companies often make more money than the print publishers that sell subscriptions, simply because of how quickly their information spreads on social media. Why aren’t print publishers picking up on this?

I firmly believe more websites should drop the paywall entirely and instead sell effective, non-intrusive advertising, which can be quite lucrative. Newspapers themselves are fantastic proof of this: they sell full-page ads in innovative formats that advertisers are itching to cash in on. This is part of the game of journalism in 2024 — if we want the media to be a reliable arbiter of information, without relegating the ever-important job of reporting to citizen journalists on social media sites like X or Threads, the corporate overlords who control the media should get better at making it profitable.

The news industry is at a crossroads, with the hastened development of generative artificial intelligence in newsrooms and the growing popularity of foreign-owned video platforms like TikTok, where more Americans than ever are getting their news. At a critical time like this, journalism should become more accessible, innovative, and forward-looking — and removing paywalls is a key step in that direction.

The Humane Ai Pin Is a Disaster

I have written about the Humane Ai Pin twice before: once when the product was first announced in November, and again in March, when the company released a video walkthrough of the device’s features. Now the reviews are in, both on YouTube and the web, and they’re scathing. The Ai Pin is — and I cannot stress this enough — utter garbage, and that isn’t me jesting. To support this claim, here are some quotes from reviewers who have spent time with the pin:

“It’s a nightmare.” — Arun Maini, Mrwhosetheboss

“It’s just frustrating.” — Michael Fisher, MrMobile

“It’s futuristic if the future sucked.” It “solves nothing and makes me feel stupid.” — Cherlynn Low, Engadget

“It just doesn’t work.” — David Pierce, The Verge

That is just a small snippet of the criticism this device has received in just one day of reviews. I agree with these reviewers — this product lacks conviction, lacks a path to success, and doesn’t even do what it is advertised to do. As one commenter on YouTube put it, it feels like a late April Fools’ Day joke. It costs as much as a mid-range smartphone at $700, requires a mandatory $24 monthly subscription to function as anything more than a paperweight1, and relies on the whims of artificial intelligence to do literally anything. And when it does do something, it often does it wrong: misidentifying landmarks, as in Pierce’s review, or making up information, as shown in Fisher’s video.

AI software can be refined and tweaked over time2. What can’t be, however, is the very design premise of this gadget. Its primary method of interaction is a loudspeaker that is bound to annoy everyone around, and connecting a Bluetooth headset requires interacting with a flawed, frustrating laser projector. Reviewers have described the projector as annoying, hard to use, and simply impossible to view in daylight. Interacting with content requires learning unintuitive gestures and flailing movements of the arms. Due to this oversight, which I picked up on in November, interacting with this menace of a lapel pin causes arm strain and annoyance.

Also, the battery dies quite frequently, making Humane’s “Perpetual Power System,” i.e., extra batteries, essential for using the product longer than a few hours, according to early tests. And if you do use it for extended periods, it’ll overheat, as Fisher demonstrated during a call with his mother and as Pierce encountered while using the laser projector. In short, the product is impossible to use most of the time, and useless when it is possible to use. The onboard camera, used for first-person shots, is low-quality and lackluster; it’s neat in a pinch, but no reasonable, sane person would say it is worth $700 plus $24 a month.

Humane’s founders, Bethany Bongiorno and Imran Chaudhri, have marketed the Ai Pin as, at the very least, a smartphone companion, and at times have grown even more confident since the flopped November launch, calling the product a smartphone replacement for an age of increased distraction. To back up this moot point — people love their phones — Bongiorno has been reposting accounts citing a book with debatable sourcing that claims smartphones are the sole cause of childhood and adolescent depression, when in fact the rise in those numbers is correlated with the uptake in smartphone use, not caused by it. When confronted with her earlier claims that the Ai Pin was marketed as a phone replacement, she flat-out denied making them.

Nobody should take these claims seriously, because this entire project feels like a scheme for investor money. When the original plan for a more ambitious product failed, Humane pivoted to AI large language models and built this device in the four months after the release of ChatGPT in November 2022. Trust the reviewers because they’re experts — and the experts say the Humane Ai Pin is a worthless piece of garbage. So do I.


  1. Canceling the subscription after purchasing the $700 hardware will render the Humane Ai Pin entirely useless: it ceases to function unless the owner keeps paying Humane $24 a month in perpetuity. That also means if Humane ever goes out of business, customers will be left with boxes of metal and plastic that cost them $700. ↩︎

  2. “Never ever buy a product based on the promise of future software updates.” — Marques Brownlee, MKBHD ↩︎

Starlink Terminals Caught Being Smuggled Into Russia

Thomas Grove, Nicholas Bariyo, Micah Maidenberg, Emma Scott, and Ian Lovett, reporting for The Wall Street Journal:

A salesman at Moscow-based online retailer shopozz.ru has supplemented his usual business of peddling vacuum cleaners and dashboard phone mounts by selling dozens of Starlink internet terminals that wound up with Russians on the front lines in Ukraine.

Although Russia has banned the use of Starlink, the satellite-internet service developed by Elon Musk’s SpaceX, middlemen have proliferated in recent months to buy the user terminals and ship them to Russian forces. That has eroded a battlefield advantage once enjoyed by Ukrainian forces, which also rely on the cutting-edge devices.

The Moscow salesman, who in an interview identified himself only as Oleg, said that most of his orders came from “the new territories”—a reference to Russian-occupied parts of Ukraine—or were “for use by the military.” He said volunteers delivered the equipment to Russian soldiers in Ukraine.

On battlefields from Ukraine to Sudan, Starlink provides immediate and largely secure access to the internet. Besides solving the age-old problem of effective communications between troops and their commanders, Starlink provides a way to control drones and other advanced technologies that have become a critical part of modern warfare.

The proliferation of the easy-to-activate hardware has thrust SpaceX into the messy geopolitics of war. The company has the ability to limit Starlink access by “geofencing,” making the service unavailable in specific countries and locations, as well as through the power to deactivate individual devices.

Russia and China don’t allow the use of Starlink technology because it could undermine state control of information, and due to general suspicions of U.S. technology. Musk has said on X that to the best of his knowledge, no terminals had been sold directly or indirectly to Russia, and that the terminals wouldn’t work inside Russia.

The Wall Street Journal tracked Starlink sales on numerous Russian online retail platforms, including some that link to U.S. sellers on eBay. It also interviewed Russian and Sudanese middlemen and resellers, and followed Russian volunteer groups that deliver SpaceX hardware to the front line.

Anyone who seriously believes Musk lacks the ability to properly restrict these terminals to specific regions via geofencing is genuinely stupid, and probably a Russian asset. Here is how Russians get access to Starlink, even though the terminals aren’t authorized for use in Russia: First, smugglers buy Starlink hardware in Middle Eastern countries, like the United Arab Emirates, then activate the terminals for use anywhere in the world — a roaming subscription SpaceX, the company that operates Starlink, sells. The smugglers then market the terminals, at a markup, on sites like eBay, shipping the hardware to nearby, Kremlin-friendly countries. From there, the terminals and receivers are carried over the border as if they were drugs or any other contraband. “Patriotic” Russians then wheel them to the front lines, where Russian soldiers are too careless even to camouflage the bright white plastic terminals.
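The geofencing the Journal describes is conceptually simple: at its core, a point-in-region check against each terminal’s reported position. Here is a minimal sketch of that idea in Python, with the caveat that every name and coordinate below is hypothetical and purely illustrative; real enforcement would use precise polygons and happen on SpaceX’s side of the network, not on the terminal.

```python
# Toy geofencing check: deny service when a terminal's reported position
# falls inside a blocked region. Region names and coordinates are made up.

BLOCKED_REGIONS = [
    # (name, min_lat, max_lat, min_lon, max_lon) -- crude bounding boxes
    ("example-blocked-zone", 44.0, 53.0, 31.0, 41.0),
]

def service_allowed(lat: float, lon: float) -> bool:
    """Return False if the coordinates fall inside any blocked region."""
    for _name, lat0, lat1, lon0, lon1 in BLOCKED_REGIONS:
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return False
    return True

print(service_allowed(50.0, 36.0))   # inside the blocked box
print(service_allowed(40.7, -74.0))  # well outside it
```

Since the satellites already track each terminal’s position in real time, a check like this is trivial to run per session; the hard part is policy, not engineering.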

Ukraine, which is fighting a brutal war with Russia, also has access to Starlink, provided through SpaceX’s contractual obligations with the U.S. Defense Department, which require SpaceX to deploy Starlink hardware to U.S. allies in need, like Taiwan and Ukraine. Granted, Musk neutered Ukraine’s access to these important terminals, which provide internet access in Russian-occupied areas, when it was needed most, but the service remains available there in some capacity thanks to the U.S. contracts. Bloomberg reported Wednesday that the Defense Department pays $23 million for this deal, though the U.S. official who leaked the information to Bloomberg declined to say whether the United States would renew it with Musk. This kind of smuggling should concern the United States because of those deals, so in March, House Democrats sent a letter to SpaceX over the illegal import of Starlink terminals by Russia, which aid Putin in controlling drones and commanding troops. Apparently nothing came of that letter, and just a month later, The Journal reported on the continued illegal use of Starlink terminals in Russia.

Again, only a fool would believe Musk and his company have no clue about the illegal use of these products in Russia — smart people work for SpaceX, and upwards of 5,000 satellites orbit Earth, tracking the positions of terminals in real time, according to The Journal’s report. Musk has the tools at his disposal to halt Russia’s unlawful use of terminals to destroy civilian buildings in Ukraine and illegally occupy sovereign territory, but he never will, because he himself is a Russian asset parroting propaganda straight from Moscow on X, his social media website. However many pro-Kremlin Republican puppets sit in Congress, the United States should exercise its leverage and contracts with SpaceX to force Musk’s firm to comply with U.S. law and disable enemy use of Starlink service. The United States has given Musk a free pass on contracts for too long — if it wants to continue doing business with the world’s richest Russian propagandist, it needs to shove him down on his knees and make him beg for the money he wants.

Google Docs Are (Mostly) Safe from AI Scraping

Katie Notopoulos, reporting for Business Insider regarding Google possibly scraping Google Docs for use as training data for Gemini, Google’s artificial intelligence chatbot:

A representative for Google confirmed to Business Insider that simply changing the share settings to “anyone with the link” did not mean that a document was “public” and would be used for AI training.

To be “publicly available,” that document would need to be posted on a website or shared on social media. Basically, some kind of web crawler would need to be able to find it. That can’t happen with a file just emailed back and forth between two people — like if you send your friend a link over Gmail, for instance, Google said.

Breathe a sigh of relief.

I still think this is mildly disingenuous, since there is no way to opt out of AI training, unlike on a traditional website, where disallowing AI crawlers from indexing a page is as simple as adding a few lines to the site’s robots.txt file. Not to mention, there is no clarification in either Google Docs’ privacy policy or terms of use on how Google might use your documents to train its AI models. I hope Google makes improvements and clarifies its policy in the near future. But for now, unless you’re publicly publishing a Google Doc to the web, there is no need to worry.
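For comparison, here is what that opt-out looks like on the open web. A couple of lines in robots.txt are enough to tell a compliant AI crawler to stay away, and Python’s standard-library parser can verify the policy. GPTBot is OpenAI’s real crawler user agent; the site and policy shown are just an example.

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt: block OpenAI's GPTBot crawler site-wide,
# while leaving every other crawler unrestricted.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is shut out; a generic crawler is not.
print(parser.can_fetch("GPTBot", "https://example.com/some-article"))
print(parser.can_fetch("SomeOtherBot", "https://example.com/some-article"))
```

Nothing comparable exists for a private Google Doc, which is exactly the asymmetry at issue here: the opt-out lives with the site owner, and Docs users aren’t site owners.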

Lawmakers Draft Data Privacy Regulation

Cristiano Lima-Strong, reporting for The Washington Post:

Key federal lawmakers Sunday unveiled a sweeping proposal that would for the first time give consumers broad rights to control how tech companies like Google, Meta, and TikTok use their personal data, a major breakthrough in the decades-long fight to adopt national online privacy protections. The bipartisan agreement, struck by Senate Commerce Committee Chair Maria Cantwell (D-Wash.) and House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), marks a milestone in the congressional debate over data privacy. The issue has befuddled lawmakers despite near-universal agreement — in Silicon Valley and in Washington — on the need for federal standards to determine how much information companies can collect from consumers online.

The measure, a copy of which was reviewed by The Washington Post, would set a national baseline for how a broad swath of companies can collect, use, and transfer data on the internet. Dubbed the American Privacy Rights Act, it also would give users the right to opt out of certain data practices, including targeted advertising. And it would require companies to gather only as much information as they need to offer specific products to consumers, while giving people the ability to access and delete their data and transport it between digital services.

Significantly, the deal — one of Washington’s most significant efforts to catch up to privacy protections adopted in Europe nearly a decade ago — would resolve two issues that have bogged down negotiations for years: whether a federal law should override related state laws and whether consumers should be permitted to sue companies that violate the rules.

Europe’s General Data Protection Regulation, commonly known as GDPR, aimed to achieve exactly what Cantwell and McMorris Rodgers’ bill now aims to do in the United States, but GDPR just makes users’ experience on the internet worse. The U.S. bill, which has not been fully written, would give users the right to demand that companies delete their data — a crucial measure for data privacy in 2024. If you’re European, that might seem like common sense, but in the United States, consumers can only ask companies to delete their data, not force them to. This bill would change that.

Otherwise, the legislation is light on details, though I assume that will change as it gets written. It would allow users to opt out of targeted advertising, which is a potential cause for concern, although I imagine there will be a carveout for paid, advertisement-free subscriptions like the one Meta sells in Europe to comply with the Digital Markets Act and that is currently being challenged by the European Commission for some nonsense reason. And the mandate about restricting companies to only necessary data collection is essential to keep data brokers in check — brokers that collect massive amounts of data and sell it however they would like without any oversight or consumer consent.

Data portability regulation is also important, though it concerns me how the measure will be written in this regard. Consumers should have the right to request copies of their data in easily accessible formats — not the proprietary ones that the few businesses offering data exports usually supply — and to import that data across the finance and technology sectors. This regulation, however, should only apply to large corporations, as it may hinder innovation among smaller ones that need the data advantage the larger companies currently possess. (The new act does vary its rules depending on how large a company is, calculated via annual revenue.)

Color me skeptical, since data privacy regulation and monopoly checks are challenging pieces of legislation for a dysfunctional Congress to pass, but I think this new bill looks promising. Granted, the longer a bill sits in committee, the worse it tends to become, since most senators have no real grasp of technology. Combine that with the looming election in November, when Democrats might lose the Senate, and I am unsure this bill will ever reach the House and be signed by the president. But it goes without saying that there is a crucial need for good, knowledgeable data privacy regulation at the federal level, superseding the patchwork of poorly written state legislation that makes companies’ lives more difficult and confuses unwitting consumers.

Relatedly, Maryland passed similar data privacy regulations for its consumers Sunday, as well.

OpenAI, Google Trained AI Models on YouTube Videos and Google Docs

Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart Thompson, and Nico Grant, reporting for The New York Times:

In late 2021, OpenAI faced a supply problem.

The artificial intelligence lab had exhausted every reservoir of reputable English-language text on the internet as it developed its latest A.I. system. It needed more data to train the next version of its technology — lots more.

So OpenAI researchers created a speech recognition tool called Whisper. It could transcribe the audio from YouTube videos, yielding new conversational text that would make an A.I. system smarter.

Some OpenAI employees discussed how such a move might go against YouTube’s rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are “independent” of the video platform.

Ultimately, an OpenAI team transcribed more than one million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI’s president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4, which was widely considered one of the world’s most powerful A.I. models and was the basis of the latest version of the ChatGPT chatbot… Like OpenAI, Google transcribed YouTube videos to harvest text for its A.I. models, five people with knowledge of the company’s practices said. That potentially violated the copyrights to the videos, which belong to their creators.

Last year, Google also broadened its terms of service. One motivation for the change, according to members of the company’s privacy team and an internal message viewed by The Times, was to allow Google to be able to tap publicly available Google Docs, restaurant reviews on Google Maps, and other online material for more of its A.I. products.

As I’ve said many times previously, I do not think scraping content — even non-consensually — from the web to train AI models is illegal, since I think AI large language models are transformative. Granted, if an LLM reproduces text one-for-one, that is a concern because it is not fair use according to U.S. copyright law. But transformative use of copyrighted works is permitted under the law for a good reason. The best way to solve the kerfuffle between publishers, authors, and other creators and the AI companies hungry for their data is via comprehensive regulation written by experts — but knowing Congress, that will never happen. The current law is the law we will always have, and while it is not sufficient to address this new era of copyrighted works, it’s what we’re stuck with.

With that said, I am not necessarily upset at OpenAI for scraping public YouTube videos using Whisper to train GPT-4 — GPT-4 is not quoting YouTube videos verbatim and provides helpful information, which is more than enough to qualify as “fair use.” What I do have a problem with is Google’s implementation — in its reaction to OpenAI’s scraping, its own scraping, and its use of private Google Docs data. Google is the owner of YouTube, and YouTube users sign a contract with Google in order to use the service: the terms of service. So, due to this relationship between YouTube users and Google, Google has a responsibility to inform its users about how it’s using their data in the form of a privacy policy.

Unlike the terms of use, YouTube’s privacy policy says nothing about how Google can use YouTube videos to train Bard and its other LLMs. (I’ll get to Google Docs momentarily.) This creates two issues: (a) it makes it fair game for any person or company to scrape YouTube content, since Google never gave YouTube users an explicit guarantee that it would be the only company scraping their videos (or that their videos wouldn’t be scraped at all); and (b) it compromises users’ data without their knowledge. Neither issue puts Google in a legally compromising position, thanks to fair use, but it is not a good look for Google.

Looks matter for a company like Google, which must maintain an aura of privacy for users to feel confident handing it so much data. Unlike with Meta, people use Google services for all sorts of private matters, sharing personal family videos on YouTube and writing sensitive notes in Google Docs. On Meta’s services, every user has the expectation — aside from on services like WhatsApp and Messenger — that their data will be shared with the world, available for anyone to view or use however they like. Google promises privacy and security, and, for the most part, has delivered on that promise — but it can’t continue selling users on privacy when its actions directly contradict that pitch.

And about OpenAI: YouTube likes to say that OpenAI’s scraping violates its terms of use, which anyone who uses YouTube — including the OpenAI employees who scraped the data — has implicitly agreed to. But YouTube doesn’t really have the standing to enforce that particular rule, because the same terms also grant YouTube creators ownership of the content they publish. It cannot be against the terms of service for creators to do what they want with their own content; what if a creator wants OpenAI to have access to their videos? YouTube cannot meaningfully enforce this rule, and even if it wanted to, the argument would be shaky, because YouTube (Google) does the same thing without having granted itself such rights via the very terms of service it claims OpenAI has broken.

And then, there is Google Docs. Unlike the YouTube issue, this one is legally concerning. Google claims it only trains on data from users who opt into “experimental features,” which is to say, the features that let users use Bard to help write documents. That part of the agreement is well advertised; the part where Google grants itself the ability to access private user data to train AI models is only implicitly stated. Google does not ask users to sign a new service agreement to use Bard in Google Docs — it just includes in the main terms of service that if a user signs up for experimental features, their data may be used for training. That is sleazy.

It might not be illegal, but, as I said earlier, immorality is harmful enough. And it creates one more unnecessary problem for Google, in the form of a question: How is Google gaining access to private Google Docs data? Most users assume that what they write in Google Docs, Sheets, Slides, and so on is for their eyes only — private data that is, most likely, encrypted at rest. But if Google can mine and use it however it wants, it’s presumably being stored as plain text somewhere. LLMs can’t train on encrypted data because it is illegible, so Google is either decrypting Drive data for users who have opted in or storing everyone’s files in some unencrypted format.
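To make the encryption point concrete, here is a toy sketch. This is deliberately not real cryptography (a simple XOR stream stands in for AES or whatever Google actually uses at rest); the point is only that ciphertext is statistical noise to a language model, so any text a model trains on must exist as plaintext at that moment.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stream 'cipher' for illustration only -- NOT secure cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
doc = b"Q3 planning notes: a private document."

ciphertext = xor_bytes(doc, key)

# The ciphertext is illegible without the key, which is the point:
# a model can't learn language statistics from it.
print(ciphertext != doc)                  # scrambled
print(xor_bytes(ciphertext, key) == doc)  # decrypting round-trips
```

So either the training pipeline holds decryption keys, or the data was never meaningfully encrypted in the first place; neither answer is comforting.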

Whatever the case is, it is deeply concerning because it breaks trust. What happens to all of the people who no longer use Google Docs but have signed the terms of use that permit the usage of their old data that was written before the new agreement? Millions — hundreds of millions — of people are unwittingly sending their data straight to Bard’s language bank. The usage of the data itself may not be illegal, but the collection of it is immoral and might be a breach of the “contract” between Google and its users. I’m not particularly concerned about my data being used to train an LLM as long as that data is anonymized and obfuscated — and I think many people are the same — but it is wrong for Google to harvest this data and use it in ways users are unaware of.

Obviously, the best way to solve this problem is for Google to stop collecting Google Docs data — and perhaps YouTube data, though that is less pressing because it’s public, unlike private documents — or to amend its privacy policy to account for third parties like OpenAI. But all of that ignores a larger question: Where will training data for LLMs come from? Reputable websites such as The Times have blocked ChatGPT’s crawlers from ingesting their articles as training data, and eventually these robots will run out of internet to train on. That poses a large problem for LLMs, which cannot function without training data.

The solution proposed by some is to prompt LLMs to generate data for themselves, but anyone who knows how transformer models work will immediately recognize that this leads to heavily biased, inaccurate data. LLMs are not perfect now, and if they are trained on their own imperfect output, they will only become more imperfect, repeating the cycle. The only plausible solution I see is to make LLMs more efficient. Currently, AI companies are relying on research findings from 2020, which said, plainly, that the more data a model is fed, the more accurate it becomes. But transformer models have improved since then, to the point where they can even correct themselves using data from the web to prevent “hallucinations,” the phenomenon where a chatbot fabricates information that doesn’t exist or is wrong.

I predict that in the next few years, researchers will stumble upon a breakthrough: LLMs will be able to do their weighting and prediction without as much data, using the web to fact-check their findings. I’m not a scientist, but this industry is booming right now, and new ideas will come to the table soon. But for 2024, perhaps AI firms should look beyond private user data to train their models.

Gurman: Apple Exploring Home Robotics

Mark Gurman, reporting for Bloomberg:

Apple Inc. has teams investigating a push into personal robotics, a field with the potential to become one of the company’s ever-shifting “next big things,” according to people familiar with the situation.

Engineers at Apple have been exploring a mobile robot that can follow users around their homes, said the people, who asked not to be identified because the skunk-works project is private. The iPhone maker also has developed an advanced table-top home device that uses robotics to move a display around, they said.

Though the effort is still in the beginning stages — and it’s unclear if the products will ultimately be released — Apple is under growing pressure to find new sources of revenue. It scrapped an electric vehicle project in February, and a push into mixed-reality goggles is expected to take years to become a major moneymaker.

I strongly doubt this is anything moronic like Tesla’s humanoid robot, which was actually just a man in a jumpsuit, but rather a product akin to Amazon’s Astro robot or a robot vacuum. Later in the article, Gurman writes that the company was exploring a product that could wash dishes in a sink, but I think a product like that would be too audacious and wouldn’t fit in with Apple’s existing products.

Apple manufactures personal computers, whether they’re strapped to the face, sitting on a desk, strapped to the wrist, or in a pocket. A robot vacuum is a personal computer, but a humanoid contraption isn’t, because the graphical user interface isn’t what matters there — dexterity and motion are. Just as Apple doesn’t manufacture the production equipment Foxconn uses to make its iPhones, it also won’t make a fully functioning robot straight out of a science-fiction film.

I’d be interested to see what Apple makes in this product category, but I’m also skeptical, since these products haven’t been blockbuster hits before. Astro, for example, doesn’t have a killer use case since there aren’t that many things a floor robot with wheels can do. I think Apple should spend less time engineering these experimental technologies and instead focus on making its existing products more advanced, moving manufacturing away from China, and addressing Apple Vision Pro’s shortcomings.

These products, according to Gurman’s reporting, are aimed at the home only, which makes sense coming from a company like Apple. Instead of whatever Gurman outlines in his reporting, however, I think the way to approach this problem is to partner with existing home robotics companies, like iRobot, which manufactures the Roomba robot vacuum, and develop software that can integrate with HomeKit, Apple’s smart home platform. Imagine CarPlay but for other smart home products; they can run their own software made by their original manufacturers, but Apple can also contribute if users would like it to.

Apple makes software for its major hardware products, like the iPhone and the Mac, but these robots wouldn’t be their own stand-alone products — they would most likely need an iPhone or other Apple product to connect to and be controlled by. So in a way, it makes sense: Apple should work with third-party makers to make their products work better for Apple users, which is exactly how HomeKit operates now. That way, if the effort fails — as HomeKit has in many ways — it’s not on Apple to craft a remedy in the way it would be if Apple announced a robot of its own.

When Apple makes something, expectations are high, but if it works with another company, it isn’t as big a deal if the partnership fails. It will make headlines, but it will not be a failure for the company. Partnering with another firm may give Apple less to tout, but it also puts less pressure on it to innovate, something that is sorely needed after the relative bust of Apple Vision Pro, which received a grand total of two weeks of media coverage.

‘The Google Cycle’

David Pierce, writing for The Verge:

Google Podcasts is dead. It has been dying for months, since Google announced last fall that it was killing its dedicated podcast app in order to focus all its podcasting efforts on YouTube Music. This is a bad idea and a big downgrade, and I’d be more mad if only I were more surprised.

The Podcasts app is just the latest product to go through a process I’ve come to call The Google Cycle. It always goes the same way: the company launches a new service with grandiose language about how this fits its mission of organizing and making accessible the world’s information, quickly updates it with a couple of neat features, immediately seems to forget it exists, eventually launches a competitor out of some other part of the company, obviously begins to deprecate it and shift focus to the new competitor, and then, years later, finally shuts it down for real.

This is easily the best description of Google’s ridiculous sunsetting of products: advertise a product to infinity as the next big thing, completely neglect to update it, then axe it years later or fold some of its functionality into an existing, more popular product — that’s the Google way. Google Podcasts’ funeral on Tuesday reminded me of my favorite killed Google product: Inbox by Gmail. When it first came out, it was a new, futuristic, mobile-first client for Gmail — not meant to replace it, but to supplement it — and it was fantastic. In its early months, it required an invite from another Gmail user, just as Gmail itself did in 2004, building anticipation and excitement for a novel email client designed for smartphones first.

Google marketed Inbox to the fullest extent possible, and it had every reason to do so. Inbox pulled information from your Gmail inbox, then surfaced that information as if you had searched for it on Google — in other words, organizing the world’s information and making it universally accessible and useful, Google’s mission statement. But slowly, whether due to a lack of adoption or budget cuts within the company, Google began haphazardly adding Inbox’s features to the normal Gmail apps, showing small thumbnails for tickets or events in the email list. For all intents and purposes, this wasn’t Inbox; it merely looked like Inbox and didn’t come close to replacing it. People began using those features, naturally.

Then, Google said Inbox had run its course and it was time for it to go, redirecting people back to Gmail. Google wasted five years of everyone’s lives by making them switch to a new email client when it could have simply added the new features to Gmail from the very beginning. And this is not just one case: time and time again, Google has wasted customers’ time by announcing and marketing products that will eventually meet their death. These products are often unceremoniously discontinued, as in the case of Google Domains, which for years was a lower-priced competitor to Squarespace but, in the end, was sold to the very company it aimed to eclipse. This behavior weakens consumer trust.

After just a few of these incidents — Google abandoning or neglecting a product enjoyed by so many — customers no longer trust Google. Case in point: when users on the social media website X circulated a false message “from Google” claiming that Gmail was being sunsetted, people believed it so readily that Google had to issue a clarification that Gmail was indeed not going away. Google suffers from such terrible mismanagement that users are no longer incentivized to rely on its services, and that is a terrible thing for the market and for Google, whose services hundreds of millions of people rely on every day.

I don’t think Google will ever discontinue its core products, like Google Search, Gmail, Google Drive, and Android, purely because regulators would probably intervene, given the astronomical impact those decisions would have on the economy and society. But Google will continue to disappoint unsuspecting users and funnel them into more lucrative services to extract advertising revenue from them. That is why Google discontinued Podcasts, Inbox, and so many more of its services — it didn’t want to spend money on development, and it wanted more advertising revenue. Think about it this way: If users are scattered across three different services, selling advertisements on each one becomes (a) more difficult because each service has fewer users and (b) less lucrative because each placement will be viewed by fewer people. Concentration is in Google’s interests.

For now, nobody should place their trust in a new Google product, owing to Google’s lack of corporate management. Sundar Pichai, Google’s chief executive, has continuously proven himself to be a servile, incompetent leader, and the only reason Google’s market capitalization has grown over the past nine years is the market’s expansion — more people are buying smartphones and computers, and more people now need Google services. Pichai has not done a single good thing for Google post-Chrome, a project he helped lead. Silicon Valley start-ups like OpenAI and Anthropic are running laps around Gemini in the artificial intelligence chatbot department — Google’s entry was rushed out the door to compete with Microsoft, for heaven’s sake — Google Search is overrun with robot-written spam, and Google has no answer to Apple’s and Meta’s virtual reality products.

Google, by every measurement, is losing its dominance in the technology sector because it lacks corporate conviction. The job of a chief executive at a corporation as large as Google isn’t to write code in the break rooms — it’s to inspire the company to do good things, and Pichai has spectacularly failed in this regard. Pichai’s Google is slow to innovate, makes mediocre products, and needs a morale boost. Whoever can deliver that should probably take the top post within the company as soon as possible.

Why Are Tech Companies Donating to Republicans?

Rebecca Crosby and Judd Legum, reporting for Popular Information:

A new investigation by Popular Information, using state and federal campaign finance databases, found that 50 prominent corporations have donated $23,273,400 to the campaigns and political committees of these election deniers since January 6, 2021. Some of the largest contributors to election deniers are also some of the country’s leading companies, including AT&T, Comcast, Walmart, and Microsoft.

It wasn’t supposed to be this way. On January 4, a large group of business leaders signed onto a statement arguing that the planned objections to vote were destructive. “Congress should certify the electoral vote on Wednesday, January 6,” the business leaders wrote. “Attempts to thwart or delay this process run counter to the essential tenets of our democracy.” The Chamber of Commerce, which represents nearly every major corporation in America, released a similar statement.

As Popular Information comprehensively documented, in the aftermath of the attack on the U.S. Capitol, many of these corporations pledged to cut off support to members of Congress who voted to overturn the election.

Each of the mega-corporations Crosby and Legum list in their amazing piece vowed not to donate to election deniers who refused to certify the 2020 election results on January 6, 2021 — but donate they did, in large numbers. Each company sent hundreds of thousands of dollars to Republicans who threatened democracy, and when asked about their actions, the corporations said they donated to the campaigns in the interest of bipartisanship.

What I don’t understand is why left-leaning corporations — like Microsoft, which donated $112,500 to 29 election deniers during the 2022 midterm election — that value diversity, equity, and inclusion donated to hard-right Republicans, including House Speaker Mike Johnson of Louisiana, who not only voted to reject President Biden’s electoral win in 2020 but also has frequently derided DEI and the corporations that employ it, including Microsoft. Microsoft, according to Popular Information’s report, donated $15,000 to Johnson’s campaign, for some unknown reason. (Microsoft doesn’t base many of its operations in Louisiana, last I checked.)

These corporations are attempting to play a game of bipartisanship — which, generally, is a good thing, especially in the case of large companies that employ many people with diverse political views — while also donating to and advocating for the causes that these political candidates despise. Corporations have opinions — they should have opinions, especially political ones. Why don’t these traditionally diverse corporations spend their money instead championing Democrats or moderate Republicans who actually advance their agenda?

Walmart was also listed, but I understand Walmart donating to anti-abortion, pro-gun, capitalist Republicans because Walmart is still based in Arkansas and sells firearms at its stores; in other words, Walmart’s company culture is comparatively more conservative than liberal corporate America’s. The vast majority of corporate America is liberal because it is impossible to operate a multinational corporation without valuing diversity, inclusion, and moderately leftist economics.

This goes beyond the election deniers: Why are these companies donating to candidates they don’t agree with, even in the name of bipartisanship? Instead, they could advocate for new candidates who do advance their preferred agenda — tons of moderate Republicans are still running for the House of Representatives. You don’t see the National Rifle Association paying Democrats to table gun legislation, because the NRA knows that is an impossible task. Microsoft will never get a Republican to favor DEI in our current political climate (Republicans like Senator Mitt Romney of Utah are probably the exception), so why is it donating to them?

Liberal enterprises continue to waste their money (or use it counterproductively), whereas lobbying groups are incredibly effective at fund-raising, donating, and advancing their agendas. Maybe corporate America and Wall Street should learn from them to make a better world for all of us, because, in the end, what happens in Silicon Valley and New York affects our country more than what happens in Washington. That’s the United States — corporations win, always.

Threads, X, and the ‘Good Place’

Pamela Paul, writing in an opinion-editorial for The New York Times:

And now, after a mere 10 months, we can see exactly what we built: a full-on bizarro-world X, handcrafted for the left end of the political spectrum, complete with what one user astutely labeled “a cult type vibe.” If progressives and liberals were provoked by Trumpers and Breitbart types on Twitter, on Threads they have the opportunity to be wounded by their own kind.

Threads’ algorithm seems precision-tweaked to confront the user with posts devoted to whichever progressive position is slightly lefter-than-thou. It knows, for example, exactly where — on the left, bien sûr — you stand with regard to the Middle East, gender ideology, D.E.I., body positivity, neurodivergence, Covid and the creative industries and shows you posts screaming from whichever position is just far enough from your own to drive you out of your mind.

In this microverse, arguments you probably didn’t know existed (“Every time I see a white person in a kaffiyeh, I wonder: How much have you studied the issue?”) devolve into accusations around tokenism, solidarity and identity. There is something guaranteed to offend anyone who wants to get offended — or your money back. Confessions of emotional upheaval and mental health crises operate like a kind of currency, a surefire way to accrue cred.

Paul, like most Times columnists, is an idiot. The best phrase I can use to describe what Paul is facing here is “platform virtue signaling,” a term I’d like to coin now. Every platform — aside from the ones without algorithms, such as Mastodon — virtue signals in some way to promote the ideology its creators are biased toward. That is how algorithms work — it’s why ChatGPT is left-leaning and why telling Siri “All Lives Matter” used to get you a lesson about how the phrase is racist (it is). People who work in Silicon Valley are left-leaning — it’s no secret — and the platforms those people make are emblematic of those biases.

One exception since 2023 has been the social media website X, which, when it was called Twitter, used to be a left-leaning source of information. Until Elon Musk — the billionaire behind Tesla and SpaceX who has made it a goal of his to elect Republicans this year — acquired the platform, Twitter censored right-wing nuts, added disclaimers to misleading tweets that almost always came from right-leaning accounts, and served users tweets from leftists most of the time unless they explicitly followed right-wingers. But since Musk meddled with the algorithm and “relaxed” content moderation criteria, the algorithm has taken a sharp turn to the right — again, emblematic of the creators’ biases.

While you might not notice this on your “For You” feed if you lean to the left, it’s painfully obvious once you delve into a trend on the Explore page. Any trend is instantly inundated with right-wingers posting bombastic conspiracy theories, very few of which have Community Notes on them. It’s difficult to find left-wing discourse on current events on X these days — most trends are filled with commentary from the likes of Catturd, an ardent Make America Great Again follower, and Musk himself. This never used to be the case on Twitter; to find right-wing opinions on the platform, you would often have to dig deep into fringe replies on threads from leftists.

Most of this isn’t actually due to any substantive algorithmic changes, but because Musk’s X now prioritizes subscribers to the platform’s Premium and Premium+ services, who are almost always followers of Musk. Whenever you open a post on X, blue check-bearing accounts take the top spot, making it almost certain that you’ll be faced with wingnuts first.

But this isn’t even the point: Because X now prioritizes replies from people who subscribe to X Premium, anyone — including those on the left — can pay for a boost in engagement. Many progressive leftists do exactly this to get their point across because, like rightists, their ideas are woefully unpopular among the public. They need the extra engagement to stay relevant because their watermelon ideas are neither the most intelligent nor the most captivating. And as I said earlier, the “For You” feed prioritizes what you like the most, so if you’re a left-leaning Democrat or independent with moderate views on the Middle East, you’ll be inundated with “from the river to the sea” watermelons — not Zionists.

Again, this changes on the Explore page because it isn’t bespoke in the way the “For You” timeline is, but your main algorithmic feed on X as a leftist will be full of fringe, unpopular, frankly insane ideas from progressives. I’d assume this is the same (but on the opposite side of the spectrum) for people who lean to the right but aren’t full-blown MAGA nut jobs, but I have no way of telling because I mostly post leftist ideas.

This makes X an extremely uncomfortable, polarizing place. People, even on the internet, want to “fit in” with the crowd, so if they have no opinions on the Middle East but do have opinions on socialized health care, for example, they will find many fringe leftists showing support for Hamas or other Islamic terrorist organizations and will be cocooned into a bubble that only supports Palestinians. They might like one or two moderate posts, but that will just throw them into the chaos more as the X algorithm feeds them more posts from blue check-holding fringe left-wing nuts.

X is an extremely polarizing, junk-filled social network, and if you spend any longer than 15 minutes on it, you’ll feel frankly disgusted. You can feel your political opinions becoming more extreme against your will. While I’ve heard, and sometimes agree with, the arguments that ideological diversity in social network feeds is important, this is the very opposite of diversity: it makes moderates either far right or far left — and nowhere in between. Soon, those brainwashed people — many of whom are younger, like myself — will also begin quote-posting fringe accounts, and the cycle continues.

But of course, back to Paul’s column, which misses this point entirely: Threads, made by Meta’s Instagram, is not a place for intellectual discussion. While X gives way to fringe left- and right-wing nuts because it promotes real-time news conversation — a relic of the former Twitter — Threads sharply veers away from any kind of news conversation. Earlier in March, Instagram and Threads began filtering “political” content from the “For You” feeds automatically, with an option to turn off filtering buried in the Instagram application if users explicitly chose to view political content. While radical leftists on X cooked up conspiracy theories that because Mark Zuckerberg, Meta’s chief executive, is Jewish, he wanted to silence pro-Palestine views, Meta’s content chiefs said it was because they didn’t want to be involved with political content anymore.

This dissolution of politics from the world’s largest social media company is incredibly dangerous during an election year, and it also makes the platforms useless for any kind of discussion that requires social skills or historical knowledge. Threads is filled with the most intellectually unstimulating, boring, screenshotted content, which makes it feel just like Instagram, a platform used by some of the dumbest people with internet access. Twitter used to feel intelligent — Threads does not.

Even if you follow many people — I follow over 150 — you’ll find that your timeline is filled with the most mundane, boring, “viral” content. For instance, when Threads rolled out a feature to view trending topics, similar to X’s, earlier in March, the top trending phrase was “Spring Equinox,” complete with a flower emoji. It’s truly the most bottom-level, boring content to exist on a social network. Observing what’s trending on Threads doesn’t make you feel like the platform is feeding you a political ideology — let alone a progressive one, as Paul claims — it just makes you feel cheap. “Is this really what I’m doing with my time?”

The more time you spend on Threads, the more out of touch with our world climate the platform feels. Sure, X is filled with neo-Nazis, terrorist sympathizers, communists, and fascists, but Threads is filled with so-called “aggregator” accounts posting heavily compressed screenshots of old tweets. You know these tweets are old because they still feature the old Twitter interface, with the old blue checkmarks, fonts, and icons. Many of these tweets, I’ll admit, are funny and relatable, and that is exactly why they go viral on Threads, sometimes gaining tens of thousands of likes. But Threads doesn’t even give politics or world events the chance to go viral — the only political content I see gain traction is threads from President Biden and his campaign accounts.

On Threads, you’ll find most of your time is occupied by looking at badly edited photographs of sunsets from anonymous aggregator meme accounts, mildly interesting Tumblr posts from 2015, or a life story about some writer’s husband whom nobody has ever heard of. In other words, it’s Instagram. That is not to say the people on Threads are Instagram natives — as I’ve said many times in the past, I think most daily users are Twitter expatriates — rather, the reason intellectual conversation doesn’t get bumped on Threads is that the algorithm is not conducive to it.

I quipped many months ago on Threads (I would’ve linked to the thread if Threads had even a moderately useful search feature) that the threads you spend the most time on get the least engagement, whereas quick, fiery reactions to quote posts often get the most. Spending minutes crafting a thought about a burning topic and perhaps linking to a reputable news article to supplement your points will get perhaps two or three likes from your followers, but it will never be pushed out into the wider Threads world. However, posting a quick quip about someone’s life story will end up getting hundreds of likes — bonus points if you add an image and don’t use any profanity.

Because the algorithm operates like this, the people on Threads slowly get dumber and less sharp, and the quantity of thought-provoking posts falls off a cliff. Is that better than obscure political opinions on X? That’s up to you to decide. But, to Paul: Threads is easily the least combative social network — politically, that is. (Don’t dare besmirch Meta on Threads unless you want to have a bad afternoon.)

Gruber: ‘Why are iOS users required to buy iPhones?’

John Gruber, writing on Daring Fireball:

As I wrote this week, there aren’t many un-installable apps on iOS… Vestager makes clear in her remarks what wasn’t clear in the EC’s announcement of the investigation: they have a problem with Photos… Photos is not just an app on iOS; it’s the system-level interface to the camera roll… Vestager is saying that to be compliant with the DMA, Apple needs to allow third-party apps to serve as the system-level camera roll. That is a monumental demand, and I honestly don’t even know how such a demand could be squared with system-wide permissions for photo access. This is product design, not mere regulation. Why stop there? Why not mandate that Springboard — the Home Screen — be a replaceable component? Or the entire OS itself? Why are iPhone users required to use iOS? Why are iOS users required to buy iPhones?

I’ve said this earlier, mostly as a joke, but I don’t think Gruber’s remarks here are very serious either — they’re mainly rhetorical. But from the way the European Union is handling compliance with the Digital Markets Act — not even the actual law itself, which is flawed in many ways — I can’t help but think the European Commission wants a seat in Apple’s research-and-development or engineering department.

If you asked me a year ago, “Do you think the European Union would mandate Apple to allow users to install Android on iPhones a decade from now?” I would’ve laughed in your face. Now? In a decade, anything is possible with the European Union, a body that ultimately is capitalist for its own benefit but is trying to play a hilarious game of socialism. The problem, embodied by Margrethe Vestager, the commission’s executive vice president responsible for competition and digital policy, and Thierry Breton, the European Union’s internal market commissioner, stems from the European Union assuming control of digital platforms. It’ll go to any length to exercise its stolen control.

The browser choice screen, which Vestager and Breton have launched an investigation into, is impartial, unbiased, and designed as elegantly as possible, but apparently, that’s not enough for the two top dogs in the European Commission — even though the law they wrote doesn’t classify Apple’s implementation as illegal. Is there no court in the European Union? Of course there is — the European Union has a full judicial branch; the commission is just the executive. Why doesn’t the commission take this case to the courts and let judges settle it instead of launching a stupid investigation to scare companies into changing things?

And about the Photos app: The European Commission is clearly full of technology-illiterate old people, to the point where it isn’t even able to do its own due diligence and understand that the Photos app is a core part of iOS. If the commission actually had an interest in developing meaningful technology regulation, it would probably hire experts in the field. Again, the problem isn’t the Photos app, just as it isn’t the browser choice screen — if Apple made the Photos app un-installable tomorrow, Breton would throw himself a party, post a celebratory selfie on social media naming himself the sole provider of freedom for Europeans, then launch an investigation into why the Camera app isn’t un-installable the next morning. The commission will continue to move the goalposts; it’s playing a one-sided, rigged game while laughing maniacally in the corner at everyone falling flat on their faces. It’s full of ego, and every last one of its commissioners is a narcissistic maniac.

It’s not worth spending more time writing about the European Union’s nonsense. Gruber’s whole piece is a follow-up to an earlier story he posted about the possibility that Apple could leave the European Union. Despite what E.U. fanboys might have you think, Apple could leave the bloc at any time and royally screw its citizens — and as soon as it did, the commission would sue Apple (or, God forbid, launch one of its “investigations”) for some reason, even though not doing business somewhere isn’t illegal. Apple has the upper hand not because it’s a monopoly but because it makes products Europeans love. Maybe those Europeans should talk some sense into Brussels this year.

Apple is in full compliance with the DMA — it’s obvious. But no matter what Apple does, it’ll never be able to change the commission’s mind. That much is evident in the verbiage of the DMA, the commissioners’ psychotic behavior on the internet1, and how the executive branch of one of the world’s superpowers applies laws to the world’s leading technology corporations.


  1. I mean, seriously, this is not even an exaggeration. Please look at this insane behavior — what is this? It’s a bunch of elderly white people in suits standing in front of a projector screen smiling, and then a caption saying: “Not all heroes wear capes.” What form of auto-fellatio am I looking at here? Even a firefighter who saved a whole family from a burning house wouldn’t exhibit this much arrogance. Can we get a psychiatrist to Brussels, please? ↩︎

Apple Sues Employee Accused of Leaking Secrets to The Wall Street Journal

Joe Rossignol, reporting for MacRumors:

Apple this month sued its former employee Andrew Aude in California state court, alleging that he breached the company’s confidentiality agreement and violated labor laws by leaking sensitive information to the media and employees at other tech companies. Apple has demanded a jury trial, and it is seeking damages in excess of $25,000…

In April 2023, for example, Apple alleges that Aude leaked a list of finalized features for the iPhone’s Journal app to a journalist at The Wall Street Journal on a phone call. That same month, The Wall Street Journal’s Aaron Tilley published a report titled “Apple Plans iPhone Journaling App in Expansion of Health Initiatives.”

Using the encrypted messaging app Signal, Aude is said to have sent “over 1,400” messages to the same journalist, who Aude referred to as “Homeboy.” He is also accused of sending “over 10,000 text messages” to another journalist at the website The Information, and he allegedly traveled “across the continent” to meet with her.

The fact that this former Apple employee had the journalist saved as “Homeboy” in his contacts is cringeworthy. And the fact that Aude traveled across the continent to meet with this Information reporter makes me think there was (is?) something personal between the two. Seriously, 10,000 text messages seems peculiar — perhaps that’s worth looking into with regard to journalistic integrity.

Apple believes that Aude’s actions were “extensive and purposeful,” with Aude allegedly admitting that he leaked information so he could “kill” products and features with which he took issue. The company alleges that his wrongful disclosures resulted in at least five news articles discussing the company’s confidential and proprietary information. Apple says these public revelations impeded its ability to “surprise and delight” with its latest products.

This is ridiculous. Apple alleges Aude leaked the information to The Wall Street Journal and The Information so that he could “kill” products and features with which he took issue. It's almost unbelievable — first that Aude was so stupid that he thought the public catching wind of unreleased features would somehow end up killing them, and second that he thought leaking information was a more appropriate way to address his concerns than speaking to his superiors within the company. I'm very curious as to how Aude landed a job at Apple with this level of idiocy.

In a November 2023 interview, Apple alleges that Aude denied leaking confidential information to anyone. However, during that interview, Apple alleges that Aude went to the bathroom and deleted “significant amounts of evidence” from his work iPhone, including the Signal app that he used to communicate with “Homeboy.”

This is easily one of the most hilarious labor disputes of all time. Once Aude was caught red-handed, he didn't — I don't know — admit to the act, deny wrongdoing, or find some other way to rescue himself. Instead, like a 7-year-old caught with a hand in the cookie jar, he hurried to the bathroom and deleted the Signal chats from his work phone. I truly have not encountered an Apple employee this stupid before; why would any moderately intelligent person leak information to the press on a corporate-monitored work phone? And if someone were to do that, why would they keep the chats or the apps they used to leak the information?

This whole situation is beyond parody. What a total moron — and good on Apple for catching on and suing.

Thoughts on Humane’s New Ai Pin ‘Video Handbook’

Quinn Nelson, producer of the technology YouTube channel Snazzy Labs, posted on the social media website X a link to Humane’s new owner’s guide video, which, according to Bethany Bongiorno, Humane’s chief executive, was meant for Ai Pin buyers “to help them understand how to use” their Ai Pins before they arrive next month. Bongiorno said she would speak with her team about putting the video up on YouTube, which I think is a good idea, since it’s the most interesting demonstration of the device yet. It’s well produced, the presenters are knowledgeable, it doesn’t have any discernibly sloppy mistakes, and it’s the lengthiest, most detailed walkthrough of the Ai Pin’s features to date. I watched the 30-minute video after slamming the device in November to try to learn more about the Ai Pin, and I recommend everyone do the same — it’s what Humane’s initial announcement should’ve been.

But that’s just criticism of the video. The product, the Ai Pin — an artificial-intelligence-powered lapel pin with a projector, camera, microphone, and speaker — is still lackluster at best. The video was broken up into a few sections: hardware and accessories, voice interactions, the camera and images, the Laser Ink Display — essentially a projector that displays an image onto a user’s hand — music, memory, telephone calls and text messages, and “Humane.Center,” the website used to control the Ai Pin.

  1. Hardware: The battery booster appears to be compulsory in most cases and enables what Humane calls the “Perpetual Power System,” which, candidly speaking, is buzzword-filled nonsense. It’s a battery system — everyone knows what a battery is — with hot-swappable boosters that clip to the underside of a shirt, holding the device in place. When the Ai Pin was fastened to a long-sleeve shirt, it didn’t pull the shirt down, which was a relief, but Humane also sells an optional, lighter clear plastic attachment to replace the booster in case a user happens to be wearing something extra lightweight, such as workout clothing. Humane didn’t show the device clipped to a T-shirt, though, which is the most common article of clothing it’ll be attached to, and the presenters mostly wore long-sleeve jackets — for which there is a clip that can be fastened to thicker coats — and sweaters, which is concerning. (Maybe this is just because it’s spring.)
  2. Voice interactions: As demonstrated in previous Humane videos, the primary method of interaction is the voice assistant, which is accessed via the touchpad and a series of gestures. There are simply way too many gestures — they’re all variations of tapping or holding down one or two fingers to activate certain features like the camera or laser projector. And again, I do not understand the point of having such an assistant attached to a shirt — the Action Button on iPhone 15 Pro does the same thing. The assistant was also slow at times, forcing the presenters to keep speaking to the camera while they waited for a response, presumably to distract from the deafening silence of a computer sending queries to a server. It also seems to take a while for internet-related queries, such as searching for the weather. A smartphone seems like a more cost-effective and less distracting option for most people — especially in public.
  3. Camera: The Ai Pin acts like a more personal version of Google Lens, and I think it’s fascinating. This is the most compelling use case for the product yet, since pulling out a smartphone for quick, spur-of-the-moment shots is often cumbersome. Sometimes, something needs to be captured instantly, without distraction, and the camera on the Ai Pin executes this perfectly. (The quality of the produced images isn’t spectacular, but it’s a small device.) I also liked the feature where you can point the device at anything, such as a book or building, and have the voice assistant provide information about it, but I’d much rather view and read this information than have a voice narrate it to me via a loudspeaker that everyone around me can hear.
  4. The Laser Ink Display: The only way to view and interact with information from the Ai Pin is the Laser Ink Display, as Humane calls it, a projector that activates when the device is asked a question and detects a palm held out in front of it. The laser projector, while bright, seems less than ideal for dense, small text, since it isn’t very crisp — especially in broad daylight. Palm space is also limited, so the device can only project short messages and large interface controls. Navigating the interface requires quite a bit of skill, too. There is a singular solution to all of this: a smartphone. Hundreds of millions of people worldwide carry bright, crisp, colorful 6-inch organic-LED displays with powerful processors and high pixel densities in their pockets daily, and the Ai Pin seems like a compromised, unnecessary version of a technology that already exists. The Ai Pin’s laser display is worthless.
  5. Music: Anyone who chooses to listen to music on this lapel pin is a psychotic human being.
  6. Memory: The usefulness of this “memory” feature — which exists due to the nature of AI large language models, such as the one from OpenAI that powers the Ai Pin — is minimal because it does not interact with iOS or Android at all. Most people communicate with others and store quick notes on their smartphones, and thus, their corpus of human connections and personal anecdotes is stored in one locked-down place. Humane has no plan to access that corpus — instead, it’s relying on people to use the Ai Pin exclusively to send text messages, make phone calls, and store quick notes. (Apparently, its own employees can’t even use the Ai Pin’s notes feature exclusively.) The “memory” features of the voice assistant — which come into play when a user asks questions like, “Catch me up on message conversations,” or, “Where did I park?” — will only be useful if someone decides to store their life’s information on their Ai Pin rather than their phone, a behavior I don’t think anyone, not even Humane’s diehard users, will partake in.
  7. Telephone calls and messages: Continuing on the previous theme, the Ai Pin does not connect to a user’s smartphone whatsoever — Humane instead encourages users to make telephone calls, join group messages, and do all of their communication via the Ai Pin, which isn’t even possible, since it doesn’t support most messaging services like Slack or WhatsApp at launch. It’s a ludicrous strategy that will never take off — period. The fact that Humane thinks anyone will choose to have their phone calls on a loudspeaker in public or use an AI voice assistant to write text messages is so astonishing to me. On a related note, did you know some companies sell telephones that you can take anywhere and that also happen to connect to all the instant messaging services in the world? You can get one for less than the price of an Ai Pin — groundbreaking.
  8. Humane.Center: There is not even a smartphone app to manage the Ai Pin, which seems like it would be the most basic of requirements for any internet-connected product made in 2024. Humane doesn’t think so, instead developing a web portal for access to user data. This website is the only way to access images taken with the device, add contacts, view full text message threads and call logs, and change settings, like connecting to a Wi-Fi network or adding “integrations,” Humane’s term for third-party software. The on-device projection interface is so lackluster and limited that I don’t think anyone would seriously want to use it — and waving a palm around in the air seems like it would feel like a royal pain after more than a minute — so the only way to interact with the information the Ai Pin provides is a website. It’s just insulting.

So yes, I’m still not bullish on the Ai Pin. It’s a bad smartphone that does less than a smartphone, is slower than one, and is more annoying than any other modern consumer product. And it’s $800 with a $25-a-month subscription for a second phone number and no phone integration. Great video, terrible product. Go back to the drawing board, Humane — but please do publish this video on YouTube.

The X Baltimore Bridge Conspiracies Are Unhinged

David Gilbert, writing for Wired:

Conspiracists and far-right extremists are blaming just about everything and everyone for the Baltimore bridge collapse on Tuesday morning.

A non-exhaustive list of things that are getting blamed for the bridge collapse on Telegram and X include: President Joe Biden, Hamas, ISIS, P Diddy, Nickelodeon, India, former President Barack Obama, Islam, aliens, Sri Lanka, the World Economic Forum, the United Nations, Wokeness, Ukraine, foreign aid, the CIA, Jewish people, Israel, Russia, China, Iran, Covid vaccines, DEI, immigrants, Black people and lockdowns.

The Francis Scott Key truss bridge actually collapsed when the MV Dali cargo ship collided with one of the bridge supports. Six construction workers, who were filling potholes on the bridge at the time, are presumed dead. The ship is owned by Singapore-based Grace Ocean Private Ltd, and the 22-person crew were all Indian. The ship was en route to Colombo, Sri Lanka, at the time of the accident.

X, the social media website owned by none other than Elon Musk, the billionaire who has made an effort to push the dangerous great replacement conspiracy theory on his website, has been inundated with nonsense comments from blue-check-bearing accounts with prioritized replies. Representative Marjorie Taylor Greene, Republican of Georgia, had her X account reinstated months ago after the previous Twitter ownership permanently suspended it over her barbaric coronavirus conspiracy theories — now she has insinuated that the bridge collapse could have been a terrorist attack. (It was not a terrorist attack.)

Leftist, progressive users have blamed the collapse on U.S. support of the wars in Israel and Ukraine while also complaining about how overfunded the Defense Department is. The bridge collapsed because a Singaporean cargo ship collided with one of its pillars — how that is the Defense Department’s fault is beyond me. Meanwhile, right-wing nuts continued to blame the president and Transportation Secretary Pete Buttigieg for spreading non-existent “misinformation.” Very few of these posts — exclusively on X and former President Donald Trump’s social media website, Truth Social — had Community Notes pinned to them, presumably because most intelligent users authorized to write notes have better things to do than debunk bizarre antisemitic conspiracies on a dying social media platform.

However, less-intelligent conspiracy theorists continue to brainwash teenagers, the elderly, and anyone who gets their news exclusively on X — all to push their political propaganda. The owner of the website, Musk, embraces it in the name of “free speech” and the “First Amendment” without having the intellectual capacity to understand that the Constitution only applies to the government, not private platforms. Of course, none of Musk’s sycophantic followers will understand this quirk of the legal system, so we’re instead stuck with the most popular real-time news website peddling racist conspiracies until enough people move to Threads, Meta’s clone of X.

If this is how it is when a bridge collapses and kills six people, imagine how it’s going to be on Election Day when half the population’s preferred candidate loses — whomever that may be.

The Wall Street Journal Profiles Phil Schiller

Aaron Tilley and Kim Mackrael, in a profile of Phil Schiller, Apple’s former senior vice president of product marketing and now fellow, for The Wall Street Journal:

Apple came around to taking a 30% commission on paid apps or services purchased in the App Store. Initially, Jobs said in 2008 that the company didn’t “intend to make money off the App Store,” according to documents that came out in the Epic case.

After Jobs’s passing in 2011, Schiller kept Jobs’s philosophy alive across everything he did. The two were close, and Schiller often mirrored Jobs’s fierce competitiveness and tendency to praise Apple and disparage competitors. Inside Apple, he came to be referred to as Jobs’s “mini-me” due to the manner in which he often mirrored the company co-founder’s perspective.

“Of the people still at Apple, he is one of the few that still carry the torch of Steve Jobs’s vision,” said Tim Bajarin, a longtime Apple analyst who has known Schiller since his return to the company.

One thing Jobs insisted on in the App Review process is that the company should always have someone reviewing each app that made it into the store. Schiller continued that tradition, eschewing excessive use of artificial intelligence in favor of reviews and careful curation.

If I have this right, Steve Jobs, Apple’s co-founder who insisted on Apple’s tight control over the iOS App Store, only craved control over the apps that were on the App Store. As Tilley and Mackrael quote Jobs saying in 2008, Jobs never wanted to make money from the App Store’s 15–30 percent commission — he just wanted the control that came with that commission. Now that Apple is in hot water over the commission, which in my opinion is what started all of this regulatory scrutiny both in the European Union and the United States, I suggest it lower the percentage it takes to 15 percent for companies that make over $1 million a year on the App Store, and 7 percent for everyone else.

Apple doesn’t need to give up control over the App Store — it just needs to make it seem like the App Store is competitive (which it already is). The 30 percent commission has done irreparable damage to Apple’s public relations over the last several years. Anyone, even people who like Apple and think it deserves a cut of purchases, can agree that the App Store’s rules are a mess. In addition to lowering the fee, I think Apple should also further relax its anti-steering provisions, specifically in the vein of payment processing. Apple has (had) to do some bargaining here if it doesn’t (didn’t) want to draw the ire of regulators, including the U.S. Justice Department. If it doesn’t give up the anti-steering provisions, it risks losing control over content moderation in the App Store, specifically in the United States — the European Union has already busted Apple’s shackles.

Regulators are not even nearly as smart as Apple — everyone knows that. But Apple missed its chance to self-regulate, to give a little and take a little, even when relaxing the anti-steering provisions would’ve still fallen within the bounds of Jobs’s App Store ethos set out in 2008.

PS: I still love Schiller.

‘Cowardly Snowflake’

Sarah Jeong, writing for The Verge about United States v. Apple:

From cloud streaming games to CarPlay, the DOJ complaint tries to rope in the burning grievances of every kind of nerd and then some. The only thing that’s missing is a tirade on how ever-increasing screen sizes are victimizing me, a person with small hands. (At the Thursday press conference, Attorney General Merrick Garland made no mention of how Sarah Jeong would like to see the SE return to its 2016 size.)

You can almost forget this is a lawsuit and not just the compiled observations of a single very motivated poster in The Verge comments section — until you get to page 57. There, the document suddenly changes voice, finally pivoting into a formal communication to a judge. “Mobile phones,” the complaint reads primly, “are portable devices that enable communications over radio frequencies instead of telephone landlines.”

The lawyers who wrote the Justice Department’s complaints against Apple would make for great technology bloggers — even better than me, dare I say. Together, they should create a new blog: Cowardly Snowflake. It’s like Daring Fireball, but written by people who don’t know what they’re talking about. I’d instantly subscribe.

The first part of the Justice Department’s complaint truly reads like a non-fiction “airing of grievances.” It reminds me of the Declaration of Independence, but instead of making good points against the British monarchy, it serves as a poorly researched fantasy of the technology landscape. Now that’s blog-worthy. Seriously, if you have three hours to kill, I’d recommend reading the entire thing just for fun.

Thursday’s United States v. Apple Lawsuit is the ‘Beeper Lawsuit’

Yours truly, writing in January about Beeper, a cross-platform messaging app that aimed and failed to add iMessage to its arsenal of services:

Shortly after Apple revoked Beeper’s unauthorized access to the iMessage service, Senator Elizabeth Warren of Massachusetts posted the following to the social media website X, quoting The Verge’s article reporting on the changes Apple made: “Green bubble texts are less secure. So why would Apple block a new app allowing Android users to chat with iPhone users on iMessage? Big Tech executives are protecting profits by squashing competitors. Chatting between different platforms should be easy and secure.”

A week later, Senators Amy Klobuchar of Minnesota and Mike Lee of Utah; and Representatives Jerry Nadler of New York and Ken Buck of Colorado wrote a bipartisan letter to Assistant Attorney General Jonathan Kanter calling for the Justice Department to “investigate whether this potentially anticompetitive conduct by Apple violated antitrust laws.” “This” conduct refers to Apple’s immediate shutdown of Beeper Mini. The members of Congress collectively write: “We write regarding Apple’s potential anticompetitive treatment of the Beeper Mini messaging application. We have long-championed increased competition, innovation, and consumer choice in the digital marketplace. To protect free and open markets, it is critical for the Antitrust Division to be vigilant in enforcing our antitrust laws… We are therefore concerned that Apple’s recent actions to disable Beeper Mini harm competition, eliminate choices for consumers, and will discourage future innovation and investment in interoperable messaging services.”

In other words, the letter tells the Justice Department to investigate Apple for locking its doors to thieves. There are two main points to untangle here: that the members of Congress show apparent illiteracy in both antitrust law and technology, and that opening up messaging ecosystems is not a job of the government. It is quite obvious that these members of Congress have no clue what Beeper did to gain access to the iMessage service — nor have any interest in finding out — and that Beeper’s chief executive, Eric Migicovsky, brainwashed the members into taking congressional action against Apple as retaliation for destroying Beeper’s flawed-from-the-start business model. Speaking of Migicovsky, he promoted the letter with his own commentary on X shortly after it was published. It does not require any knowledge of government lobbying to conclude that Migicovsky — and perhaps some of his cohorts — lobbied the members of Congress to get the letter published for publicity.

Thursday’s lawsuit is a direct consequence of Klobuchar, Lee, Nadler, and Buck’s letter hitting Kanter’s desk. Kanter, who leads the antitrust division of the Justice Department, filed the lawsuit yesterday — his name is listed on the suit. Due to Beeper’s aggressive government lobbying on Capitol Hill, the members of Congress wrote the letter to Kanter, who was then brainwashed by Beeper’s marketing speak and told his technology-illiterate aides to write a poorly researched, ill-informed complaint against the world’s largest technology firm.

Furthermore, the complaint includes this passage, as I wrote in my annotation Thursday:

Recently, Apple blocked a third-party developer from fixing the broken cross-platform messaging experience in Apple Messages and providing end-to-end encryption for messages between Apple Messages and Android users. By rejecting solutions that would allow for cross-platform encryption, Apple continues to make iPhone users less secure than they could otherwise be.

Not only is this passage entirely false, but it also reeks of Beeper and Eric Migicovsky, Beeper’s chief executive, directly influencing the lawsuit. Migicovsky himself found this uncanny, posting on the social media website X: “This DOJ v Apple lawsuit is basically Eric Migicovsky v Apple. I swear I did not do this on purpose,” referring to the Justice Department. Migicovsky wrote this in response to a passage from the lawsuit’s “Smartwatches” section that essentially served as a call-out to Pebble, the now-defunct smartwatch company Migicovsky founded that brought him into the spotlight. Migicovsky also backed up the Justice Department’s incorrect complaints about Beeper Mini on X, saying he “couldn’t have said it better” himself.

Beeper did not “fix” broken cross-platform messaging — that is what Beeper wants you to believe, but it isn’t what happened. Beeper infiltrated Apple’s private iMessage service, meant to serve as a selling point for Apple devices, and sold access to it with a subscription. Beeper is not a “third-party developer”; Beeper is a thief. A third-party developer (keyword: “developer”) is someone who gains authorized access to Apple services to create products on Apple’s platforms. Beeper is not a developer — it is a company with the sole intention of profiting from another corporation’s infrastructure. The Justice Department is supposed to serve as the just and correct arbiter of conflicts. Instead, it has chosen to pick favorites in one of the most important lawsuits it has filed in its entire existence because some scrappy start-up founded by a failed smartwatch manufacturer lobbied Congress.

Without even describing the full facts to the court, the Justice Department aims to sell the jury a one-sided story that is simply factually incorrect. I hope and assume Apple will fight this baseless, incorrect point in court to the fullest extent possible. Lying government lobbyists’ words don’t belong in a court of law — they belong at a concession stand outside the Capitol in Washington selling T-shirts. If TikTok did this, it’d be banned in the United States a week from now.

Annotating United States v. Apple (2024)

The Justice Department, writing in its lawsuit against Apple filed on Thursday:

For example, by denying iPhone users the ability to choose their trusted banking apps as their digital wallet, Apple retains full control both over the consumer and also over the stream of income generated by forcing users to use only Apple-authorized products in the digital wallet. Apple also prohibits the creation and use of alternative app stores curated to reflect a consumer’s preferences with respect to security, privacy, or other values. These and many other features would be beneficial to consumers and empower them to make choices about what smartphone to buy and what apps and products to patronize. But allowing consumers to make that choice is an obstacle to Apple’s ability to maintain its monopoly.

Has the Justice Department forgotten that Apple is a private corporation?

Apple inflates the price for buying and using iPhones while preventing the development of features like alternative app stores, innovative super apps, cloud-streaming games, and secure texting.

Samsung’s flagship handset is more expensive than Apple’s, but go on about “inflating the price for buying and using iPhones.”

Apple’s U.S. market share by revenue is over 70 percent in the performance smartphone market—a more expensive segment of the broader smartphone market where Apple’s own executives recognize the company competes—and over 65 percent for all smartphones. These market shares have remained remarkably durable over the last decade.

“By revenue?” What nonsense! Is this how the Justice Department concluded Apple is a monopoly?

Following that consent decree in October 2003, Apple launched a cross-platform version of iTunes that was compatible with the Windows operating system. As a result, a much larger group of users could finally use the iPod and iTunes, including the iTunes Store. The iTunes Store allowed users to buy and download music and play it on their iTunes computer application or on the iPod. Apple benefited substantially from this new customer base. In the first two years after launching the iPod, Apple sold a few hundred thousand devices. The year after launching a Windows-compatible version of iTunes and gaining access to millions more customers, Apple sold millions of devices. Apple went on to sell hundreds of millions of iPod devices over the next two decades.

The Justice Department attributes the iPod’s success to its consent decree against Microsoft.

Third, Apple uses these restrictions to extract monopoly rents from third parties in a variety of ways, including app fees and revenue-share requirements. For most of the last 15 years, Apple collected a tax in the form of a 30 percent commission on the price of any app downloaded from the App Store, a 30 percent tax on in-app purchases, and fees to access the tools needed to develop iPhone native apps in the first place. While Apple has reduced the tax it collects from a subset of developers, Apple still extracts 30 percent from many app makers.

“Monopoly rents” is an interesting way of describing a fee for services the App Store provides. Warranted or not, we live in the United States — a capitalist country — and the market decides what’s sane or not. Not the government.

Apple recognizes that super apps with mini programs would threaten its monopoly. As one Apple manager put it, allowing super apps to become “the main gateway where people play games, book a car, make payments, etc.” would “let the barbarians in at the gate.” Why? Because when a super app offers popular mini programs, “iOS stickiness goes down.”

Apple does not need to host content it doesn’t want to host for whatever reason. If you don’t like that, build your own phone. I like iOS, so I’ll live with the rules. It seems like my fellow iOS users agree with me.

That is not a monopoly — that’s just business.

Apple did not respond to the risk that super apps might disrupt its monopoly by innovating. Instead, Apple exerted its control over app distribution to stifle others’ innovation. Apple created, strategically broadened, and aggressively enforced its App Store Guidelines to effectively block apps from hosting mini programs. Apple’s conduct disincentivized investments in mini program development and caused U.S. companies to abandon or limit support for the technology in the United States.

The section this excerpt is from can be called the “WeChat Section,” and yet WeChat, the prime example of Apple “abusing its monopoly power,” remains on the App Store today. This is nonsense.

Until recently, Apple would have required users to download cloud streaming software separately for each individual game, install identical app updates for each game individually, and make repeated trips to Apple’s App Store to find and download games. Apple’s conduct made cloud streaming apps so unattractive to users that no developer designed one for the iPhone.

Keywords: “Until recently.” “You aren’t speeding now, but we’ll still write you a ticket for speeding because we think you sped before.”

Apple undermines cloud gaming apps in other ways too, such as by requiring cloud games to use Apple’s proprietary payment system and necessitating game overhauls and payment redesigns specifically for the iPhone.

“…requiring cloud games to use Apple’s proprietary payment system” is wrong as of a few weeks ago.

While all mobile phones can send and receive SMS messages, OTT only works between users who sign up for and communicate through the same messaging app. As a result, a user cannot send an OTT message to a friend unless the friend also uses the same messaging app.

How is that last part Apple’s fault?

And when users receive video calls, third-party messaging apps cannot access the iPhone camera to allow users to preview their appearance on video before answering a call.

There is an application programming interface built into iOS that lets developers add that functionality to their apps.

Many non-iPhone users also experience social stigma, exclusion, and blame for “breaking” chats where other participants own iPhones. This effect is particularly powerful for certain demographics, like teenagers—where the iPhone’s share is 85 percent, according to one survey. This social pressure reinforces switching costs and drives users to continue buying iPhones—solidifying Apple’s smartphone dominance not because Apple has made its smartphone better, but because it has made communicating with other smartphones worse.

That is not Apple’s fault. Try suing Lamborghini because someone got made fun of for not having a Lamborghini.

Recently, Apple blocked a third-party developer from fixing the broken cross-platform messaging experience in Apple Messages and providing end-to-end encryption for messages between Apple Messages and Android users. By rejecting solutions that would allow for cross-platform encryption, Apple continues to make iPhone users less secure than they could otherwise be.

That’s the Beeper reference.

In 2013, when Apple started offering users the ability to connect their iPhones with third-party smartwatches, Apple provided third-party smartwatch developers with access to various APIs related to the Apple Notification Center Service, Calendar, Contacts, and Geolocation. The following year, Apple introduced the Apple Watch and began limiting third-party access to new and improved APIs for smartwatch functionality.

Apple is a private corporation. Nobody is entitled to access to iOS. As I pointed out earlier on Thursday, the smartwatch argument is the only somewhat sound argument, legally speaking, but it’s still shaky.

Apple instead requires these users to disable Apple’s iMessage service on the iPhone in order to use the same phone number for both devices. This is a non-starter for most iPhone users.

Great work, Justice Department, you just contradicted your entire “Messaging” section with one sentence.

Thus, switching to a different smartphone requires leaving behind the familiarity of an everyday app, setting up a new digital wallet, and potentially losing access to certain credentials and personal data stored in Apple Wallet.

Moving to a new house requires learning where the bathroom is again.

The exclusionary and anticompetitive acts described above are part of Apple’s ongoing course of conduct to build and maintain its smartphone monopoly. They are hardly exhaustive. Rather, they exemplify the innovation Apple has stifled and Apple’s overall strategy of using its power over app distribution and app creation to selectively block threatening innovations.

“Hardly exhaustive,” probably because the Justice Department hasn’t pointed out a single “act” where Apple abuses its non-existent monopoly power.

These subscriptions [sic] services can also increase switching costs among iPhone users. If an Apple user can only access their subscription service on an iPhone, they may experience significant costs, time, lost content, and other frictions if they attempt to switch to a non-Apple smartphone or subscription service.

It’s a crime to do business in the United States in 2024, according to the federal government.

Apple has told automakers that the next generation of Apple CarPlay will take over all of the screens, sensors, and gauges in a car, forcing users to experience driving as an iPhone-centric experience if they want to use any of the features provided by CarPlay.

Nobody is forcing drivers to experience driving as an iPhone-centric experience. This is purely uneducated. CarPlay does not supplant an automaker’s interface; it supplements it for iOS users who want access to the Apple-made interface. A user can always disable CarPlay or opt out of using it, and drivers without an iPhone never encounter it at all. Why did the Justice Department choose the most technology-illiterate people to file a lawsuit against the world’s largest technology corporation?

Apple’s conduct extends beyond just monopoly profits and even affects the flow of speech. For example, Apple is rapidly expanding its role as a TV and movie producer and has exercised that role to control content.

Sometimes you read something so incomprehensibly stupid that it just leaves you speechless.

If Apple wanted to, Apple could allow iPhone users to send encrypted messages to Android users while still using iMessage on their iPhone, which would instantly improve the privacy and security of iPhone and other smartphone users.

Apple already does that. You can download WhatsApp for free on iOS today.

Apple has monopoly power in the smartphone and performance smartphone markets because it has the power to control prices or exclude competition in each of them.

The way the Justice Department calculated that is wrong — it calculated it by revenue, as stated earlier. Apparently, it’s illegal to be good at business and make a profit in the United States.

For example, if an iPhone user wants to buy an Android smartphone, they are likely to face significant financial, technological, and behavioral obstacles to switching. The user may need to re-learn how to operate their smartphone using a new interface, transfer large amounts of data (e.g., contacts), purchase new apps, or transfer or buy new subscriptions and accessories.

Exhibit B: When moving to a new house, you need to learn where the bathroom is again.

Many prominent, well-financed companies have tried and failed to successfully enter the relevant markets because of these entry barriers. Past failures include Amazon (which released its Fire mobile phone in 2014 but could not profitably sustain its business and exited the following year); Microsoft (which discontinued its mobile business in 2017)…

Due to Apple’s monopolistic practices, the Windows Phone and the Fire Phone both failed, according to the Justice Department. I am not making this up.

What a stupid lawsuit. I also annotated this lawsuit as I read it on Threads and Twitter, if you’d like highlighted images of the excerpts.

More on United States v. Apple’s ‘Walled Garden’ Problem

Victoria Song, writing for The Verge:

The DOJ also notes that Apple limits third-party messaging apps like WhatsApp, Signal, and Facebook Messenger in comparison to iMessage. For example, you have to dive into permissions to let these apps operate in the background or access the iPhone’s camera for video calls. They also can’t incorporate SMS, meaning you have to convince friends to download the same apps if you want to use them. iMessage, however, does all this natively.

And while Apple recently agreed to support RCS to make cross-platform messaging better, the DOJ isn’t buying it. It notes that Apple not only hasn’t adopted it yet but that third-party apps would still be “prohibited from incorporating RCS just as they are prohibited from incorporating SMS.” The DOJ also takes issue with the fact that Apple only agreed to adopt a 2019 version of RCS. Unless Apple agrees to support future versions, it argues “RCS could soon be broken on iPhones anyway.”

Did the Justice Department find the most technology-illiterate, incompetent, stupid lawyers in the United States to file this lawsuit against the world’s largest corporation? “For example, you have to dive into permissions” to “access the iPhone’s camera for video calls.” I trust Song’s reporting — she isn’t editorializing here; this is purely how the Justice Department’s lawsuit is written. This is such a ridiculous argument — so ridiculous that I truly don’t even know where to begin refuting it.

Requiring permissions, in the words of the Justice Department, is “abusing monopoly power”? It is one dialog box that a user is shown once to protect their privacy. It does not result in a single penny for Apple, no matter what the user selects. If the Justice Department aims to remove permission prompts — prompts that I have not heard a single American ever complain about — it is an absolute disgrace to this country.

And the Justice Department, for some reason, “isn’t buying” Apple’s adoption of Rich Communication Services because it has opted to adopt a 2019 version. That is just incorrect — the last version of the standard published by the GSMA, the body that controls RCS, came out in 2019. The “latest” version, according to the Justice Department, is the one Google published. And Google participates in the duopoly. What is the Justice Department’s goal here, to push Apple to adopt Google’s standards just to sue Google for the same thing? It’s technology illiteracy at its finest.

Song continues:

While the Apple Watch can maintain a connection if a user accidentally turns off Bluetooth on the iPhone, third-party watches can’t. As with third-party messaging apps, users have to dive into separate permissions to turn on background app refresh and turn off low power mode if they want the most stable and consistent Bluetooth connection. This impacts passive updates, like weather or exercise tracking.

Does Google allow Wear OS smartwatches to connect with iOS devices in the first place? Continuing:

With digital wallets, the DOJ’s beef with Apple is that the company blocks financial institutions from accessing NFC hardware within the iPhone. (Though, Apple will begin allowing access in much of Europe because of new regulations in the EU.) That, in turn, limits them from providing tap-to-pay capabilities and, again, funnels iPhone users into Apple Pay and Apple Wallet.

Doing so means banks also have to pay 0.15 percent for each credit card transaction done through Apple Pay. Conversely, it’s free for banks using Samsung or Google’s payment apps. The result is that Apple got nearly $200 billion in US transactions in 2022, according to a US Consumer Financial Protection Bureau report. The same agency estimates that digital wallet tap-to-pay transactions will increase by over 150 percent by 2028.
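For a sense of scale, here is a rough back-of-the-envelope sketch of that 0.15 percent fee, assuming (purely for illustration) that the entire reported $200 billion in 2022 US transaction volume were fee-eligible credit-card payments. The real eligible base is smaller, so this is an upper bound:

```python
# Back-of-the-envelope sketch of Apple Pay credit-card fee revenue.
# Assumption (for illustration only): all of the reported ~$200B in
# 2022 US Apple Pay transaction volume was subject to the 0.15%
# credit-card fee, which overstates the true figure.
transactions_usd = 200_000_000_000  # reported US transaction volume, 2022
fee_rate = 0.0015                   # 0.15% per credit-card transaction

fee_revenue_upper_bound = transactions_usd * fee_rate
print(f"${fee_revenue_upper_bound:,.0f}")  # $300,000,000
```

Even at this deliberately generous upper bound, the fee works out to hundreds of millions of dollars a year, a small fraction of the transaction volume itself.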

Apple is not the U.S. Mint; it has no obligation to let anyone use its technology to make near-field communication payments. It does not funnel “iPhone users into Apple Pay and Apple Wallet” because these users can continue to pay for things with regular currency. Using Apple Pay is a feature — a selling point — of the iPhone. Apparently, selling products with features is against the law, according to this brain-dead Justice Department. President Biden should fire Attorney General Merrick Garland, a failure of an attorney general who hasn’t even been able to prosecute a rapist for stealing confidential government secrets and then lying to the government about those secrets.