Google Search Summaries Tell People to Eat Glue

Jason Koebler, reporting for 404 Media:

The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.

Here is what I wrote about Google’s artificial intelligence right after the company’s I/O conference earlier in May:

The summaries are also prone to making mistakes and fabricating information, even though they’re placed front-and-center in the usually reliable Google Search interface. This is extremely dangerous: Google users are accustomed to reliable, correct answers appearing in Google Search and might not be able to distinguish between the new AI-generated summaries and the old content snippets, which remain below the Gemini blurb. No matter how many disclaimers Google adds, I think it is still too early to add this feature to a product used by billions. I am not entirely pessimistic about the concept of AI summaries in search — I actually think this is the best use case for generative artificial intelligence — but in its current state, it is best to leave this as a beta feature for savvy or curious users to enable for themselves.

Google, in a statement to The Verge, claimed these incidents are much ado about nothing — isolated, and appearing only in results for uncommon queries. (Sundar Pichai, Google’s chief executive, said much the same in an interview with Nilay Patel, The Verge’s editor in chief, albeit in a slightly backhanded way.) Meghann Farnsworth, a spokesperson for Google, said the company believes the mistakes come from “generally very uncommon queries” — a theory that has been disproven time and time again. Generative artificial intelligence is prone to making mistakes because of the way large language models — the technology that powers generative AI — are built. Google knows it cannot solve that problem singlehandedly without further research, so it labels the AI-generated blurbs at the top of Google search results as “experimental.”

Google’s mission when it announced that it would be bringing AI search summaries to all U.S. users by the end of the year was not to improve search for anyone — it was to signal to shareholders that the company’s AI prowess hasn’t been diminished by OpenAI, its chief rival. All press might be good press, but I truly don’t think this many incidents of Google’s AI flubbing the most basic of tests are good for the company’s image. Google is known for being reputable and trustworthy, and it has shattered the reputation it so painstakingly created for itself in just a matter of weeks. The public’s perception of Google, and in particular, Google Search, has already been in a steady decline for the past few years, and the findings of people from all over the internet over the past week have further cemented the idea that Google’s main product is no longer as useful or capable as it once was.

These are not isolated incidents, and whenever representatives for Google have been confronted with that fact, they have never once tried to digest it and make improvements, as any sane, fast-moving company with a clear and effective hierarchical organizational structure would. Google does not have effective leadership — as proven by Pichai’s nonsensical answer to Patel — so it instead deflects the blame, chastising users for typing in “uncommon queries.” Google itself has boasted about how thousands of new, never-before-seen queries are typed into Google each day, yet now it is unable to manage its star, most popular product as it once did. Google Search is not dying — Bing and DuckDuckGo had an outage on Thursday and hardly anyone noticed — but it is suffering from incompetent leadership.

For now, Google needs to take the financial and perhaps emotional hit and pull search summaries from the public’s view, because recommending people eat glue is beyond ridiculous. And I think the company needs a fundamental reworking of its organizational structure to address the setbacks and issues that are preventing employees from voicing their concerns. The most employees have been able to do is add a “Web” filter to Google Search for users to view just blue links with no AI cruft. There is no more quality control at Google — just like a Silicon Valley start-up — and there is also no fast-paced innovation, unlike a Silicon Valley start-up. Google is now borrowing the worst limitations of small companies and combining them with the operational headaches of running a large multinational corporation. That can only be attributed to ineffective leadership.

Microsoft Announces ‘Copilot Plus’ PCs

Umar Shakir, reporting for The Verge:

Microsoft brought Windows, AI, and Arm processors together at a Surface event on May 20th…

The big news of the day was Microsoft’s new class of PCs, dubbed Copilot Plus PCs. These computers have processors with NPUs built in so they can do more AI-oriented tasks directly on the computer instead of the cloud. The AI-oriented tasks include using a new Windows feature called Recall.

Microsoft also announced a new Surface Laptop and Surface Pro Tablet powered by Qualcomm’s Snapdragon X processors. That means they should be thinner, lighter, and have better battery-life while also handling AI and processor heavy tasks. And Microsoft wasn’t the only one at the event showing off new laptops. HP, Asus, Lenovo, Dell, and other laptop makers all have new Copilot Plus PCs.

An important thing to note is that “Copilot Plus” is not a new software feature — it’s the brand name for Microsoft’s new line of computers, many of which aren’t even made by Microsoft itself through its Surface line of products. “Copilot Plus” computers have specification requirements for memory, storage, and neural processing units, or NPUs for short: 16 gigabytes of RAM, 256 gigabytes of storage, and an NPU rated at 40 trillion operations per second to run the artificial intelligence features built into the latest version of Windows. These new AI features are called “Copilot,” a brand name that has been around for about a year. Here is Andrew Cunningham, reporting for Ars Technica:

At a minimum, systems will need 16GB of RAM and 256GB of storage, to accommodate both the memory requirements and the on-disk storage requirements needed for things like large language models (LLMs; even so-called “small language models” like Microsoft’s Phi-3, still use several billion parameters). Microsoft says that all of the Snapdragon X Plus and Elite-powered PCs being announced today will come with the Copilot+ features pre-installed, and that they’ll begin shipping on June 18th.

But the biggest new requirement, and the blocker for virtually every Windows PC in use today, will be for an integrated neural processing unit, or NPU. Microsoft requires an NPU with performance rated at 40 trillion operations per second (TOPS), a high-level performance figure that Microsoft, Qualcomm, Apple, and others use for NPU performance comparisons. Right now, that requirement can only be met by a single chip in the Windows PC ecosystem, one that isn’t even quite available yet: Qualcomm’s Snapdragon X Elite and X Plus, launching in the new Surface and a number of PCs from the likes of Dell, Lenovo, HP, Asus, Acer, and other major PC OEMs in the next couple of months. All of those chips have NPUs capable of 45 TOPS, just a shade more than Microsoft’s minimum requirement.

These new requirements, as Cunningham writes, essentially exclude most computers with processors made by Intel and Advanced Micro Devices built on the x86 platform. Microsoft and its partners are instead relying on Qualcomm’s Snapdragon Arm-based processors, which have capable NPUs and are more battery-efficient for laptops, to power their latest Copilot Plus computers. Microsoft says its two Arm-based machines, the Surface Laptop and Surface Pro Tablet, run up to 58 percent faster than Apple’s newly released M3 MacBook Air, though it didn’t provide details on how it measured the performance of the Qualcomm chips. I don’t trust the company’s numbers, especially since it says the new Surface machines have better battery life than the MacBook Air, which would truly be a feat.

The new processors and specifications power new Copilot features in Windows, which will be coming to Windows 11 — not a new version called Windows 12, as some had speculated — in June. Some of the features run on-device to protect privacy, while others run on Microsoft’s Azure servers just as they did before. Microsoft announced that it would be deploying access to GPT-4o, its partner OpenAI’s latest large language model announced earlier in May, as part of the normal version of Copilot later this year, and it also announced new image generation features in certain apps. The new version of Windows, which includes an x86-to-Arm translator called Prism, has been designed for Arm chips, and Microsoft announced that it has collaborated with leading developers, such as Adobe, to bring Arm versions of popular apps to the new version of Windows. (Where have I heard that before?)

The biggest new software feature exclusive to the Copilot Plus PCs is called “Recall.” Here is Tom Warren, reporting for The Verge:

Microsoft’s launching Recall for Copilot Plus PCs, a new Windows 11 tool that keeps track of everything you see and do on your computer and, in return, gives you the ability to search and retrieve anything you’ve done on the device.

The scope of Recall, which Microsoft has internally called AI Explorer, is incredibly vast — it includes logging things you do in apps, tracking communications in live meetings, remembering all websites you’ve visited for research, and more. All you need to do is perform a “Recall” action, which is like an AI-powered search, and it’ll present a snapshot of that period of time that gives you context of the memory…

Microsoft is promising users that the Recall index remains local and private on-device. You can pause, stop, or delete captured content or choose to exclude specific apps or websites. Recall won’t take snapshots of InPrivate web browsing sessions in Microsoft Edge and DRM-protected content, either, says Microsoft, but it doesn’t “perform content moderation” and won’t actively hide sensitive information like passwords and financial account numbers.

What makes Recall special — other than that none of the data it captures is sent back to Microsoft’s servers, which would be both incredibly invasive and entirely predictable for Microsoft — is that it only captures screenshots periodically as work is being done in Windows. Users can go to the Recall section of Windows and simply type a natural-language query to prompt an on-device LLM to search the library of automatically captured screenshots. The LLMs search text, videos, and images using multimodal functionality, and even transcribe spoken language using a new feature called “Live Captions,” also announced Monday.
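To make Recall’s mechanics concrete: the feature amounts to a local, periodically captured index that is searched after the fact, with user-chosen exclusions honored at capture time. Here is a toy sketch of that shape — the class names and substring search are my own illustrative inventions, not Microsoft’s implementation, which uses on-device models for semantic search:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Snapshot:
    taken_at: datetime
    app: str
    text: str  # text extracted from the screenshot (e.g., via OCR)


class RecallIndex:
    """Toy local index: snapshots captured periodically, searched on-device."""

    def __init__(self):
        self.snapshots = []
        self.excluded_apps = set()  # apps the user opted out of capturing

    def capture(self, app, text, when=None):
        # Mimic Recall skipping excluded apps (e.g., InPrivate browsing).
        if app in self.excluded_apps:
            return
        self.snapshots.append(Snapshot(when or datetime.now(), app, text))

    def search(self, query):
        # Stand-in for semantic search: plain case-insensitive matching.
        q = query.lower()
        return [s for s in self.snapshots if q in s.text.lower()]
```

The key privacy property the sketch mirrors is that everything — capture, storage, and search — happens in one local process; nothing is shipped to a server.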

Recall reminds me of Rewind, the Apple silicon-exclusive Mac app touted last year by a group of Silicon Valley entrepreneurs that continuously records one’s Mac screen to allow an LLM to search everything someone does on it. That app sparked privacy concerns because the processing was done in the cloud, not on-device, whereas Microsoft has repeatedly stated that no screenshots leave the device. I think it’s neat, but I’m unsure of its practicality.

Live Captions also translates 44 languages into English, whether the audio is playing in Windows or being picked up live by the microphones. Queries are processed entirely on-device, using the NPUs, and audio and video from all apps is transcribed — not just from apps that support it — meaning content from every website and program will receive automatic, mostly accurate subtitles. (This is something I hope Apple adds in iOS 18.)

I think Monday’s announcements are extremely intriguing, especially regarding the bombastic claims by Microsoft as to the new AI PCs’ battery life and performance, and I’m sure reviewers will thoroughly benchmark the new machines when they arrive in June. And the new Copilot features — while I’m still not a fan of the dedicated Copilot Key — also seem interesting, especially “Recall.” I can’t wait to see what people use it for.

Scarlett Johansson: OpenAI Hired a Soundalike Without My Permission

Jacob Kastrenakes, reporting for The Verge:

Scarlett Johansson says that OpenAI asked her to be the voice behind ChatGPT — but that when she declined, the company went ahead and created a voice that sounded just like her. In a statement shared to NPR, Johansson says that she has now been “forced to hire legal counsel” and has sent two letters to OpenAI inquiring how the soundalike ChatGPT voice, known as Sky, was made.

“Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,” Johansson writes. She says that Altman contacted her agent as recently as two days before the company first demoed the ChatGPT voice asking for her to reconsider.

Altman has made it clear that he admires Johansson’s work. He’s said that Her, which features Johansson as an AI voice assistant, is his favorite film; after the ChatGPT event last week, he posted the word “her,” seemingly in reference to the voice demo the company presented, which featured an assistant that sounded just like Johansson.

OpenAI said this morning that it was pulling the voice of Sky in order to address questions around “how we chose the voices in ChatGPT.” The Verge has reached out to OpenAI for comment.

Johansson says she was “shocked, angered and in disbelief” over how “eerily similar” the voice of Sky sounded to herself. OpenAI said the voice comes from an actor who they hired who is speaking in their normal speaking voice. The company declined to share the actor’s name, citing privacy concerns.

You can read Johansson’s letter here, and I encourage you to do so. Here is the story from her side:

  1. OpenAI asks Johansson to be the voice for ChatGPT. Johansson refuses, citing personal reasons.
  2. In September of last year, OpenAI hires another voice actor who sounds like her. The company launches the voice later in the year.
  3. OpenAI launches a new model earlier in May that is more expressive, highlighting the similarities between the voice, “Sky,” and Johansson.

I have absolutely no idea what Altman, OpenAI’s chief executive, was thinking with this atrocious decision. It clearly shows the company’s lack of regard for copyright laws and exemplifies the need for strong protections for actors in the age of artificial intelligence. As if this sleazy maneuver weren’t enough, rather than keeping it under wraps, Altman went ahead and posted “her” on the social media website X after the Monday “Spring Update” keynote, hosted by Mira Murati, the company’s chief technology officer. Did OpenAI seriously think Johansson, one of Hollywood’s most famous actresses, wouldn’t pursue legal action over this?

Altman could’ve claimed plausible deniability because he wasn’t directly involved in hiring the new voice actress, but his post about the movie, in which Johansson stars, links him to the chaos. And posting about the movie makes him look even worse from a moral standpoint; it’s almost a “just because you didn’t agree doesn’t mean I can’t clone your voice” type of sinister thinking — but maybe that’s just me being cynical. Even if Altman hadn’t posted, I still would’ve believed he was involved, because of his affinity for the film and because the voice sounds so eerily similar to Johansson’s.

Johansson isn’t out to get OpenAI — I don’t even think she’s very upset — but she does want some transparency as to whom it hired for the voice and how they were chosen. (Clearly because they sound like Johansson, though I find it difficult to believe OpenAI will willingly admit that.) I wish to know this information too, because in an age where deepfakes are so prevalent, transparency and openness are crucial. OpenAI, as the leader of the AI revolution, needs to take accountability here and respect copyright laws.

And no, I highly doubt this will alter Apple’s negotiations with OpenAI for iOS 18.

Slack Admits It’s Training LLMs on Private Messages

Will Shanklin, reporting for Engadget:

Slack trains machine-learning models on user messages, files, and other content without explicit permission. The training is opt-out, meaning your private data will be leeched by default. Making matters worse, you’ll have to ask your organization’s Slack admin (human resources, IT, etc.) to email the company to ask it to stop. (You can’t do it yourself.) Welcome to the dark side of the new AI training data gold rush.

Corey Quinn, an executive at DuckBill Group, spotted the policy in a blurb in Slack’s Privacy Principles and posted about it on X (via PCMag). The section reads (emphasis ours), “To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement.”

The opt-out process requires you to do all the work to protect your data. According to the privacy notice, “To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at with your Workspace/Org URL and the subject line ‘Slack Global model opt-out request.’ We will process your request and respond once the opt out has been completed.”

This is horrifying. I’m usually not one to worry much about public writing being used for large language models, but private direct messages and conversations within restricted Slacks ought to be off-limits. Slack is covering itself here by distinguishing between its official premium Slack LLMs — which cost money — and workspace-specific search tools, but there is no difference: they’re both artificial intelligence products, and they’re both trained on private, presumably encrypted-at-rest data. It is malpractice for Slack to hide this information in a document written by seasoned legal experts that no normal person will ever read, and the entire company should be ashamed of itself. Salesforce keeps pulling nonsense like this on its customers for no reason other than maximizing profit, and it is shameful. If there were a better product than Slack in its market, the Slack division of Salesforce would go bankrupt.

What makes matters worse — yes, even worse than training LLMs on private messages — is that customers have no way of opting out unless they ask their Slack administrator to email the company’s feedback address requesting the opt-out. There are two problems here: individual users can’t opt their own data out of training, and administrators have to email the company to prevent their employees’ data from being harvested by Salesforce. How is this kind of behavior legal, especially in Europe? Some rather frustrated Slack users are demanding the company make the default behavior opt-in rather than opt-out, but I wouldn’t even go that far. Slack needs to build a toggle for every employee or Slack user to turn data sharing off for themselves — and it needs to do it fast. Anything short of that is beyond unacceptable. These are private messages, not public articles or social media posts.

I don’t know how anyone can justify this behavior. It’s sleazy, rude, disrespectful, and probably in violation of some European privacy regulations. People have been able to trick LLMs into leaking their training data with relative ease, and that is not something Salesforce or Slack can mitigate with a couple of lines of code, because the flaw is inherent to the design of the models. The bogus statement from Slack’s social media public relations department was written by someone who is absolutely clueless about how these models work and how data can be extracted from them, and that, plainly, is wrong. Private user data should never be used to train any AI model whatsoever, regardless of who can use or access it. If training happens at all, it should be constrained to on-device machine learning, like Apple Photos, for example. Moreover, burying the disclosure of data scraping in a few lines of a privacy policy not a single customer will read is irresponsible. Shame on Salesforce, and shame on Slack.

Google Plays Catch-Up to OpenAI at This Year’s I/O

Google threw things at the wall — now, it hopes some will stick

Sundar Pichai, Google’s chief executive, onstage at Google I/O 2024. Image: Google.

At the opening keynote of its I/O developer conference on Tuesday, Google employed a strategy born of sheer desperation: Throw things at the wall and see what sticks. The company, famed for leading the artificial intelligence revolution within Silicon Valley for years, has been overtaken by none other than a scrappy neighbor with some help from Microsoft, one of its most notable archrivals. That neighbor, OpenAI, stunned the world just a day prior on Monday with the announcement of a new omni-modal large language model, GPT-4o, which features a remarkably capable and humanlike text-to-speech apparatus and state-of-the-art visual recognition technology. OpenAI first took the world by storm in November 2022 with the launch of its chatbot, ChatGPT, which instantly became one of the fastest-growing consumer technology products ever. From there, it has only been smooth sailing for the company, and everyone else has been trying to catch up — including Google.

Google went into overdrive, declaring a “code red” and putting all hands on deck after Microsoft announced a new partnership with OpenAI to bring the new generative pre-trained transformer technology to Bing. Last year, Google announced Bard, its AI chatbot meant to rival ChatGPT, only for OpenAI’s GPT-4 to run laps around it. Bard would consistently flub answers through hallucinations — phenomena where chatbots unknowingly yet confidently provide wrong answers due to a quirk in their design — fail to provide references, and ignore commands, placing it dead last in the rankings against its rivals. At last year’s I/O conference, Google hurriedly began adding the model to its existing Google Workspace products, like Google Docs and Gmail, but most users didn’t find it very useful due to its constant mistakes.

Later in the year, Google announced three new models to better compete with OpenAI: Gemini Nano, Gemini Pro, and Gemini Ultra. The three models — each with varying parameter counts and context token sizes — were each poised to perform different tasks, but Google quickly touted how Gemini Pro was comparable to GPT-3.5 and Gemini Ultra even beat GPT-4 in some circumstances. It put out a demonstration showcasing the multimodal features of Gemini Ultra, showed off Gemini Pro’s deep integration with Google products like YouTube and Google Search, and pre-installed the smaller Gemini Nano model onto Pixel phones in the fall to perform quick on-device tasks. And most important of all, to change Bard’s brand reputation, Google renamed its AI product and chatbot Gemini. Eventually, it attempted to put Gemini everywhere: in Google Assistant, in Google Search by way of Search Generative Experience, and in its own app and website. It was a fragmented mess — while the models were average at best, there were too many of them in too many places. They cluttered Google’s already complex ecosystem of products.

So, with the stage set, expectations were high for Tuesday’s I/O event, where Google was poised to clean up the clutter and consolidate the AI mess it had entangled itself in so hastily over the last 16 months. And, in typical Google fashion, the company utterly flopped: instead of consolidating, it leaned into the mess, throwing Gemini into every Google product imaginable. Google Search now has Gemini built in for content summaries, replacing SGE for all U.S. users beginning this fall; Gmail now has Gemini search and summaries to shorten threads, find old emails, and draft responses; Android now has a contextually aware version of Gemini that can be asked questions depending on user selections; and every nook and cranny of Google’s services has been dusted with the illustrious sparkles of AI in some capacity. I tried to make some sense of the muddled features, and here is what I believe Google’s current master plan is:

  1. Let developers toy with Gemini however they would like, lowering prices for the Gemini application programming interface and making new open-source LLMs to lead the way in the development and production of AI-focused third-party applications.

  2. Bring Gemini to every consumer product for free to increase user engagement and deliver shareholder value to please Wall Street.

  3. Unveil new moonshot projects to excite people and sell them on the prospect of AI.

I came up with this thesis after closely observing Google’s announcements on Tuesday, and I think it makes sense from an organizational, business perspective. In practice, however, it just looks desperate. Tuesday was catch-up day for Google — the company did not announce anything genuinely revolutionary, but rather focused its efforts on reclaiming its top spot in the AI space. Whether the strategy will yield a positive result is yet to be determined. In the meantime, consumers are left with boring, unexciting events that mainly function as shareholder advertisements instead of showcases for new technology. Google I/O was such an event, its steam stolen by OpenAI’s presentation just the day prior — and that is entirely the fault of Google, not OpenAI. Here are my takeaways from this year’s keynote.

Gemini for the Web

Since the advent of ChatGPT, AI chatbots and their makers have been intent on upending the norms of the web. Publishers have reported frustration over decreased traffic, users are inundated with cheap AI-generated spam whenever they make a Google search, and it is harder than ever to ensure answers are accurate. Google, without a doubt, bears some responsibility for this after its beta introduction of SGE last year, which automatically queries the web and quickly writes a summary pinned to the top of the results page. And even before that, Gemini was engineered to search the web to generate its answers, providing inline citations for users to fact-check its responses.

In practice, though, the citations and links to other websites are minuscule and are rarely clicked because most of the time, they’re simply unneeded. Instead of taking steps to address this information conundrum that has plagued the web for over a year, Google leaned into it at I/O this year — both in Google Search and Gemini, the chatbot.

First, Gemini: Gemini had fallen behind in sheer number of features compared to OpenAI’s GPT-4, so Google announced some remedies to better compete in the saturated chatbot market. The company announced it would build a conversational two-way voice mode into Gemini — both the web version and the mobile app — similar to OpenAI’s announcements from Monday, allowing users to speak to the chatbot directly and receive speedy answers. It said the feature, which will become available later this year, will be conversational, unlike Google Assistant, which currently only speaks answers aloud without asking follow-up questions.

However, it is unclear how this differs from the Gemini Google Assistant mode available to Pixel users now. Google Assistant on Pixel phones has two modes: the standard Google Assistant and Gemini, which uses the chatbot to generate answers. Moreover, there is already feature parity between the Gemini app and Google Assistant on Android, further muddling the feature sets of Google’s AI products. This is what I mean by Gemini coming to every nook and cranny of Google’s software. Google needs to clean up this product line.

The new version of Gemini will also allow users to create custom, task-specific mini chatbots called “Gems,” a clever play on “Gemini.” This feature is meant to rival OpenAI’s “GPTs,” customizable GPT-4-powered chatbots that can be individually given instructions to perform a specific task. For example, a GPT can be programmed to search for grammar mistakes whenever a user uploads a file — that way, there is no need to describe what to do with every file that is uploaded on the user’s end as someone would have to do with the normal version of ChatGPT. Gems are a one-to-one knockoff of GPTs — users can make their own Gems and program them to perform specific tasks beforehand. Gems will be able to access the web, potentially becoming useful research tools, and they will also have multimodal functionality for paying Gemini Advanced users, allowing image and video uploads. Google says Gems will be available sometime in the summer for all users in the Gemini app on Android, Google app on iOS, and on the web.
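Conceptually, a Gem or a GPT is little more than a standing set of instructions bundled with a model, prepended to every conversation so the user never has to restate the task. A minimal sketch of that idea — the `Gem` class and its message format are my own hypothetical illustration, loosely modeled on the system/user message conventions common to LLM chat APIs:

```python
from dataclasses import dataclass, field


@dataclass
class Gem:
    """A task-specific chatbot: standing instructions bundled with a model."""
    name: str
    instructions: str
    model: str = "some-llm"  # placeholder, not a real model ID
    history: list = field(default_factory=list)

    def build_prompt(self, user_message):
        # The standing instructions are prepended automatically, so the
        # user never restates the task with each message or upload.
        return ([{"role": "system", "content": self.instructions}]
                + self.history
                + [{"role": "user", "content": user_message}])


# A Gem programmed once to hunt for grammar mistakes, per the GPT
# example above — every subsequent upload is checked without further setup:
proofreader = Gem(
    name="Proofreader",
    instructions="Find grammar mistakes in any text the user sends.",
)
```

The design point is that the customization lives entirely in the prompt assembly; the underlying model is unchanged, which is why Gems and GPTs are so quick for vendors to ship.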

And then, there is Google Search: Since the winter, Google has been slowly rolling out its SGE summaries to all web users on Google. The summaries appear with an “Experimental” badge and big, bold answers, and typically generate a second or two after the search has been made. The company has now fully renamed the experimental feature “search summaries,” removing it from beta testing (it was previously available only through Google’s “Labs” portal) and vowing to expand it to all U.S. users by the end of the year. The change has the potential to entirely rewrite the internet, killing traffic to publishers that rely on Google Search to survive and sell advertisements on their pages, as well as disincentivizing high-quality handwritten answers on the web. The Gemini-powered search summaries do provide sources, but they are often buried below the summary and seldom clicked by users, who are commonly content with the short AI-generated blurb.

The summaries are also prone to making mistakes and fabricating information, even though they’re placed front-and-center in the usually reliable Google Search interface. This is extremely dangerous: Google users are accustomed to reliable, correct answers appearing in Google Search and might not be able to distinguish between the new AI-generated summaries and the old content snippets, which remain below the Gemini blurb. No matter how many disclaimers Google adds, I think it is still too early to add this feature to a product used by billions. I am not entirely pessimistic about the concept of AI summaries in search — I actually think this is the best use case for generative artificial intelligence — but in its current state, it is best to leave this as a beta feature for savvy or curious users to enable for themselves. The expansion and improvement of the summaries were a marquee feature of Tuesday’s presentation, taking up a decent chunk of the address, and yet Google made an egregious error in its promotional video for the product, as spotted by Nilay Patel, the editor in chief of The Verge. That says a lot.

Google did improve its summaries feature before beginning the mass rollout, though: it touted what it calls “multi-step reasoning,” allowing Google Search to essentially function as the Gemini chatbot itself so users can enter multiple questions at once into the search bar. Google searches aren’t typically conversational; most people perform several searches in a row to fully learn something. This practice, as Casey Newton wrote for Platformer, used to be enjoyable. Finding an answer, repeating the search with more information, and clicking another one of the 10 blue links is a ritual practiced by hundreds of millions of people daily, and Google seems intent on destroying it.

Why the company has decided to upend its core search product is obvious: Google Search is bad now. Nowadays, Google recommends AI-generated pages engineered for maximum clicks and advertising revenue rather than useful, human-written sites, leading users to append “Reddit” or “Twitter” to their queries to find real answers written by real people. Google has tacitly shown that it has no interest in fixing the core problem at hand — instead, it is just closing up shop and redirecting users to an inferior product.

Google’s objective at I/O was to circumvent the problem of the internet no longer being helpful by making AI perform searches automatically. Google showcased queries that notably included the word “and” in them — for example: “What is the best Pilates studio in Boston and how long would it take to walk there from Beacon Hill?” Before Tuesday, one would have to split that question into two: “What is the best Pilates studio in Boston?” and “Travel time between the studio and home.” (The latter would probably be a Google Maps search.)

It is a highly specific yet somehow absolutely relevant example of Google throwing in the towel on web search. When Google detects a multi-step query, it does not present 10 blue links that might have the answer to both questions, because that would be all but impossible. (Very few websites would have such specific information.) It instead generates an AI summary of information pulled from all over the web — including from Google Maps — effectively negating the need to do further research. While this might sound positive, it in reality kills the usefulness of the internet by relegating the task of searching for information to a robot.

People will learn less from this technology, they will enjoy using the internet less, and as a result, publishers will be less incentivized to add to the corpus of information Gemini uses to provide answers. The new AI features are good short-term solutions to improve the usefulness of the world’s information superhighway, but they cause a major chicken-and-egg problem that Google has either continuously ignored or purposefully neglected. This pressing issue does not fit well into the quick pace of a presentation, but it will worsen an already noticeable decline in high-quality information on the web. It is a short-term bandage over the wound that is lazy, money-hungry analytics firms — once the bandage withers and falls off, the wound will still be there.

That is not to say that Google should not invest in AI at all, because AI pessimism is a conservative, cowardly ideology not rooted in fact. Instead, Google should use AI to remedy the major problem at hand, which it caused itself. AI can be used to find good information, improve recommendation algorithms, and help users find answers to their questions in fewer words. Google is more than capable of taking a thoughtful approach to this glitch in the information ecosystem, and that is apparent from its latest enhancements to its traditional search product: asking with video and Circle to Search.

Asking questions with video is exactly the type of enhancement AI can bring without uprooting the vast library of information on the web. The new search feature is built into Google Lens but utilizes Google’s multimodal generative AI to analyze video clips recorded through the Google mobile app along with a quick voice prompt. When a recording is initiated, the app asks users to describe a problem, such as why a pictured record player isn’t working. It then uses AI to understand the prompt and video, then generate an answer with sources pulled from the web.

The reason this is more groundbreaking than worrisome is that it (a) enables people to learn more than they would otherwise, (b) adds a qualitative improvement to the user experience, and (c) encourages authors to contribute information to be featured as one of the sources for the explanation. It is just enough of a change to the habits of the internet that the result is a net positive. Google is doing more than simply performing Google searches by itself, then paraphrasing the answers — it is understanding a query using a neural network, gathering sources, and then explaining them while also providing credit. In other words, it isn’t a summary; it’s a new, remarkable piece of work.

It is safe to say that for now, I am pessimistic about Google’s rethinking of the web. Google’s chatbots consistently provide incorrect answers to prompts; the summaries’ placement alongside the 10 blue links — which aren’t even 10 blue links anymore — can be confusing to non-savvy users; and the new features feel more like ignorant, soulless bets on an illusory “new internet” rather than true innovations that will improve people’s lives. That isn’t to say there is no future for generative AI in search — there is, in myriad ways. But the sheer unwillingness on Google’s end to truly embrace generative AI’s quirks is astonishing.

Gemini for Users

Google’s apparent attempt to reinvent the internet does not just stop at the web — it also extends to its personal services, like Google Photos and Gmail. This extension first took place last year at Google I/O, and many of Tuesday’s announcements seemed like déjà vu, but this year the company seemed more intent on utilizing the multimodal capabilities and larger context lengths of its latest LLMs to improve search capabilities and provide better summaries, an advantage it hadn’t developed last May.

First, Google Photos, which the company opened the event with, surprisingly. Google described a limitation of basic optical character recognition-based search: Say someone wanted to find their license plate number in a sea of images of various cars and other vehicles. Previously, they would have to sift through the photos until they found one of their car, but with multimodal AI, Gemini can locate the photos of one’s car automatically, and then display the license plate number in a cropped format. This enhanced, contextual search functions like a chatbot within Google Photos to make searching and categorizing photos easier. The AI, which uses Gemini under the hood, uses data from a user’s photo library, such as facial recognition data and geolocation, to find photos that might fit specific parameters or a theme. (One of the examples shown onstage was a user asking for photos of their daughter growing up.)

In Gmail, Google announced new email summarization features to “catch up” on threads via Gemini-written synopses. Additionally, the search bar in Gmail will allow users to sift through messages from a particular sender to find specific bits of information, such as a date for an event or a deadline for a task, without having to open each email individually. The new features — while not improving the traditional Gmail search experience used to find attachments and sort between categories like the sender and send date — do fill the role of a personal assistant in many ways. And they’re also present in the Gemini chatbot interface, so users can ask Gemini to fetch emails about a given subject in the middle of a pre-existing chat conversation. Google said the new features would roll out to all users beginning Tuesday.

The new additions are reminiscent of Microsoft’s Outlook / Microsoft 365 features first debuted last year, and I surmise that is the point. Google’s flagship Gmail service had next to zero AI features, whereas now it can summarize emails and write drafts for new ones, all inline. However, these new Gemini-powered AI features create an interesting paradox I outlined last year: Users will send emails using AI only for the receiver to summarize them using AI and draft responses synthetically, which the sender will receive and summarize using AI. It is an endless, unnecessary cycle that exists due to the quirks of human communication. I do not think this is the fault of Google — it’s just interesting to see why these tools were developed in the first place and to observe how they might be used in the real world.

My favorite addition, however, is what settles the AI hardware debate that has become such a hot topic in recent weeks: Gemini in Circle to Search. Circle to Search — first announced earlier this year — allows users to capture a screenshot of sorts, then circle a subject for Google Lens to analyze. Now, Circle to Search adds the multimodal version of Gemini, Gemini Ultra, as well as Gemini Nano, which runs locally on Pixel phones for smaller, more lightweight queries. This one simple-on-paper addition to Circle to Search, an already unsophisticated feature, nearly kills both the Rabbit R1 and Humane Ai Pin. With just a simple swipe gesture, any object — physical or virtual — can be analyzed and researched by an intelligent, capable LLM. It’s novel and inventive, and it eliminates the often substantial barrier between trying to understand something in the spur of the moment and accessing information. It makes the process of searching simple, which is exactly Google’s mission statement.

Circle to Search does not summarize the web in the way other Gemini features do because it is mostly powered by a lightweight model with a smaller context window that runs on-device. Instead, it falls back to the web in most instances, but what it does do is perform the task of writing the Google search. Instead of having to enter into Google a query like “orange box with AI designed by Teenage Engineering,” a simple screenshot can automatically write that search and present links to the Rabbit R1. It is a perfect, elegant, amazing implementation of AI now supercharged by an LLM. Google says this type of searching is context-aware, which is a crucial tenet of useful information gathering because information is of no use without context. On Google, that awareness must be manually entered or inferred, but with Circle to Search, the system knows precisely what is happening on a user’s screen.

This might sound like the standard Google Lens, but it is much more advanced than that. It can summarize text, explain a topic, or use existing user data, such as calendar events or notes, to personalize its responses. And because it has the advantage of context awareness, it can be more personal, succinct, and knowledgeable — exactly what the AI devices from Rabbit and Humane lack. Circle to Search with Gemini is built into the most important technological device, and it is exactly the best use for AI. Yes, it might reduce the number of Google searches typed in, upsetting publishers, but it makes using computers more intuitive and personal. Google should run with Circle to Search — it is a winner.

Circle to Search is also powered by a new LLM Google announced during its presentation2, called LearnLM, designed for educational settings and based on Gemini. LearnLM was demonstrated with a Circle to Search query where some algebra homework was presented — the chatbot was able to explain the answer thoroughly, using the correct typography and notation, too. Presenters also described the LLM as available in Google Classroom, Google’s learning management software, and on YouTube, to explain “educational videos.” The YouTube chatbot interface, which was first beta tested among select YouTube Premium subscribers last year, will be available more broadly and will enable users to ask questions about certain videos and find comments more easily. It is unclear exactly how LearnLM differs from Gemini, but I assume LearnLM has a smaller, more specific training dataset to address hallucinations.

Here are some miscellaneous additions also announced Tuesday:

  • NotebookLM, Google’s LLM-powered research tool that users can upload custom training data to, now uses Gemini to provide responses. The tool is mainly used to study for tests or better understand notes; it was first released to the general public last year. The most noteworthy addition, however, was the new conversation mode, which simulates two virtual characters having a faux conversation about a topic using the user-provided training data. Users can then interject with a question of their own by clicking a button, which pauses the “conversation” — when a question is asked, the computer-generated voices answer it within the context of the training data.

  • On-device AI, powered by Gemini Nano, will now alert users when a phone call might be a scam. This feature will, without a doubt, be helpful for seniors and the less technically inclined. Gemini will listen to calls — even ones it doesn’t automatically flag as spam — and show an alert if it detects it might be malicious.

Google, for years, has excelled at making the smartest smartphones, and this year is no exception. While the company’s web AI features have left me frustrated and skeptical, the user-end features are much more Google-like, adding delight and usefulness while also putting to rest AI grifts with no value. Many of these features might be Android-exclusive, but that makes me even more excited for the Worldwide Developers Conference when Apple is rumored to announce similar enhancements and additions to iOS. The on-device AI feature announcements at Google I/O this year were the only times I felt somewhat excited about what Google had to announce Tuesday, though it might have also helped that those features were revealed toward the beginning of the keynote.

Gemini for Investors

Project Astra is Google’s name for Silicon Valley’s next AI grift. By itself, the technology is quite impressive in the same way that Monday’s OpenAI event was: a presenter showcased how Project Astra could, in real-time, identify objects it looked at via a smartphone camera, then answer questions about them. It was able to read text from a whiteboard, identify Schrödinger’s cat, and name a place just from looking outside a window. It’s a real-time, multimodal AI apparatus, just like OpenAI’s, but there is only one problem: we don’t know if it will ever exist.

Google has a history of announcing products to do nothing more than hike its stock price, like Google Duplex, a conversational voice AI system that was poised to make calls to secure reservations or perform other mundane tasks from a simple text prompt. Project Astra feels exactly like one of those products because of how vague the demonstration was: The company did not provide a release date, more details on what it may be able to do, or even what LLMs it might be powered by. (It doesn’t even have a proper name.) All the audience received on a sunny spring morning in Mountain View, California, was a video of a smartphone, and later some smart glasses, identifying physical objects while answering questions in an eccentric voice.

The world had already received that video just a day prior, except that time, it received a release date too. And that is a perfect place to circle back to the original point I made at the very beginning of this article: OpenAI stole Google’s thunder, ate its lunch, took its money, and got all the fame. That was not OpenAI’s fault — it was Google’s fault for failing to predict the artificial intelligence revolution. For being so disorganized and unmotivated, for having such an incompetent leader, for being unfocused, and for not realizing the potential of its own employees. Google failed, and now the company is in overdrive mode, throwing everything at the wall and seeing what sticks. Tuesday’s event was the final show — it’s summit or bust.

Tuesday’s Google I/O served less to please users than to please investors. It was painfully evident in every scene how uninspired and apathetic the presenters were. None of them had any ambition or excitement to present their work — they were just there because they had to. And they were right: Google had to be there on Tuesday, lest its tenure as the leader of AI come to an end. I’d argue that has already happened — Microsoft and OpenAI have already won, and the only way for Google to make a comeback is by fixing itself first. Put on your oxygen mask before helping others; address your pitfalls before running the marathon.

Google desperately needs a new chief executive, new leadership, and some new life. Mountain View is aimless, and for now, hopeless. The mud is not sticking, Google.

  1. Gemini Nano, Gemini Pro, and Gemini Ultra are Google’s last-generation models. Gemini 1.5 Pro is the latest, and performs equally to Gemini Ultra, though without multimodal capability. Google also announced Gemini Flash on Tuesday, which is smaller than Gemini Nano. It is unclear if Gemini Flash is built on the 1.5 architecture or the 1.0 one. ↩︎

  2. Here is a handy list of Google’s current LLMs. ↩︎

OpenAI Launches ChatGPT 4o, New Voice Mode, and Mac App

OpenAI on Monday announced a slew of new additions to ChatGPT, its artificial intelligence chatbot, in a “Spring Update” event streamed in front of a live audience of employees in its San Francisco office. Mira Murati, the company’s chief technology officer, led the announcements alongside some engineers who worked on their development while Sam Altman, OpenAI’s chief executive, live-posted from the audience on the social media website X. I highly recommend watching the entire presentation, as it is truly one of the most mind-blowing demonstrations one will ever see. It is just 26 minutes long and is available for free on OpenAI’s website. But here is the rundown of the main announcements:

  1. A new large language model, called GPT-4o, with “O” standing for “omni.” It is significantly speedier at producing responses than GPT-4 while being as intelligent as the older version of the generative pre-trained transformer.
  2. A new, improved voice mode that integrates a live camera so ChatGPT can see and speak concurrently. Users can interrupt the robot while it speaks, and the model acts more expressively, tuning its responses to the user’s emotions.
  3. A native ChatGPT application for macOS with which users can ask the chatbot questions with a keyboard shortcut, share their screen for questions, and ask ChatGPT about clipboard contents.

Again, the video presentation is compulsory viewing, even for the less technically inclined. No written summary will be able to describe the emotional rush felt while watching a robot act like a human being. The most compelling portion of the demonstration was when the two engineers spoke to the chatbot on an iPhone, through the app, and watched it rattle off eloquent, human-like responses to questions asked naturally. It really is something to behold.

However, something stuck out to me throughout the banter between the humans and the chatbot: the expressiveness. Virtual assistants, no matter how good their text-to-speech capabilities may be, still speak like inanimate non-player characters, in a way. Their responses are tailored specifically to questions posed by the users, but they still sound pre-written and artificial due to the way they speak. Humans use filler words, like “um,” “uh,” and “like” frequently; they take long pauses to finish thoughts before speaking them aloud; and they read and speak expressively, with each word sounding different each time. Emphasis might be placed on different parts of the word, it might be said at different speeds — the point is, humans do not speak perfectly. They speak like humans.

The new voice mode version of ChatGPT, ChatGPT 4o, speaks just like a real person would. It laughs, it takes pauses, it places emphasis on different parts of words and sentences, and it speaks loosely. It acts more like a compassionate friend than a professional assistant — it does not aim to be formal in any way, but it also tries to maintain some degree of clarity. It won’t meander like a person may, but it does sound like it may meander. For example, when the chatbot viewed a piece of paper with the words “I ♥ ChatGPT,” it responded almost bashfully: “Oh, stop it, you’re making me blush!” Aside from the fact that robots cannot blush, the way it said “oh” and the pause that came after it had the same expression and emotion it would carry if a human had said it. The chatbot sounded surprised, befuddled, and flustered, even though it had prepared that response after solving what was essentially just a tough algebra problem.

Other instances, however, seemed pretty awkward: ChatGPT seemed very talkative in the demonstration, such as when the presenters made mistakes or asked the robot to wait a second. Instead of simply replying “Sure” or just firing back with an “mhmm” as a person would, it gave an annoyingly verbose answer: “Sure, I’d love to see it whenever you’re ready!” No person would speak like that unless they were trying to be extra flattering or appear overly attentive. It could be that ChatGPT’s makers programmed the robot to perform this way for the presentation just so the audience could hear more of the Scarlett Johansson-esque voice straight from the movie “Her,” but the constant talkativeness broke the immersion and, frankly, made me want to tell it to quiet down a bit.

The robot also seemed oddly witty, as if it carried some sass in its responses. It wasn’t rude, of course, but it sounded like a very confident salesperson when it should’ve been more subdued. It liked to use words like “Whoops!” and added some small humor to its responses — again, signs of wordiness. I assume the reason for this is to make the robot sound more humanlike because awkward silences are unpleasant and may lead users to think ChatGPT is processing information or not ready to receive a request. In fact, while in voice mode, it’s always processing information and ready to receive requests. It can be interrupted with no qualms, it can be asked different questions, and it can wait on standby for more information. Because GPT-4o is so quick at generating responses, there is zero latency between questions, which is jarring to adjust to but also mimics personal interactions.

Because ChatGPT has no facial expressions, it has to rely on sometimes annoying audio cues to keep the conversation flowing. That doesn’t mean ChatGPT can’t sense users’ emotions or feelings, though — the “O” in GPT-4o enables it to understand tacit intricacies in speech. It can also use the camera to detect facial expressions, but the more interesting use was what it could do with its virtual apparatus. Not only can users speak to ChatGPT while it is looking at something by way of its “omni-modal” capabilities, but users can share their computer screens and make selections on the fly to receive guidance from ChatGPT as if it were a friend looking over their shoulder. An intriguing demonstration was when the robot was able to guide a user through solving a math equation, identifying mistakes as they were made on the paper without any additional input. That was seriously impressive. Another example was with writing code: ChatGPT could look at some code in a document and describe what it did, then make modifications to it.

ChatGPT 4o’s underlying technology is still OpenAI’s flagship GPT-4 LLM, which remains available for paying customers — though I can’t see why anyone would use it, as it’s worse and has lower usage limits. But the new LLM is now trained on audio and visual data in addition to text. Previously, as Murati described during the event, ChatGPT would have to perform a dance of transcribing speech, describing images, processing the information like a normal LLM text query, and then finally running the answer through a text-to-speech model. GPT-4o performs all of those steps inherently as part of its processing pipeline. It natively supports multimodal input and processes it naturally without performing any conversions. It knows what objects are in real life, it knows how people speak, and it knows how to speak like them. It is truly advanced technology, and I can’t wait to use it when it launches “in the coming weeks.”

While the concept of a truly humanlike chatbot is still unsettling to me, I feel like we’ll all become accustomed to assistants such as the one OpenAI announced on Monday. And I also believe they’ll be more intertwined with our daily lives due to their deep integration with our current technology like iPhones and Macs, unlike AI-focused devices (grifts) like the ones from Humane and Rabbit. (The new Mac app is awesome.) It’s an exciting, amazing time for technology.

Good Riddance to that ‘Crush!’ Ad

Tim Nudd, reporting for Ad Age:

Apple apologized Thursday for a new iPad Pro commercial that was met with fierce criticism from creatives for depicting an array of creative tools and objects—from a piano, to a camera, to cans of paint—being destroyed by an industrial crusher.

The tech giant no longer plans to run the commercial on TV…

But many viewers had a more chilling interpretation, seeing the spot as a grim representation of technology crushing the history of human creativity—something the creative industry is already existentially worried about with the rise of AI.

In an exclusive statement obtained by Ad Age, Apple apologized for the “Crush” spot and said it didn’t mean to cause offense among its creative audience.

“Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world,” said Tor Myhren, the company’s VP of marketing communications. “Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.”

The spot rolled out on Apple’s YouTube and CEO Tim Cook’s X account on Tuesday, but had not received any paid media. Plans for a TV run have now been scrapped.

This is the video in question. Two things:

  1. This is the first time I have seen Apple pull an advertisement from the airwaves in recent memory. The backlash was fierce this time around, with many feeling frustrated and upset at the (terrible) visual of these beautiful pieces of technology and instruments being crushed by what looked like a hydraulic press. I understand what Apple was aiming for here — that the new iPad Pro is powerful enough to replace all of these tools while being remarkably thin — and in a way, the imagery fits the theme. But in practice, looking at the commercial is just sad. I understand why so many professionals — the target market for the advertisement, too — were disturbed by this video, and I think Apple made the right decision here. I appreciate how the company has handled this situation; it takes courage to remove the main commercial for a star product just a day after it was announced.

  2. When I first viewed the advertisement during Apple’s Tuesday event, I wasn’t very perturbed by it, but that was mostly because I wasn’t paying much attention. But after Cook posted the video on the social media website X, I watched it again after reading some posts from filmmakers and other creators about how it made them feel, and I was suddenly uneasy. This commercial comes at a time when much of the creative industry is alarmed by the advent of generative artificial intelligence. For their precious tools, like guitars, pianos, and paints, to be destroyed and replaced by a slender tablet marketed as an “AI-focused” device is cruel. I think Apple could’ve instead offered a brighter picture of how the new iPad Pro could be used, featuring creators in their working spaces using the iPad to enhance their workflows. Nobody is seriously going to throw out their drum kit to replace it with the AI-powered drummer in the new version of Logic Pro announced Tuesday, so why advertise the device like that?

Apple, in the words of Myhren, the company spokesperson, truly did “miss the mark.” It’s unusual coming from Cupertino, which typically makes the very best, most awe-inspiring advertisements. For example, I thought the digital campaign that followed the event comparing the new iPad Pro to a teal iPod nano was great — it is peak Apple, just as Steve Jobs would’ve intended. I know Apple values and loves physical, antique objects, even if they’re from another era — just look at how much the company celebrates its history in so many of its advertisements. I don’t know why the team tasked with producing this commercial chose to portray the new iPad Pro this way in such a stunning deviation from decorum.

Thoughts on Apple’s ‘Let Loose’ Event

The thinnest, most powerful iPads take center stage

An artistic graphic made by Apple of a bunch of hand-drawn Apple logos, used as promotional material for the “Let loose” event. Image: Apple.

Apple on Tuesday announced updates to its iPad lineup, including refreshed iPad Air and iPad Pro models, adding a new, larger size to the iPads Air and new screen technology and processors to both new iPads Pro. The company also announced new accessories, such as a new Apple Pencil Pro and Magic Keyboard for the iPads, as well as software updates to its Pro apps on iPadOS. The new announcements come at a time when Apple’s iPad lineup has remained stagnant — the company has not announced new tablets since October 2022, when the iPad Pro was last updated with the M2 chip. On Tuesday, Apple gave the iPad Air the M2 — an upgrade from the previous M1 from when it was last updated in 2022 — and the iPad Pro the M4, a new processor with more cores, a custom Display Engine, and enhanced Neural Engine for artificial intelligence tasks.

Most iPad announcements as of late aren’t particularly groundbreaking — more often than not, iPad refreshes feature marginal improvements to battery life and processors, and Apple usually resorts to rehashing old iPadOS feature announcements during its keynotes to fill the time. Tuesday’s event, however, was a notable exception: Apple packed the 38-minute virtual address chock full of feature enhancements to the high-end iPads, with Tim Cook, the company’s chief executive, calling Tuesday “the biggest day for iPad since its introduction” at the very beginning of the event. I tend to agree with that statement: The iPad Pro, for the first time ever, debuted a new Apple silicon processor before the Mac itself; it now features a “tandem” organic-LED display with two panels stacked together to appear brighter; and it’s now thinner and lighter than ever before. These are not minor changes.

But, as I’ve said many times before, I think the biggest limitation to the iPad’s success is not the lack of killer hardware, but the lack of professional software that allows people to create and invent with the iPad. While Apple’s “magical sheet of glass” is now “impossibly thin” and more powerful than Cupertino’s lowest-end $1,600 MacBook Pro announced just last October, its software, iPadOS, continues to be worthless for anything more than basic computing tasks, like checking email or browsing the web. And while the new accessories, like the new Magic Keyboard made out of aluminum featuring a function row, are more professional and sturdy, they still don’t do anything to make the device more capable for professional users. Add to that the $200 price increase — the base-model 11-inch iPad Pro now starts at $1,000, while the larger 13-inch model starts at $1,300 — and the new high-end iPads feel disappointing. I don’t think the new iPads Pro are bad — they’re hardly so — or even a bad value, knowing how magical the iPad feels, but I wish they did more software-wise.

Here are my takeaways from Tuesday’s “Let loose” Apple event.

iPads Air

The easiest-to-cover announcement was the new iPads Air — plural. Before Tuesday, the iPad Air — Apple’s mid-range tablet — only came in one size: 10.9 inches. Now, the device comes in two sizes: the same 11-inch smaller version and a new 13-inch form factor. Aside from the size, the two models are identical in their specifications. Both models feature M2 chips, their cameras have been relocated to the horizontal edge to make framing easier due to how most users hold iPads, and storage options now go up to 1 terabyte. The smaller model’s prices also remain the same, starting at $600, and the 13-inch version sells for $750. Starting storage has also been increased to 128 gigabytes, and there is now a 512-GB variant.

The new iPads Air, otherwise, are identical to the last-generation model, with the same camera and screen resolutions and mostly identical accessories support. The first-generation Magic Keyboard from 2020 remains compatible, but the second-generation Apple Pencil from 2018 that worked with the previous model is not. (More on this later.) They both come in four colors — Space Gray, Blue, Purple, and Starlight — and ship May 15, with pre-orders open on Tuesday.

I am perplexed by the iPads Air, particularly the smaller version, which is often more expensive than a refurbished last-generation iPad Pro of the same size. Choosing to buy the latter would be more cost-effective, and the iPads Pro also have Face ID and a 120-hertz ProMotion display. Add to that the better camera system and identical processor, and I truly don’t see a reason to purchase a new (smaller) iPad Air. The larger model is a bit of a different case, since buying a larger refurbished iPad Pro would presumably be more expensive, so I can understand if buyers might want to buy the newer 13-inch iPad Air for its larger screen, but the low-end model continues to be a fantastically bad value.

The M4

Rather than use October’s M3 processor in the new iPads Pro, Apple revealed a new system-on-a-chip to power the new high-end tablet: the M4. Exactly as predicted by Mark Gurman, a reporter at Bloomberg with an astonishing track record for Apple leaks1, the new M4 is built on Taiwan Semiconductor Manufacturing Company’s enhanced second-generation 3-nanometer fabrication process called N3E. The new process will presumably provide efficiency and speed enhancements, but I think they will be negligible due to iPadOS’ limited feature set and software bottlenecks. The processor, by default, is binned2 to a nine-core central processing unit — with three performance cores and six efficiency cores — and a 10-core graphics processor, but users who buy the 1- or 2-TB models will receive a non-binned 10-core CPU with four performance cores. The low-end storage tiers also only have 8 GB of memory, whereas the high-end versions have 16 GB, though both versions still have the same memory bandwidth at 120 gigabytes per second.

John Ternus, Apple’s senior vice president of hardware engineering, repeatedly mentioned during the event that the new iPad Pro would not be “possible” without the M4 chip, but I struggle to see how that is true. The new processor has what Apple calls a “Display Engine,” which Apple only made a passing reference to, presumably because it is not very impressive. As far as I know, the M3’s “Display Engine,” so to speak — which is already present in MacBooks Pro with the M3 — powers two external displays, so I’m having a hard time understanding what is so special about the OLED display found in the new iPads that warrants the upgraded, dedicated Display Engine. (It isn’t even listed on Apple’s “tech specs” page for the iPads Pro, for what it’s worth.)

Whatever the Display Engine’s purpose may be, Apple claims the M4 is “1.5 times faster” in CPU performance than the M2, though, once again, I don’t see the point of the performance improvements because iPadOS is so neutered compared to macOS. I have never had a performance issue with my M2 iPad Pro, and I don’t think I will notice any difference when I use the M4 model. Other than for the cynical reason of trying to shift more iPad sales during Apple’s next fiscal quarter, I don’t see a reason for the M4’s existence at all. I’m unsurprised by its announcement, but also awfully confused. Expect to see this processor in refreshed Mac laptops in the fall, too.

iPads Pro

The star of the show, of course, was the new iPad Pro lineup, both the 11-inch and 13-inch models. (There is no longer a “12.9-inch” model, which I am grateful for.) Both models have been “completely redesigned” and feature new displays, cases, processors, and accessories. The update is the largest since the complete redesign and nixing of the Home Button and Lightning port in 2018, but it isn’t as monumental as that year’s revamp. From afar, the new models look identical to 2022’s versions, aside from the redesigned camera arrangement, which is now color-matched to the device’s aluminum body à la iPhones, whereas it was previously just made out of black glass. The displays are now “tandem OLED” panels, which use a special technology to fuse two OLED panels for maximum brightness and earn the display a new name of “Ultra Retina XDR.” (The iPhone’s non-tandem OLED display is called “Super Retina XDR,” and the previous generation’s 12.9-inch model’s mini-LED display was called the “Liquid Retina XDR” display.) And just like the iPads Air, the iPads Pro’s front-facing camera has been relocated to the horizontal edge.

Most impressively of all, Apple managed to thin the iPads down significantly from their previous girth. Apple, in a Jony Ive-like move, called the new 13-inch model the “thinnest device” it has “ever made” — even thinner than the iPod nano, which held the title previously. Ternus, the Apple executive, also insisted that the device doesn’t compromise on build quality or durability, though I would imagine the new model is easier to bend and break than before. (Tough feat.) I do not understand the obsession with thinness here, but the new model is also lighter than ever before due to the more compact OLED display. The new iPads Pro are so thin that the Apple Pencil hangs off the edge when magnetically attached to the side, which may be inconvenient when the iPad is set on a table; Thunderbolt cables plugged into the iPad also protrude upward from the body, a consequence of the sheer thinness. One thing is for certain, however: The new iPads Pro do look slick, especially in the new Space Black finish.

The thinness is a byproduct — or consequence, rather — of the beautiful new OLED display found on both models, replacing the LED “Liquid Retina” display of the last-generation 11-inch model and mini-LED display of the 12.9-inch version. While the mini-LED display was able to reproduce high-dynamic-range content with high brightness levels down to a specific “zone” of the panel, it also suffered from a phenomenon called “blooming,” where bright objects on a dark background would display a glowing halo just outside of the object. OLED displays feature individually lit pixels, allowing for precise control over the image, alleviating this issue. The panel’s specifications are impressive on their own: 1,000 nits of peak brightness when displaying standard-dynamic-range content, 1,600 nits of peak localized brightness when content is in HDR, a two-million-to-one contrast ratio, and a ProMotion refresh rate from 10 hertz to 120 hertz. The new display, as Apple says, truly is “the most advanced display in a device” of the iPad’s kind. I would argue it’s one of the most advanced displays in a consumer electronics device, period, aside from probably Apple’s own Vision Pro. It truly is a marvel of technological prowess, and Apple should be proud of its work.

Apple allows buyers who purchase a 1- or 2-TB model the option to coat the display in a nano-texture finish for a $100 premium, which will virtually eliminate glare and provide a smoother writing and drawing experience when using the Apple Pencil. The finish is the same as found on the Pro Display XDR and Studio Display, and while I don’t think it is for me, I appreciate the option. (I do wonder how wiping away fingerprints would work, though, since this is the first time Apple has applied the coating to a touch device.) One quirk of the nano-texture coating, however, is that it cannot cover the Face ID sensors, located at the side of the iPad Pro, so the finish stops at the edge of the screen itself, displaying a glossy bezel around the display. I think it looks strange, but this problem couldn’t possibly be alleviated without redesigning Face ID entirely.

Apple has made some noteworthy omissions from the product, however. Most notably, it has removed the ultra-wide lens at the back of the iPad, a lens it added in the product’s 2020 refresh. Personally, I have never once touched the ultra-wide camera, and I don’t know of anyone who did, but it might be missed by some. To compensate, Apple has added a new AI-powered shadow remover to the document scanner in iPadOS, powered by the M4’s improved Neural Engine and a new ambient light sensor, which takes a prominent space in the iPad’s new camera arrangement. I’m unsure about how I feel about its physical attractiveness — there are only so many ways to design a camera on a tablet computer before it gets boring — but I think the swap is worth the trade-off. (The ultra-wide camera at the front added in 2021, which powers Center Stage, has not been removed.) The SIM card slot has also been removed from cellular-equipped models, mirroring its omission from 2022’s iPhone 14 Pro, and the 5G millimeter-wave antenna located at the side has also been axed, reportedly due to low usage.

The new models have both received price increases of $200, with the 11-inch model starting at $1,000, and the 13-inch at $1,300. I think those prices are fair; I expected the increases to be more substantial due to the cost of OLED panels. Base storage has also been bumped accordingly; the new models begin with 256 GB of storage and are configurable up to 2 TB. They ship May 15, just like the iPads Air, and are available for pre-order beginning Tuesday.

Hardware-wise, the new iPads Pro are truly some of the most impressive pieces of hardware Apple has manufactured yet, and I’m very excited to own one. But I can’t help but ask a simple question about these new products: why? Apple has clearly dedicated immense time, energy, and money to these new iPads, and it’s very apparent from the specifications and advertising. Yet when I unbox my new iPad come next week, I’ll probably use it the same way I always have. It won’t be any better at computing than the M2 iPad Pro I’ve owned for the last year and a half. The Worldwide Developers Conference in June is where the big-ticket software announcements come, but just as Parker Ortolani, a product manager at Vox Media, said on Threads, we have collectively been waiting for “the next big iPadOS update” since the first iPad Pro was launched in 2015 — before iPadOS even existed. iPadOS is a reskinned version of iOS, and Apple must change that this year at WWDC. Until then, the new iPads, while spectacular from every imaginable hardware angle, lack a purpose.

Apple Pencil Pro and Magic Keyboard

Apple announced updates to its two most popular accessories for the iPad Air and iPad Pro: the Apple Pencil, and the Magic Keyboard. The second-generation Apple Pencil, first announced in 2018, has remained unchanged since its debut and has been compatible with all high-end iPads since 2020, and the Magic Keyboard, first announced in 2020, has also gone untouched. Both products on Tuesday received major overhauls: Apple debuted the Apple Pencil Pro, a new product with haptic feedback and a touch sensor for more controls, and the new Magic Keyboard, which is now finished in a sturdier aluminum, has a function key row, and features a redesigned hinge. Both products are compatible only with Tuesday’s iPads; consequently, prior versions of the Apple Pencil and Magic Keyboard cannot be used with the new iPads Pro or iPads Air, aside from the USB Type-C Apple Pencil released in October, which remains a more affordable option.

The Magic Keyboard’s redesign, Apple says, makes it a more versatile option for “professional” work on iPadOS. The keys, using the same scissor-switch mechanism as the previous generation, now have a more tactile feel due to the hefty aluminum build, which also adds rigidity for use on a lap — the lack thereof was a pitfall of the older Magic Keyboard. The trackpad is now larger and features haptic feedback, just like Mac laptops, and the hinge is more pronounced, making an audible click sound when shut. The Magic Keyboard also adds a small function row at the very top of the deck, including an Escape key for anyone bullish enough to code on an iPad. (This would’ve been a great time to put Terminal on iPadOS.) While the new additions will undoubtedly add weight to the whole package, I think the trade-off is worth it because it makes the iPad feel more like a Mac. The new Magic Keyboard retails for the same price: $300 for the 11-inch version, and $350 for the 13-inch one. It, again, ships May 15, with pre-orders available Tuesday.

The Apple Pencil Pro, while not as visually striking an upgrade as the Magic Keyboard, does build on the foundation of the second-generation Apple Pencil well. That stylus, which Apple still sells for older iPads, features a double-tap gesture, which allows quick switching between drawing tools, such as the pen and eraser. The new stylus builds on the double-tap feature, adding a touch sensor to the bottom portion of the stalk, which can be squeezed and tapped for more options. Instead of only double-tapping the pencil, users are now able to squeeze it to display a palette of writing tools — not just the eraser. This integration works in apps that support the new PencilKit features in iPadOS; for those that don’t, the double-tap gesture works just as it did before. To select a tool, users simply tap it on the screen with the pencil, as normal.

The Apple Pencil Pro also supports a feature called “barrel roll,” which allows users to move their fingers in a circle around the pencil to finely control its angle on the virtual page, just like someone would do with a real pencil. And when squeezing, double-tapping, or using the barrel roll gesture, a new Haptic Engine added to the pencil will provide tactile feedback for selections. Apple also added Find My functionality to the pencil, though it is unclear if it included Precision Finding, the feature that utilizes the ultra-wideband chip in recent iPhones to locate items down to the inch. (I don’t think it did, since the iPad doesn’t have an ultra-wideband chip.)

The Apple Pencil Pro retails for $130 — the same price as the second-generation Apple Pencil — and is available for pre-order starting Tuesday, with orders arriving May 15. The more comedic aspect of this launch, however, is the new Apple Pencil Compare page on Apple’s website, which looks genuinely heinous. Apple now produces and sells four different Apple Pencils, all with separate feature sets and a hodgepodge of compatibility checks. To review:

  • Apple Pencil Pro: The latest version is compatible with the M2 iPads Air and M4 iPads Pro announced Tuesday. It retails for $130.
  • Second-generation Apple Pencil: The older version of the Apple Pencil is compatible with iPads Pro from 2018 and newer and the fourth- and fifth-generation iPads Air from 2020 and 2022. It is not compatible with any of the new iPads announced Tuesday. It also sells for $130.
  • USB-C Apple Pencil: The new USB-C Apple Pencil from October, which does not have double-tap or pressure sensitivity, is compatible with every iPad with a USB-C port, including the latest models. It is available for $70.
  • First-generation Apple Pencil: This pencil is for legacy iPads, including the now-discontinued ninth-generation iPad. It costs $100.

No reasonable person will choose to remember that information, so Apple has assembled an Apple Pencil compatibility page, which is absolutely abhorrent. There is even a Contact Us link on the page for those who need help making sense of the chaos. “Who wants a stylus?”


As I have stated many times throughout this article, I think the new hardware announced Tuesday is spectacular. The new iPads Air fit in well with the lineup, the 10th-generation iPad has received a price reduction of $50, replacing the archaic ninth-generation model which had a Home Button and Lightning port, and the new iPads Pro are marvels of engineering. I think all models are well-priced, I like the new design of the Magic Keyboard, and I’m thankful the Apple Pencil has been updated.

But none of the above overshadows how disappointed I am in the iPad’s software, iPadOS. As good as the new hardware may be, I don’t think I will use it any differently than I use my current iPad now. That’s a shame — for how much work was put into Tuesday’s announcements, the bespoke software for the iPad should do better. Until then, the iPad will remain just another product in Apple’s lineup — nothing more, and nothing less.

A correction was made on May 5, 2024, at midnight: An earlier version of this article stated that the new M2 iPad Air supports the second-generation Apple Pencil. That is not true; it only supports the USB-C Apple Pencil and the new Apple Pencil Pro. I regret the error.

A correction was made on May 14, 2024, at 2:11 a.m.: An earlier version of this article stated that the USB-C Apple Pencil was released in March. It was actually released in October of last year. I regret the error.

  1. In Gurman we trust. I’ll never make the mistake of doubting him again. ↩︎

  2. I recommend reading my “Wonderlust” event impressions from September to learn more about processor binning. Skip to the section about the A17 Pro. ↩︎

Semafor Interviews Joe Kahn of The New York Times

Ben Smith for Semafor interviewed Joe Kahn, the executive editor of The New York Times. Here is what Kahn had to say in response to Smith’s question about The Times’ role in saving democracy:

It’s our job to cover the full range of issues that people have. At the moment, democracy is one of them. But it’s not the top one — immigration happens to be the top [of polls], and the economy and inflation is the second. Should we stop covering those things because they’re favorable to Trump and minimize them? I don’t even know how it’s supposed to work in the view of Dan Pfeiffer or the White House. We become an instrument of the Biden campaign? We turn ourselves into Xinhua News Agency or Pravda and put out a stream of stuff that’s very, very favorable to them and only write negative stories about the other side? And that would accomplish — what?

I think The New York Times has completely misunderstood what “independent journalism” is. Kahn and other Times journalists, whose work I read regularly, think of us — those accusing The Times of journalistic malpractice — as wanting them to favor the Biden administration or to be against former President Donald Trump somehow. That couldn’t be further from the truth — it is my firm belief that news shouldn’t be biased toward a political candidate.

News, however, should be biased toward the truth, and The Times warps the truth however it wants to fit the public’s narrative. That’s exactly what Kahn is doing here by using the polls as a determinant for what to cover and how to cover it. I understand the core message: that America’s most respected newspaper should cover America’s problems. But, oftentimes, America’s problems and the way the public interprets them are disconnected from reality. It is the job of the country’s newspapers of record to influence public opinion, not to report only on what Americans seem to care about.

It’s the job of the news media to report the facts without subjectivity, and Kahn clearly knows this, restating it multiple times throughout the interview. But Kahn also let slip this piece of truth: “I think the general public actually believes that he’s responsible for these wars, which is ridiculous, based on the facts that we’ve reported,” referring to President Biden. If the public, by Kahn’s own admission, is so foolish as to believe Biden started the wars in Europe and the Middle East, why should The Times’ newsroom cover reality through the public’s (incorrect) lens, as Kahn says The Times is doing?

The Times’ job is to cover reality, regardless of whether it favors the incumbent or his predecessor. Currently, it’s not doing that. It’s warping the news to please its audience, which is not news-making. Once again, my request is not for The Times to be a knight defending democracy by praising Biden’s every move. I want it to be objective in its reporting. Currently, it isn’t — and I feel like that is on purpose.

AI at Next Week’s Apple Event?

Apple announced its earnings for the second quarter on Thursday, and Tim Cook, the company’s chief executive, gave an interview to CNBC. CNBC wrote the following:

Cook also said Apple has “big plans to announce” from an “AI point of view” during its iPad event next week as well as at the company’s annual developer conference in June.

I don’t even understand why this was reported on, because artificial intelligence is the new craze both in Silicon Valley and on Wall Street. Of course the chief executive of the world’s second-largest technology company — which reported revenue down 4 percent this quarter — would try to pump his stock price, and of course he would do that by saying there will be an AI-related announcement at next week’s hotly anticipated Apple event. It makes logical sense from a business perspective: If Cook can persuade investors to hold off on dumping Apple stock this week, he can launch new iPads next week, point to the sales numbers, and watch the stock climb again. That is his job.

Later, CNBC retracted its original quote but somehow gave the full context to Zac Hall, editor at large at 9to5Mac:

We’re getting into a period of time here where we’re extremely excited like I’m in the edge of my seat literally because next week, we’ve got a product event that we’re excited about. And then just a few weeks thereafter, we’ve got the… Worldwide Developers Conference coming up and we’ve got some big plans to announce in both of these events. From an AI point of view…

Cook is not saying there will be AI-related announcements at these events; he is just saying (a) that there are “big plans” and (b) there will be announcements some time between now and the end of eternity “from an AI point of view.” Those are two separate statements — it is foolish to conflate them, because Cook is well-trained before he sits in front of the media. Apple never reveals what it will announce before an event, even when it would be in the interest of the stock price.

So, that all raises the question: Will there be AI at next week’s event or not? It’s impossible to say conclusively, but I think there will certainly be mentions of AI during the presentation. However, I do not believe Apple will announce AI software of its own just a month before WWDC, where software is usually debuted. I imagine the AI references will be limited to passing mentions of how the new iPads Pro are “great for AI computing” and how you can run AI models with apps on the App Store, just like the “Scary fast” Apple event from October, where the company announced the M3 MacBooks Pro. The mentions will exist to please investors and to hold them off just a bit longer for WWDC, where the big-ticket AI features will be introduced via iOS 18.

Next week’s keynote will not be a preview of AI features — or at least, so I think. Instead, it looks like it’ll serve as a filler event to build anticipation for the true announcements coming in the summer, while also finally refreshing the iPads, which is long overdue. This scenario also takes into account Mark Gurman’s report for Bloomberg on Sunday that said Apple will ship the M4 in the new iPads Pro: M4 or not, this event is slated to be hardware-focused, and I think the only AI references next week will exist to appease Wall Street. My final take: No AI at next week’s event.

The Rabbit R1 is Just an Android App

Mishaal Rahman, reporting for Android Authority on Tuesday:

If everything an AI gadget like the Rabbit R1 can do can be replicated by an Android app, then why aren’t these companies simply releasing an app instead of hardware that costs hundreds of dollars, requires a separate mobile data plan to be useful, and has terrible battery life? It turns out that’s exactly what Rabbit has done… sort of.

See, it turns out that the Rabbit R1 seems to run Android under the hood and the entire interface users interact with is powered by a single Android app. A tipster shared the Rabbit R1’s launcher APK with us, and with a bit of tinkering, we managed to install it on an Android phone, specifically a Pixel 6a.

Once installed, we were able to set up our Android phone as if it were a Rabbit R1. The volume up key on our phone corresponds to the Rabbit R1’s hardware key, allowing us to proceed through the setup wizard, create a “rabbithole” account, and start talking to the AI assistant. Since the Rabbit R1 has a significantly smaller and lower resolution display than the Pixel 6a, the home screen interface only took up a tiny portion of the phone’s display. Still, we were able to fire off a question to the AI assistant as if we were using actual Rabbit R1 hardware, as you can see in the video embedded below.

The Rabbit R1, just like the Humane Ai Pin, is nothing more than a shiny object designed to attract hungry venture capitalists. The entire device is an Android app, a low-end MediaTek processor, and a ChatGPT voice interface wrapped up in a fancy orange trench coat — in other words, a grift that retails for $200. I’ve said this time and time again: These artificial intelligence-powered “gadgets” are VC money funnels whose entire job is to turn a profit and then disappear six months later when Apple and Google add broader AI functionality to their mobile operating systems. In the bustle of the post-October 2022 AI sphere, Rabbit raised a few million dollars in Los Angeles, threw together an Android app with a rabbit animation, bulk-bought some cheap off-the-shelf electronics from China, engineered a bright orange case, put the parts together, made its founder dress up like an off-brand Steve Jobs, and poof, orders started flooding in by the thousands. Ridiculous.

The Rabbit R1, in many ways, is more insulting than the Humane Ai Pin, which I’ve already bashed enough. It is significantly more affordable, priced at $200 with no subscription — unlike Humane’s $700, $24-a-month product — but it is quite literally worse in every metric than the Ai Pin, the product of Rabbit’s chief rival VC funnel. The entire device, as Marques Brownlee, a YouTuber better known as MKBHD, demonstrated in his excellent review, is a ChatGPT wrapper with an ultra-low-end camera and a knob — or wheel, rather — used in favor of a touch screen presumably to make it seem less like a smartphone. In practice, it is a bad, low-end smartphone that does one thing — and only one thing — extraordinarily poorly, consistently flubbing answers and taking seconds to respond. It is a smartphone that does everything poorly aside from looking great. (Teenage Engineering designed the Rabbit R1; I’ll give the product design props.) I am astonished that we are living in a world where this $200 low-end Android smartphone is receiving so much media attention.

Rahman contacted Jesse Lyu, Rabbit’s chief executive and co-founder, for comment on his article, and Lyu, grifter-in-chief at Rabbit, naturally denied the accusations in the stupidest way possible. I don’t even understand how this made it to publication; it’s genuinely laughable. Lyu’s justification for the device is that Rabbit sends data and queries to servers — presumably its own servers — for processing. Here is a non-comprehensive list of iOS apps with large language models built in that send data to the web for processing: OpenAI’s ChatGPT, Microsoft Copilot, Anthropic Claude, and Perplexity — also known as every single AI app made by a large corporation, because it is all but impossible to run LLMs on even the most sophisticated, powerful smartphone processors, let alone a random inexpensive MediaTek chip like the one found in the R1. The Rabbit R1 is an Android app that exchanges data with the internet over a cellular radio and some network calls. Any 15-year-old could engineer this in two weeks from the comfort of their bedroom.

I aggressively smeared the Humane Ai Pin not because I thought it was a grift, but because I thought it had no reason to exist. I thought and still think that Humane built an attractive piece of hardware and that the company still has conviction in creating a product akin to the smartphone in the hopes of eventually eclipsing it. (I think this entire idea is flawed, and that Humane will eventually go bankrupt, but at least Humane’s founders are set on their ambition.) Rabbit as an entire company, by stark contrast, is built on a throne of lies and scams: It came out of the woodwork randomly during the Consumer Electronics Show in January after raising $10 million the month prior from overzealous VC firms, threw a launch party in New York with influencers and press alike, then shipped an Android app to consumers for $200. It’s an insult to hard-working, dedicated hardware makers; it makes a mockery of true innovators in our very complicated technology climate in 2024. These “smartphone replacement” VC attractions ought to be bankrupt by, if not right after, June.

Ridiculous Rumor of the Week: M4 Chips in New iPads Pro

Ridiculous, but quite possible. Mark Gurman, reporting for Bloomberg in his Power On newsletter:

Earlier this month, I broke the news that Apple is accelerating its computer processor upgrades and plans to release the M4 chip later this year alongside new iMacs, MacBook Pros, and Mac minis. The big change with the M4: A new neural engine will pave the way for fresh AI capabilities. Now here’s another development. This year’s Macs may not be the only AI-driven devices with M4 chips.

I’m hearing there is a strong possibility that the chip in the new iPad Pro will be the M4, not the M3. Better yet, I believe Apple will position the tablet as its first truly AI-powered device — and that it will tout each new product from then on as an AI device. This, of course, is all in response to the AI craze that has swept the tech industry over the last couple years.

By introducing the new iPad Pro ahead of its Worldwide Developers Conference in June, Apple could lay out its AI chip strategy without distraction. Then, at WWDC, it could focus on how the M4 chip and new iPad Pros will take advantage of the AI software and services coming as part of iPadOS 18 later this year. I fully expect Apple to position the A18 chip in the iPhone 16 line as built around AI as well.

To be fair, though, these new products aren’t engineered and developed entirely around AI. This is partly about marketing. Hardware with even more impressive capabilities is further out. As I’ve reported, Apple is working on a table-top iPad connected to a robotic arm, as well as a home robot.

For context, the M3 line of processors debuted in late October last year, so it has only been roughly six months since the latest generation of Apple’s high-end processors came to market. Every single bone in my body disagrees viscerally with every aspect of this rumor — it does not make sense from any logical perspective whatsoever because there is no way Apple would sell an iPad Pro that is faster and more capable than the MacBook Air and base-model MacBook Pro. It would be genuinely embarrassing for Apple to sell a device that runs iPadOS — a moderately enhanced version of iOS — with a more powerful chip than the $1,600 MacBook Pro.

That bit of illogical thinking is, however, small compared to the timeframes we’re working with here: Apple has never produced two full generations of Apple silicon only six months apart. With Taiwan Semiconductor Manufacturing Company’s factories booked trying to meet M3 3-nanometer process node demand, I have a tough time believing TSMC can fabricate more 3-nm chips to meet the demand for iPads Pro and Mac laptops. Also, the M3 processor is based on the architecture of the A17 Pro, which first debuted in September in iPhones 15 Pro, so the M4 would have to be based on the eventual A18 Pro (or A18, whatever it may be called), which has not even been announced yet — it will be announced this September. Historically, Apple has always based the Mac’s Apple silicon chip on that year’s iPhone chip.

And about the “artificial intelligence” focus: I genuinely can’t see Apple marketing the new iPads Pro — slated to debut at Apple’s virtual May 7 event — as “AI-focused” without first announcing AI features as part of iPadOS at the Worldwide Developers Conference in June. What would potential buyers do with an AI-focused Neural Engine that no iPadOS software can exploit? This entire rumor bends my mind with how impossible and brazen it seems. Historically speaking, the M4 has no business being in these new iPads Pro — period: no publicly available software can take advantage of the new Neural Engine on iPadOS, the M4 would stretch TSMC’s fabrication facilities to their maximum, and a May debut would not line up with Apple’s product timelines. It’s completely nonsensical.

I trust Gurman — I truly do. There is not a time in recent memory when he has been wrong. But Gurman has been seesawing non-stop on this Apple event, which he earlier said wouldn’t even be an event in the first place. He also said that the iPads would be announced in March or April, but the event is taking place in May. Though he corrected some of these rumors later on, I heavily doubt his reporting here. If Apple truly does announce the M4 chip in the new iPads Pro in May, I won’t be shocked, because Gurman said it — but for now, I’m choosing to take this rumor with a grain of salt.

Also, one final note from Gurman’s newsletter this week, serving as some follow-up to my writing on Saturday about Apple Vision Pro:

Vision Pro demand has dropped considerably at many Apple stores. One retail employee says they haven’t seen one Vision Pro purchase in weeks and that the number of returns equaled the device’s sales in the first month that it was available.


My Answers to Apple’s ‘Market Research’ Vision Pro Survey

Apple emailed me on Friday asking how I’ve been enjoying my Apple Vision Pro. Here is what I wrote to the company.

Apple: Tell us why you’re not satisfied with your Apple Vision Pro.

Me: My main problem with Apple Vision Pro is its lack of content. Plain and simple, there is not much to do with it. I bought it because part of my job is to write about technology, but I probably wouldn’t have otherwise. A few months in, Apple Vision Pro still suffers from a lack of a use case. Everyone knows what to do with their iPhone, Mac, or iPad — but Apple Vision Pro? You might watch a movie, play a game or two, or fiddle around with the operating system, visionOS. But it’s restricted computing-wise in the same way the iPad is, which makes it impossible to use as a Mac replacement; it’s not shareable in the way a TV is; and it’s not nearly as easy to use as the iPad or iPhone, both of which work the moment you pick them up and tap the screen.

Apple Vision Pro, from the get-go, is complicated to use. You have to make sure it has enough charge — its standby battery life is poor — then connect the battery, adjust the strap, place it on your head, adjust your hair, adjust the strap again, confirm it fits well, and only then unlock it and begin using it. For all of that to be worth it, there needs to be a seriously compelling reason to put it on. If something can be done 90 percent as enjoyably with an iPad or iPhone, most people — including myself — will just use that instead. Each time I reach for it, I run the same calculus: Is all of that effort worth the extra 10 percent of joy?

What Apple wants people to think is that Apple Vision Pro does more than a traditional computer — not that it does what a traditional computer does, but better. In practice, Apple Vision Pro does what a normal computer or mobile device does — but dramatically worse. Not only is it a hassle to use most of the time, but its software — based on iPadOS — is crippled in the same ways iPadOS is. And with the lack of enthusiasm from third-party developers, the product feels even more limited.

None of this is to say Apple Vision Pro is a bad product — it clearly isn’t. Every time I use it, I generally enjoy fiddling with new applications and experiences. But Apple sells many computers in various form factors, and most of those devices do Apple Vision Pro’s job just well enough that wearing the headset usually isn’t worth the effort. This is a solvable problem: Just make more content. Apple needs to incentivize third-party developers to build more experiences, produce more content itself, and improve the software to make computing better. For example, multitasking is nearly impossible on visionOS, even though the inherent nature of Apple Vision Pro — effectively infinite screen space — could make it a more capable Mac. Why is managing windows such a hassle on visionOS when they could be spectacular on this revolutionary spatial computer?

I particularly enjoyed the Major League Soccer highlight reel published in March and some of the other Apple-made immersive videos available through the TV app on visionOS. There should be much, much more of that kind of content available for paying subscribers. I know Apple has enthusiasm for this product, but looking at visionOS does not make that apparent.

Apple: What types of video content are you most interested in watching on Apple Vision Pro?

Me: Immersive video content, such as the MLS soccer highlight reel, available to all Apple Vision Pro users. Flat, 2D content isn’t as appealing because other devices can view it just fine, but immersive content is absolutely outstanding.

Apple: What one thing, if anything, would you add to or change about Apple Vision Pro?

Me: Make it lighter. The weight adds so much discomfort. It’s hard to use while lying down, uncomfortable while perfectly upright, and moderately uncomfortable while slightly reclined — which is currently the most advisable position. It distributes its weight across the cheeks and forehead evenly, and it still feels terrible. It’s fatiguing to wear for long periods of time. It needs to be lighter.

The ByteDance Ban is Here

Sapna Maheshwari, David McCabe, and Cecilia Kang, reporting for The New York Times:

Just over a year ago, lawmakers displayed a rare show of bipartisanship when they grilled Shou Chew, TikTok’s chief executive, about the video app’s ties to China. Their harsh questioning suggested that Washington was gearing up to force the company to sever ties with its Chinese owner — or even ban the app.

Then came mostly silence. Little emerged from the House committee that held the hearing, and a proposal to enable the administration to force a sale or ban TikTok fizzled in the Senate.

But behind the scenes, a tiny group of lawmakers began plotting a secretive effort that culminated on Wednesday, when President Biden signed a bill that forces TikTok to be sold by its Chinese owner, ByteDance, or risk getting banned. The measure, which the Senate passed late Tuesday, upends the future of an app that claims 170 million users in the United States and that touches virtually every aspect of American life.

For nearly a year, lawmakers and some of their aides worked to write a version of the bill, concealing their efforts to avoid setting off TikTok’s lobbying might. To bulletproof the bill from expected legal challenges and persuade uncertain lawmakers, the group worked with the Justice Department and White House.

And the last stage — a race to the president’s desk that led some aides to nickname the bill the “Thunder Run” — played out in seven weeks from when it was publicly introduced, remarkably fast for Washington.

The “Thunder Run” produced a McCarthyist First Amendment violation straight out of the Second Red Scare. This law passed only because it was attached to a much-needed foreign aid appropriations bill funding Ukraine and Israel and providing billions of dollars in humanitarian aid to vulnerable populations. It was included to ensure broad support within the Republican Party — a compromise House Speaker Mike Johnson of Louisiana had to make so members of his party would back the aid package. Republicans aren’t known for being the smartest of people, but it’s wrong to place the blame solely on their antics this time. Moderate Democrats also had a hand in pushing the bill over the finish line, effectively stripping half the country of its First Amendment rights.

The government has yet to provide concrete evidence of a national security threat, which is strange, because national security is the only sound legal argument for this law. ByteDance is effectively controlled by the Chinese Communist Party, and with access to hundreds of millions of Americans’ phones, China could plausibly compromise the security of the United States. Yet there is zero evidence of this actually happening — I’m not saying it isn’t happening, but there is no evidence for the public to see. When this law is challenged in court — and it absolutely will be — this will be the crux of the case, because silencing speech on the grounds that “terrorist content is promoted” is about as flagrantly unconstitutional as Congress can get. From the First Amendment of the Constitution:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

That is the entire amendment, and it needs nothing more because it’s extremely descriptive. Congress cannot make a law “abridging the freedom of speech” — period, no matter what that speech is — unless doing so serves the government’s duty to protect the American people. If there is no attempt to prove that this law serves that duty, the bill should get tossed straight into a fire. There is a reason that yelling “I’m going to bomb this airplane!” in an airport is not protected speech, and the same logic applies here: The government must provide tangible proof that TikTok’s continued Chinese ownership poses a danger equivalent to someone yelling “I’m going to bomb this airplane!”

When the government makes that argument, it cannot use the speech on TikTok as proof, because speech alone, no matter its content, cannot impede national security. For example, 4chan is filled to the brim with unsavory, even illegal speech, but the government can’t directly punish the platform’s owner for “threatening national security,” thanks to Section 230 of the Communications Decency Act of 1996, which gives platform owners immunity from liability for what their users say. Forcing the divestiture of a company because of speech on a platform — illegal or not; voicing support for terrorists or communists is not itself illegal — is a blatant violation of the First Amendment, as the Supreme Court has affirmed case after case.

That leaves just one more question: Why won’t ByteDance divest TikTok? It’s a good question I’ve been pondering, too, because I’m no fan of Chinese control over one of the largest social media platforms in the United States. I want a TikTok divestiture — I just don’t want it forced by the government. I’ve come to this conclusion: ByteDance won’t ever divest TikTok not because it doesn’t want to, but because Chinese ownership and control mean it legally cannot. The CCP is a truly authoritarian government, and TikTok is its best tool for manipulating the public’s image of it, so it’s willing to let a Chinese company suffer financially if that means projecting strength toward the United States. The CCP doesn’t care about the money ByteDance makes — it’s communist — but it does care about the data ByteDance generates. It wants power, and its best lever for changing the American public’s perception of it is threatening the public’s favorite social media platform.

Naturally, if TikTok vanishes in a year — a prospect I still think is thoroughly unlikely — Americans will place the blame solely on their government, not on TikTok or China. And that point of contention between Americans and their government is exactly why China won’t allow a divestiture: The Chinese government wants power and strength, and it wants to change how Americans across the Pacific perceive it. This bill just handed Beijing a brand-new, effective strategy. Nice work, Washington — you’ve been outsmarted again.

TSMC’s American ‘Debacle’

Viola Zhou on Tuesday published an extremely thorough, quite lengthy piece for Rest of World about the cultural differences and headaches inside Taiwan Semiconductor Manufacturing Company’s new semiconductor fabrication plant — a “fab” — in Arizona. This was my favorite part, which truly exemplifies the differences between work cultures in the East and the West:

At Fab 18, nearly all communication took place in Taiwanese and Mandarin Chinese, the two most widely spoken languages in Taiwan. The Americans found it difficult to understand meetings, production guidelines, and chatter among local engineers. In theory, every American was supposed to have a Taiwanese buddy — a future Arizona worker who would help them navigate the workplace. But the Americans said their buddies were often too busy to help with translations, or else not familiar enough with the technical processes because they were freshly transferred from other production lines.

Many trainees, including Bruce, relied on Google Translate to get through the day, with mixed results. Technical terms and images were hard to decipher. One American engineer said that because staff were not allowed to upload work materials to Google, he tried to translate documents by copying Chinese text into a handwriting recognition program. It didn’t work very well…

TSMC’s work culture is notoriously rigorous, even by Taiwanese standards. Former executives have hailed the Confucian culture, which promotes diligence and respect for authority, as well as Taiwan’s strict work ethic as key to the company’s success. Chang, speaking last year about Taiwan’s competitiveness compared to the U.S., said that “if [a machine] breaks down at one in the morning, in the U.S. it will be fixed in the next morning. But in Taiwan, it will be fixed at 2 a.m.” And, he added, the wife of a Taiwanese engineer would “go back to sleep without saying another word.”

During their visit, the Americans got a taste of the company’s intense work culture. To avoid intellectual property leaks, staff were banned from using personal devices inside the factory. Instead, they were given company phones, dubbed “T phones,” that couldn’t be connected to most messaging apps or social media. In one department, managers sometimes applied what they called “stress tests” by announcing assignments due the same day or week, to make sure the Americans were able to meet tight deadlines and sacrifice personal time like Taiwanese workers, two engineers told Rest of World. Managers shamed American workers in front of their peers, sometimes by suggesting they quit engineering, one employee said.

This story surfaced a challenge with chip manufacturing in the United States that I hadn’t considered until now: cultural differences. Semiconductor manufacturing — like most manufacturing — is a very male-dominated industry, and in the East it also happens to be low-paying with frankly atrocious working conditions, so bringing that industry to the West, where workers expect shorter hours and more humane treatment, is difficult. Later in the story, Zhou reports how female co-workers were mistreated, how employees weren’t permitted to bring their phones to work, and how the Taiwanese managers and American workers were disconnected from each other. I suggest you read the piece in its entirety.

The obvious solution is for the managers themselves to be American, but that’s infeasible for now because someone has to train those managers first. Unlike other multinational corporations that have operated in the United States for decades, TSMC needs to bring Americans up to a level of seniority before they can take over the plants entirely. This creates a chicken-and-egg bottleneck: TSMC needs senior American workers to serve as supervisors, but Americans keep quitting before they get there because of the Taiwanese managers. It’s a tough problem, but one I think can be ironed out with some changes from C-suite leadership.

Unlike the mid-level managers who run the day-to-day operations of the Arizona plant, C-suite executives in Taiwan are in a position to rectify this issue by training the Taiwanese managers better. The onus shouldn’t be on the Americans to change — the Taiwanese need to adapt to the Americans’ way of working. The United States will never be Taiwan or China, and I think TSMC’s leadership understands that. The U.S. government is providing the funding and the clients are providing the orders — now it’s time for TSMC to change how it manages its employees and thereby improve its recruiting.

These managers need a change of attitude, because adapting to workers is inherently part of management’s job. If TSMC doesn’t change how it treats its workers, the projects will fall apart fast. More funding comes once projects get off the ground, especially if President Biden wins re-election this fall, but for the projects to succeed, workers need to be satisfied — and TSMC’s Glassdoor ratings are not what you’d expect from the world’s No. 1 semiconductor manufacturer. Americans care a lot about the culture of the company they work for, and TSMC needs to understand that and adapt to American work culture.

Google Fires 28 Employees Protesting Involvement With Israel

Alex Heath, reporting a politically heated story for The Verge on Wednesday:

Google fired 28 employees in connection with sit-in protests at two of its offices this week, according to an internal memo obtained by The Verge. The firings come after 9 employees were suspended and then arrested in New York and California on Tuesday.

The fired employees were involved in protesting Google’s involvement in Project Nimbus, a $1.2 billion Israeli government cloud contract that also includes Amazon. Some of them occupied the office of Google Cloud CEO Thomas Kurian until they were forcibly removed by law enforcement. Last month, Google fired another employee for protesting the contract during a company presentation in Israel.

In a memo sent to all employees on Wednesday, Chris Rackow, Google’s head of global security, said that “behavior like this has no place in our workplace and we will not tolerate it…”

He also warned that the company would take more action if needed: “The overwhelming majority of our employees do the right thing. If you’re one of the few who are tempted to think we’re going to overlook conduct that violates our policies, think again. The company takes this extremely seriously, and we will continue to apply our longstanding policies to take action against disruptive behavior — up to and including termination.”

I say this as a liberal, somewhat progressive person politically: Google had every right to fire every last one of these “protestors.” Gaby Del Valle for The Verge also reported earlier on Wednesday that the demonstrators occupied Google’s offices illegally, even after they were asked to leave by management, which led the company to call the police to arrest nine of them. A man who works for Google said to the protestors, as quoted by The Verge: “We’re asking you to leave again for the last time.” Then, when they stayed, a police officer offered the demonstrators a plea deal of sorts: “Listen, we’ll let you walk out the door right now — it’s a non-issue if you’re willing to go. If not, you’re going to be arrested for trespassing.”

Every one of the 28 employees fired Wednesday evening was given multiple chances to leave a secured, locked building they were not permitted to use for demonstrations, and they flagrantly defied orders from a representative of the building’s owner — who also happens to be their employer. If that isn’t grounds for termination, I don’t know what is. This behavior would get any employee anywhere fired, because the protestors engaged in illegal activity: trespassing, which is occupying a building after the owner has repeatedly instructed you to leave. When that owner is your employer, termination is a fair punishment.

Those complaining about “free speech,” like many right-wingers, don’t actually understand what free speech means in this context. Employees are permitted to protest, including against their employer, for a variety of reasons, and that speech is protected as long as the protestors don’t cause a disturbance — “disorderly conduct,” in criminal law. But trespassing goes beyond disorderly conduct: Occupying a building after being told to leave is a criminal offense in its own right. From what is known currently, it doesn’t seem that any protestors were charged with crimes — they were simply fired for staging a rogue protest against their employer. Trespassing and causing a disturbance are more than enough reason to fire an employee. Even an employee who randomly stood up from their desk and began shouting would be reprimanded.

Online activists are calling this protest “peaceful,” but breaking the law is exactly what makes a protest the opposite of peaceful. No matter the cause — Black Lives Matter, LGBTQ rights, or opposition to Israel’s military campaign in Gaza — a protest in a building where demonstrators are unauthorized to be is illegal, and therefore punishable. And the First Amendment restrains the government, not private employers: Google disciplining an employee or contractor for this behavior is perfectly legal in the United States. Nobody has a right to be upset about how the protestors were treated in this case — their cause’s importance can be debated until the end of time, but their actions were undoubtedly flawed.

I truly cannot believe people who managed to land a job at one of the world’s largest technology firms were so reckless as to occupy the private office of their chief executive, as if they were rioters on January 6, when a mob of Republican supporters stormed the Capitol in Washington to stop the certification of the 2020 election. The blatant lawlessness exhibited in this protest is appalling and should be condemned in the strongest terms. A functioning democracy necessitates the right to protest, but this wasn’t any ordinary protest — it was a stunning spectacle of incompetence, mindlessness, and arrogance unlike any displayed in Silicon Valley before. “Big Tech” employees have rejected their employers’ policies via many walkouts, sit-ins, and other protests, but they have always done so peacefully and respectfully, inspiring change in a dignified manner. This was the complete opposite.

Condemning the protestors isn’t an endorsement of Israel’s actions in Gaza, Google’s deal with the Israeli government, or the U.S. government’s foreign policy with respect to Jerusalem. Anyone who appreciates dignity and the right to protest in the workplace should be ashamed of Wednesday’s events because they demonstrate the rogue, nonsensical mentality of pro-Palestinian mobsters who are taking rights away from peaceful protestors with their illegal actions. In addition to breaking trespassing laws, they chanted “From the river to the sea, Palestine will be free!” a slogan deemed antisemitic by many, presumably including Jewish people at Google, and pinned banners to the wall with antisemitic language. Google has the right to maintain company policy and remove employees who disrupt the workplace with hateful messages, regardless of what political ideologies those messages are linked to. Google is a company made of people, and if they feel disrespected, they have the right to take action.

For the sacred right to peacefully protest in the United States to remain intact, protestors need to remain respectful and mindful of their neighbors. If they aren’t, the country risks another January 6 — but this time, much, much worse. Political violence and lawlessness are never acceptable.

Wyden, Lummis: Warrantless FISA Searches Are Authoritarian

Gaby Del Valle, reporting for The Verge:

Sens. Wyden and Lummis introduce an amendment limiting FISA’s warrantless wiretapping powers. The amendment would reverse a provision included in the recent House bill reauthorizing Section 702 of FISA that expands the definition of “electronic communications service provider,” which critics say would force Americans to essentially spy for the government.

“Forcing ordinary Americans and small businesses to conduct secret, warrantless spying is what authoritarian countries do, not democracies,” Wyden said in a statement.

The House of Representatives recently re-authorized Section 702 of the Foreign Intelligence Surveillance Act, which, ironically, allows the Federal Bureau of Investigation to perform clandestine searches of American citizens on American soil without a warrant or their consent. The government justifies this by calling FISA a critical national security tool for preventing foreign attacks on the United States, but the act is mostly used to surveil Americans, not foreigners. If FISA is not re-authorized by Saturday, the government will no longer be able to spy on its citizens — at home and abroad — however it pleases, without even a warrant to justify its actions.

Senator Ron Wyden, Democrat of Oregon, and many progressive Democrats in the House who voted, unsuccessfully, against re-authorizing FISA have expressed concerns about this warrantless searching, but it nevertheless seems increasingly likely that Section 702 will be re-authorized by the weekend. The new version of FISA, which extends the program for two years, also includes an amendment, courtesy of House Republicans, that expands the definition of “electronic communications service providers” — the companies required to hand over data when the government requests it. That definition has until now covered smartphone makers and other telecommunications companies, like Apple and Google, but the amendment the House passed also sweeps in cloud computing firms.

This effectively means the government will be permitted to ask Amazon Web Services for all the user data associated with a single account — which could cover practically a person’s entire business life, because most people and small businesses use one cloud provider to host their website and other business tools. Google Cloud, AWS, and even Apple’s iCloud servers are all susceptible to this unprecedented, warrantless searching, which, as Wyden and Senator Cynthia Lummis, Republican of Wyoming, say, is precisely what authoritarian regimes do. Russia and China employ this exact kind of surveillance to silence their people, and it is absolutely astounding to me that nobody has taken this law to court as a violation of the Fourth Amendment.

If the new amendment is passed — which seems increasingly likely given the bipartisan support the latest bill has enjoyed in both the House and Senate — it would open Americans up to a dangerous new front of government surveillance akin to that of authoritarian regimes like China. Unlike many progressives and far-right “Make America Great Again” Republicans, I don’t think Congress should axe FISA entirely, but there should be an amendment preventing warrantless searches. In an ideal world, Wyden and Lummis’ Senate amendment would pass, because allowing the government to surveil data stored in the cloud without a warrant is blatant government overreach.

Get Ready, Everybody. The E.U. is About to Do Something Stupid.

Foo Yun Chee, reporting for Reuters:

Meta Platforms and other large online platforms should give users an option to use their services for free without targeted advertising, EU privacy watchdog the European Data Protection Board said on Wednesday.

The EDPB’s opinion came after it was asked by national privacy regulators in the Netherlands, Norway, and Germany to look into consent or pay models adopted by large online platforms such as Meta.

“If controllers do opt to charge a fee for access to the ‘equivalent alternative’, they should give significant consideration to offering an additional alternative. This free alternative should be without behavioural advertising,” EDPB said in a statement.

The board’s ruling on Wednesday gives the European Commission, the European Union’s executive body, the ability to force Meta and “other large online platforms,” like Google, to provide their services for free without any targeted advertising. In other words, this outlandish decision allows the commission to dictate how a corporation makes money, even if the “recommended” method is non-viable. The non-technical equivalent of this ruling is the government telling a baker they can’t price their bread at $5 because it thinks that is too expensive. The government doesn’t actually know that the ingredient cost per loaf is $4.75, but it also doesn’t care to find out, so it just punishes the baker for selling the bread at a 25-cent profit even though the baker needs the 25 cents to continue their operations. It is entirely unfair.

The European Commission has been waiting on this ruling since March so it can begin to force Meta to offer non-targeted advertising for free, instead of forcing consumers to subscribe to the ad-free versions of Instagram and Facebook as Meta currently does in response to the Digital Markets Act, which went into effect in early March. E.U. users can choose to pay Meta 10 euros (around $11) monthly to remove all ads — including the targeted ones — because the bloc’s DMA forces “Big Tech” to offer users the ability to disable targeted advertising somehow. The subscription is Meta’s way of sneaking around the true intention of the legislation, which is for companies to offer a simple toggle switch for users to disable targeting for free. The commission didn’t like Meta’s clever idea, so it complained to the EDPB, which, naturally, ruled in the commission’s favor.

It’s important to keep in mind that the DMA does not specifically state that this scheme — known inside Brussels as a “pay or OK” model — is illegal, or that it shouldn’t be employed. The law simply requires that there be some way for users to opt out of targeted advertising, even if that method is asking for payment, since the legislation says nothing about payment at all. But commissioners, prominently Margrethe Vestager, the commission’s antitrust chief, have decided Meta’s perfectly legal compliance with the DMA is unacceptable, so they have begun re-interpreting the law to enforce it at their whim. It’s the same old tactic the European Union has been playing for months. As I wrote in March, the commission is “playing a one-sided, rigged game while laughing manically in the corner at everyone falling face-flat on the ground.”

Back to Wednesday: With pesky legality out of the way, the commission is now free to push Meta against the wall and squeeze until it offers its services practically for free, since Meta’s primary revenue source is effective ad targeting. Without targeted advertisements, Meta’s average revenue per user drops significantly, because advertisers want their products placed in front of prospective buyers’ eyes — but thanks to the European Union, if Meta wants to continue operating in the bloc, it must take a loss on serving Europeans. And if Meta does the arithmetic and determines staying in the European Union isn’t worth it, the commission will surely take it to court over the departure, branding that somehow anti-consumer, too — even though the commission forced the company out in the first place.

The only way for Meta to remain profitable in the European Union is for the company to inundate users with terrible advertisements in the hope that it can make up for the lack of targeting by simply selling more advertisements at a lower price. That move, however, would also probably draw the ire of regulators in Brussels, who clearly have nothing better to do than go after technology companies that not a single ordinary European is complaining about. Europeans can already opt out of targeted advertising for free by using App Tracking Transparency, Apple’s technology for restricting access to the tracking identifier companies like Meta use to follow users across the web. But a private-sector solution does not seem to be in the commission’s interest, so it has instead opted to bully any company doing business in its sacred bloc for no reason other than politics. Is there seriously no way for Europeans to object to this madness?

Don’t Trust Anything on the Internet

Julian Barnes, reporting for The New York Times:

The threat against U.S. elections by Russia and other foreign powers is far greater today than it was in 2020, the chairman of the Senate Intelligence Committee said on Tuesday.

Senator Mark Warner, the Virginia Democrat who leads the committee, said the danger had grown for multiple reasons: Adversarial countries have become more adept at spreading disinformation, Americans are more vulnerable to propaganda, communication between the government and social media companies has become more difficult and artificial intelligence is giving foreign powers new abilities…

“With polarization in this country, and the lack of faith in institutions, people will believe anything or not believe things that come from what used to be viewed as trusted sources of information,” Mr. Warner said. “So there’s a much greater willingness to accept conspiracy theories.”

Vulnerability to influence operations, Mr. Warner said, is not confined to the United States. In Slovakia, for example, Russian information operations influenced views of Russia’s war in Ukraine.

“Slovakia was 80 percent pro-Ukraine,” he said. “Two years later, with massive amounts of Russian misinformation and disinformation, you have a pro-Russian government and 55 percent of Slovaks think America started the war in Ukraine.”

These statements from Warner aren’t shocking in the slightest. Social media is now filled with disinformation from self-proclaimed Americans on both the left and the right of the political spectrum. Left-wing nuts continue to push pro-China, pro-Iran propaganda in the name of “progressivism,” while right-wingers brazenly post pro-Kremlin, anti-Ukraine rhetoric. As I’ve written before, I don’t think many of these accounts belong to actual Americans; where they do, I surmise that ideas seeded by Russian and Chinese bots have taken over their owners’ minds, including those of some members of Congress. Young Americans are more likely than ever to distrust institutions and their government — and rightfully so — but they are also more likely to subscribe to foreign propaganda that advances flawed ideologies.

Russia and China continue to flood social media websites with misinformation to influence the 2024 election. The two countries employ bots, and sometimes even human labor, to publish websites full of false information and social media posts expressing dissatisfaction with the current administration, and to launch advertising campaigns that create distrust in the U.S. government. These tactics prove overwhelmingly successful every day; a message disguised as an “America First” endorsement is much more likely to be listened to than one directly opposing American military efforts overseas. A Russian operative asking, “Why are we sending our money to Ukraine when we should be securing our southern border?” has a more striking effect on right-leaning Americans than “We should stop sending money to Ukraine because it’s none of our business.”

These operations are not covert or minuscule in scale — they are widespread on the internet today, on social media websites like X and Threads. People with “radical” political views might not actually be expressing any beliefs at all; they are probably just Russian assets. My advice is not to interact with these foreign influence accounts whatsoever, and I further demand that social networks like Meta take more action to combat misinformation and perform a mass deletion of spam accounts pushing outrageous beliefs. This is happening in the United States, in Europe, and in many Asian countries, and internet citizens must be more vigilant in reporting it and combating the spread of dangerous propaganda that has the power to threaten our respective democracies.

Rivian Launches EV Charger Reliability Grades

Andrew Hawkins, reporting for The Verge:

Rivian is pushing a new software update that will give its customers better insight into which EV chargers to visit — and which to avoid…

“Our North Star is charging and trip planning in EVs should just work,” Wassym Bensaid, Rivian’s head of software, told The Verge. “You should not think about it.”

I had the chance to test out Rivian’s new software update during a recent road trip in an R1S SUV. Inputting a destination brought up dozens of chargers on the vehicle’s navigation, each of which displayed a letter grade. An “A” grade is a sign that the charger was in good working condition, while an “F,” well, speaks for itself.

“Surprisingly, actually, there’s multiple chargers rated F,” Bensaid said. “That was one of the ‘a ha’ moments as we went through the data.”

The new ranking system is determined by a host of data collected by Rivian’s customers, Bensaid said. Each vehicle is connected and constantly sending data back to the company’s headquarters, which then gets processed to remove “noise” that’s not essential to the decision-making algorithm.

This is extremely clever. While Tesla Superchargers are famously reliable and display stall availability in Tesla vehicles’ infotainment systems, that functionality isn’t available for other electric vehicle manufacturers to integrate themselves, even though Rivian and other automakers have recently added support for Tesla Superchargers, working with Tesla to develop adapters for their vehicles. (Rivian’s integration launched in March.) Rivian’s new software feature not only adds reliability information for Tesla Superchargers to the map in the vehicle itself, but also collects information from other brands, like Electrify America, the United States’ largest DC fast-charging network outside of Tesla’s Superchargers.

Electrify America stalls are often plagued with reliability issues, and Electrify America itself doesn’t have the ability to monitor how its chargers are operating. The only way for customers to check whether an Electrify America unit is functioning is to drive to one and hope for the best. Rivian’s new software automatically collects analytics whenever a Rivian driver charges at a stall; after enough drivers have charged at a location, a grade appears for all other drivers on Rivian’s in-car map. While this information isn’t real-time, unlike Tesla’s feature, which automatically notifies drivers when a stall is broken, it is better than arriving at a charger knowing nothing about its reliability.
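Rivian hasn’t published how it turns this crowd-sourced data into grades, but the description above — aggregate charge sessions, filter out noise, and require enough data before showing a result — suggests something like the following sketch. Every name, threshold, and the minimum-session cutoff here is invented for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChargeSession:
    """One crowd-sourced report from a vehicle that used a charger."""
    charger_id: str
    succeeded: bool  # did the session start and finish normally?

def grade_charger(sessions: List[ChargeSession], min_sessions: int = 5) -> Optional[str]:
    """Return a letter grade for one charger, or None without enough data."""
    if len(sessions) < min_sessions:
        # A grade appears only after enough drivers have charged here.
        return None
    rate = sum(s.succeeded for s in sessions) / len(sessions)
    # Map the observed success rate onto school-style letter grades.
    for cutoff, letter in [(0.95, "A"), (0.85, "B"), (0.70, "C"), (0.50, "D")]:
        if rate >= cutoff:
            return letter
    return "F"
```

Under these made-up cutoffs, five successful sessions and one failed one at a stall — about an 83 percent success rate — would earn a “C,” while a rarely visited rural charger would show no grade at all.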

Hawkins points out that this feature is limited by how few Rivian cars traverse the roads of the United States. Underused chargers in rural areas are significantly less likely to ever be visited by a Rivian driver, let alone by the several drivers the software presumably needs before it can calculate reliability information, so only busy chargers in metropolitan areas will benefit from the update. I still think that is better than nothing, and it also puts pressure on Electrify America and ChargePoint, another EV charger brand, to publish uptime data, whether through an application programming interface or partnerships with automakers, so the statistics appear in more vehicles.