No. Just no.

The Pixels 9 and 9 Pro. Image: Google.

Google on Tuesday announced updates to its Pixel line of smartphones from its Mountain View, California, headquarters: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, and Pixel 9 Pro Fold. The Pixel 9 Pro is the newest form factor of the three slab-style phones, catering to power users who want a smaller phone for easier reachability and portability, while the Pixel Fold has been renamed and updated to sport more flagship specifications and a new size, bringing it more in line with Google’s other flagship mobile devices. The new phones are all made to bring Google “into the Gemini era” — which sounds like something pulled straight from the Generation Z vernacular — adding new artificial intelligence features powered by on-device models running on the Tensor G4, the new custom system-on-a-chip inside all of Tuesday’s phones.

Some of the AI features are standard-issue in the modern age and are reminiscent of Google’s competitors’ offerings, like Apple Intelligence. Gemini, Google’s large language model and chatbot, can now integrate with various Google products and services, similar to Google Assistant. It’s now deeply built into Android, responds quickly, and is multimodal, so the LLM can see the contents of a user’s screen. “Complicated” doesn’t begin to describe Google’s AI offerings — this latest flavor of Gemini uses the company’s Gemini Nano with Multimodality model, first demonstrated at Google I/O, its developer conference, earlier this year. Some features are exclusive to Gemini Advanced users because they require Gemini Ultra; Gemini Advanced comes included in a subscription service called Google One AI Premium. The entire lineup is a mess, and tangled in it is the traditional Google Assistant, which still exists for users who prefer the legacy experience.

But cutting-edge buyers will most likely want to take advantage of the Gemini assistant built into Android, which is separate from the Gemini web product also available in the Google app. While the general-purpose Gemini chatbot has access to emails and other low-level account information, it doesn’t run on-device or have multimodality, so it cannot see what is on a user’s screen or reach into Google apps. One of the examples Google provided on Tuesday was a presenter opening a YouTube video and asking Gemini to provide a list of foods shown in the video. Another Google employee showed cross-checking a user’s calendar against concert dates printed on a piece of paper. Gemini was able to transcribe the paper using the camera, check Google Calendar, and provide a helpful response — after failing twice live during the demonstration. These features, confusingly, are not exclusive to the new Pixel phones, or even Google devices at all; they were even demonstrated using a Samsung Galaxy S24 Ultra. But I think they’re the best of the bunch and exactly what Google needs to compete with Apple and OpenAI.
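
Google hasn’t said how the on-device pipeline is wired together, but the shape of a multimodal request (an image plus a question about it) is easy to picture. Here’s a minimal sketch using Google’s public google-generativeai Python SDK; the cloud API stands in for the private on-device Gemini Nano path, and the model name and file paths are placeholder assumptions, not what the Pixel actually runs:

```python
# Toy illustration of a multimodal "what's on my screen?" query.
# This calls Google's public cloud API, not the on-device Gemini
# Nano with Multimodality pipeline that the Pixel demos used.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

# "gemini-1.5-flash" is just an example of a model available via the API.
model = genai.GenerativeModel("gemini-1.5-flash")

# Stand-in for a frame of the user's screen, e.g., a paused YouTube video.
screenshot = Image.open("screen.png")

response = model.generate_content(
    [screenshot, "List every food item visible in this frame."]
)
print(response.text)
```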

Another one of these user-personalized yet non-Pixel-exclusive features is Gemini Live, Google’s competitor to ChatGPT’s new voice mode from May, which has yet to even fully roll out. The LLM communicates with users in one of 10 voices, all made to sound human and personable. Gemini Live, unlike the Android Gemini features with multimodality, runs in the cloud via the Gemini Ultra model, Google’s most powerful offering. The robot can be interrupted mid-sentence, just like OpenAI’s, and is meant to be a helpful companion that doesn’t rely on personal data and context as much as it does general knowledge. In other words, it’s a version of Gemini’s web interface that speaks instead of writes, which may be helpful in certain situations. But I think Google’s voices — especially the ones demonstrated onstage — sounded more robotic than OpenAI’s, even though the ChatGPT maker’s main voice was pulled for sounding too similar to Scarlett Johansson.

In videos shot by the press, I found the chatbot unlikely to rely on prior chat history, either: When it was interrupted mid-answer and asked to modify an earlier prompt, it forgot to finish the information it had been reciting. It feels more like a text-to-speech synthesizer, in the same way ChatGPT’s current, pre-May voice mode does, and I think it needs more work. And it isn’t as impressive as the on-device personalized AI either, since Gemini Live isn’t meant to replace Google Assistant: It can’t set timers, check calendar events, or do other personalized tasks. This convoluted, forked user experience is bound to confuse unsuspecting users — “Which AI tool from Google do I use for this task?” — but Google sees the multitude of offerings as a plus, giving users more flexibility and customizability.

Another feature Google highlighted was the new Pixel Screenshots app, a tool that leaked to the press in its full form weeks ago. The app gathers all of a user’s screenshots and uses a combination of on-device vision models and optical character recognition to understand their contents and remember where they were taken for later viewing. The interface is meant to be used as a Google Search of sorts for screenshots, helping users search the text and images within them with natural language — a new twist on the age-old concept of “lifestreams.” I think it’s a really neat feature and one that I’ll sorely miss on the iPhone. I take tons of screenshots and would take even more if they added up to a sort of note-taking app for images.
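
Google hasn’t published the app’s internals, but the pipeline as described (run OCR over every screenshot, index the text, search it later) is simple to sketch. Here’s a toy version in Python, with pytesseract standing in for whatever OCR stack Google actually uses and plain substring matching standing in for natural-language search:

```python
# A toy version of the Pixel Screenshots idea: OCR every screenshot,
# keep the recognized text in a tiny index, and search it by substring.
from pathlib import Path

import pytesseract  # assumes the Tesseract OCR binary is installed
from PIL import Image


def build_index(folder: str) -> dict[str, str]:
    """Map each screenshot's path to the text OCR finds inside it."""
    index = {}
    for path in Path(folder).glob("*.png"):
        index[str(path)] = pytesseract.image_to_string(Image.open(path)).lower()
    return index


def search(index: dict[str, str], query: str) -> list[str]:
    """Return the screenshots whose OCR'd text mentions the query."""
    return [path for path, text in index.items() if query.lower() in text]


index = build_index("screenshots/")  # hypothetical folder of screenshots
print(search(index, "concert"))      # find that ticket screenshot again
```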

The more eccentric and eye-catching AI features are restricted to the latest Pixels and are focused on photography and image generation — and I despise them. I was generally a fan of Apple Intelligence’s personal context and ChatGPT’s interactive voice mode when both products were announced earlier this year, but the image generation features from both companies — Image Playground and DALL-E, respectively — have frankly disgusted me. I hate the idea of generating moments that never existed, first and foremost; and I also despise the cheapness of AI “art,” which is anything but creative. I don’t think there is a single potential upside to AI image generation whatsoever, and I continue to believe it will be the most harmful of all generative artificial intelligence technologies. While AI firms race to stop users from flirting with AI chatbots, mistrust in legitimate images has skyrocketed. One is harmless fun with a few rare instances of objectophilia; the other has the potential to sway the most consequential election of the 21st century thus far.

This is not “Her,” this is real life. It doesn’t matter if people start falling in love with their AI chatbots. They’ll never take over the world.

But why would Google care? For Mountain View, it’s all about profit and maximum shareholder value. Because Google didn’t learn its lesson after creating images of racially diverse Nazis, it has now added a bespoke app for AI image generation powered by Gemini. Words cannot describe my sheer vexation when I hear the catchphrase for Gemini image generation on Pixel: “Standing out from the crowd requires a touch of creativity.” Pardon, but where is the creativity here? A computer is stealing artwork from real artists, pouring it all into a giant puddle of slop, and carefully portioning out bowls of wastewater to end users. That isn’t creativity; that’s thievery and the cheapening of hard work. Nobody likes looking at AI pictures because they lack the very creative expression that defines artwork. There is no talent, passion, or love exhibited by these inhuman works because there is no right brain creating them. It’s just a computer predicting the next binary digit in the pattern based on what it has been taught. That is not artwork.

But I would even begrudgingly ignore AI imagery if it were impossible for real photographs taken with the Pixel’s camera to collide with the messiness of artificial patterns of ones and zeros. Unfortunately, it is not, because Google seems dead set on forcing bad AI down people’s throats. There is a difference between “I am not interested” and “no,” and Google hit “no” territory when it announced people would be able to enhance their images with generative AI. Take this Google-provided example: A presenter opened a photo of a person sitting in a grassy field, shot from an unusual but interesting rotated perspective. He then used Gemini to straighten it out, artificially creating a background that wasn’t there previously, and then added flowers to the field with a prompt. That image doesn’t look artificially created — it looks real to the naked eye. It isn’t creativity, it’s deception.

So what is a photograph, when we get down to brass tacks? Personally, I believe in the dictionary definition: “a picture made using a camera, in which an image is focused onto film or other light-sensitive material and then made visible and permanent by chemical treatment, or stored digitally.” No image was focused onto any light-sensitive material here — the photo shown in the presentation does not exist. That location with flowers and a field is nonexistent, and that person has never been there. It is a digital imagination, not lovingly crafted by an inspired human being but conjured by a computer that has ingested hundreds of thousands of images of flowers and fields so that it can accurately recreate one on its own. That is not a photo, nor what Isaac Reynolds, the group product manager for the Pixel Camera, describes as a “memory.” That memory, no matter how it is construed in a person’s mind, is not real — it is an imagination. A machine has synthesized that imagination, but it has not made it real, and it cannot.

The problem with these nauseating creations isn’t the fact that they’re conjuring up a false reality, because computers have been doing that for ages. I’m not a troglodyte who doesn’t understand the advancement of technology; I am fundamentally pro-AI. Rather, they dissolve — not blur — the line between fictitiousness and actuality, because the software encourages people to create things that don’t exist. A copy of Photoshop is the digital equivalent of crayons and paper, whereas there is no physical analogue to a photo generation machine. Photoshop is a tool that allows people to realize artwork they have already imagined; if someone can’t imagine a nonexistent scene, they could never create it in Photoshop. But via Gemini, they can fabricate an idea they never had. One tool is art, the other is artificial. You could use Photoshop to generate a fake image of millions of people lining up outside of Air Force Two waiting for Vice President Kamala Harris and Governor Tim Walz of Minnesota, but that is fundamentally art, not a photograph. Creating the same image via an AI generator is not art. It creates distrust.

Regardless of how much these sociopathic companies gaslight the public, there will always be a feeling of uneasiness when generative AI conveniently mingles with real photos. The concept of a “real photo” has all but disintegrated now that the boundary between the imaginative and physical realms has withered away. If one photo is fake, all photos are fake until further information is given. Trust in photography, human-generated creative works, and digitally created work has been entirely eroded. There is no longer a functional difference between these three distinct mediums of art.

Once you begin to involve people in the moral complexities of generative AI, the idea of taking a photo — capturing a real moment in time to preserve it for future viewing — begins to erode. Let me put it this way: If a moment didn’t happen, but there is photographic evidence of it happening, is that photographic evidence truly “evidence,” or is it a figment of a person’s imagination? Now assume that imagination wasn’t a person’s. Would it still be considered an imagination? (Imagination, noun: “the faculty or action of forming new ideas, or images or concepts of external objects not present to the senses.”) Google has been veering in the direction of blending computer-generated imaginations — also known as computer-generated imagery — with genuine photography, with its efforts thus far culminating in Best Take, which automatically merges a burst of similar shots to create one where everyone in the picture is smiling and positioned correctly.

Were all of those subjects positioned and posing perfectly? No. But at least they were all there.

Enter Google’s latest attempt at the reality distortion field, minus the charisma: Add Me. The idea is simple: Take a photo without the photographer, then take another photo of just the photographer, and merge both shots. Everything I said about the field of flowers applies here. Using Photoshop to add someone into a picture after the fact makes that picture no longer a photograph per the definition of “photograph”; it is now a digitally altered image. The photographer will probably highlight that detail if the image is shared on the web — it makes for an entertaining anecdote — or the technique may occasionally be used for deception. I have no problem with art, and I’m not squabbling about how generative AI could be used deceptively. But I do have a problem with Google adding this feature to the native photo-taking process on Pixel phones. These images will be shared like photos from now on, even though they’re not real — not just enhanced, but fabricated. They are not photos, yet they will be treated like photos. And again, when fiction is treated as fact, all fact is fiction.
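
For what it’s worth, the mechanics of a two-shot merge are not exotic; at its crudest, it’s a masked paste of one exposure onto another. Here’s a naive sketch with Python’s PIL. Add Me assuredly does something far more sophisticated, with AR-guided alignment and generative blending, and the file names here are hypothetical, but the output is likewise one image assembled from two separate moments:

```python
# Naive two-shot composite: take the photographer's pixels from shot B
# wherever the mask is white, and the group shot's pixels everywhere else.
from PIL import Image

base = Image.open("group_without_photographer.jpg")      # shot A: the group
second = Image.open("photographer_alone.jpg")            # shot B: the photographer
mask = Image.open("photographer_mask.png").convert("L")  # white = take from shot B

# This crude paste assumes both shots are framed identically; the real
# feature aligns the frames for you with on-screen AR guidance.
composite = Image.composite(second, base, mask)
composite.save("merged.jpg")  # one "photo" built from two moments
```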

Not all AI is bad, but the way one of the largest technology companies in the world portrays its features is important. Maintaining the distinction between fact and fiction is a critical function of technology, and that divide is now effectively nonexistent. That bothers me: We can no longer trust photography as something good and real.


I think Pixels are the best Android phones on the market for the same reason I believe iPhones are the best phones bar none: the tight integration between hardware, software, and services. Google makes undeniably gorgeous hardware, and this year’s models are no exception. The Pixels 9 Pro remind me an awful lot of the iPhone’s design, with glossy, polished metal edges and flat sides, but I think Google put a distinctive spin on the timeless design that makes its new handsets look sharp. The camera array on the back now takes on a pill shape, departing from the edge-to-edge “camera bar” of previous models, and I think the accent looks handsome, if a bit robotic. (Think Daft Punk helmets.) If the Pixels 9 Pro are anything like previous models, I know they’ll feel spectacular in the hand, too. Pixels have always been among the most well-built Android phones, and since the Pixel 6 Pro, Google has added some spice to the design that makes them stand out.

The dual Pro-model variants mimic Apple’s lineup, offering both 6.3-inch and 6.8-inch models. I’m fine with the 6.8-inch size, but I wish the Pixel 9 Pro were a bit smaller — say, 5.9 inches, similar to Apple’s pre-iPhone 12 standard-size Pro models. Personally, I think that’s the best phone size, and I miss it. (Also, “Pixel 9 Pro XL” is a terrible name.) The Pixel 9 also measures 6.3 inches, the size with the most mass-market appeal.

The Pixel 9 Pro Fold has the worst name of all the devices, and it’s also nonsensical; this is only the second folding phone Google has made, not the ninth. But Google clearly wanted to highlight that the Pixel Fold and Pixel 9 Pro now essentially have feature parity — comparable outer displays, the same Tensor G4 chipset, and the same amount of memory. The camera systems do differ, however: The Pixels 9 Pro have a 50-megapixel main sensor and a 48-megapixel ultra-wide lens, whereas the Pixel 9 Pro Fold only has a 48-megapixel main camera and a 10-megapixel ultra-wide. (For reference, the Pixel 9 has the same camera system as the Pixel 9 Pro, minus the telephoto lens; see The Verge’s excellent overview here.) Other than that, all three Pro models have identical specifications. I assume the reason for the downgraded cameras is space — the folding components occupy a substantial amount of room internally, so all folding phones have marginally worse specifications than their non-folding counterparts.

The Pixel Fold from last year had a unique form factor with a shorter yet wider outer screen. This year’s model resembles a more traditional design from the front, with a 6.3-inch outer display, just like the Pixel 9 Pro. To date, I think this is my favorite folding phone.

The last bits of quirkiness from Tuesday’s announcement are the launch dates: The Pixel 9 and 9 Pro XL ship on August 22, the Pixel 9 Pro sometime in September, and the Pixel 9 Pro Fold on September 4. The Pixel 9, which has always been the best-priced mid-range Android smartphone, now gets a $100 price hike to $800, which is a shame, because I’ve always thought the $700 price was mightily competitive. It’s still a great phone for $800, but now it competes with the standard iPhone head-on rather than undercutting it by $100 like last year’s model did. The Pixel 9 Pro and 9 Pro XL are at iPhone prices — $1,000 and $1,100, respectively — and the Pixel 9 Pro Fold starts at $1,800 with 256 gigabytes of storage, double that of the cheaper Pixels.

Good event, Google. Just scrap that AI nonsense, and we’ll be fine.