Apple’s suite of AI tools is here. How will it change how people use their devices?

Apple Intelligence. Image: Apple.

Apple on Monday announced a new suite of artificial intelligence features at its Worldwide Developers Conference, held at its Apple Park headquarters in Cupertino, California. The new features, together called “Apple Intelligence,” allow users to summarize articles, emails, text messages, and notifications; improve and generate new writing in system-wide text fields; pull data from across their apps like Mail, Photos, and Contacts to power a wide range of natural language processing features; and interact with a new version of Siri, which can now be typed to and can perform actions within apps using an improved version of a technology called App Intents.

It also allows users to generate new AI images and emojis with features like “Genmoji” and “Image Playground” integrated into Messages and other third-party apps, as well as have AI stitch photos together into short videos with motion effects and music — a feature called “memory movies.” Users can also remove unwanted objects from the backgrounds of photos, search their libraries using natural language, and edit images with effects and filters automatically. Apple Intelligence runs both on-device and in the cloud, depending on what Apple’s internal logic decides is necessary for the task. It leverages a breakthrough called Private Cloud Compute, utilizing the security of Apple silicon processors to handle sensitive user data — ensuring it is never stored or made accessible to anyone, including Apple. Private Cloud Compute servers run an operating system that can be inspected by outside security researchers, Apple said, via software images that can be verified to ensure they are the ones actually running on Apple’s servers. Greg Joswiak, Apple’s marketing chief, said the servers run on 100 percent renewable energy. These servers were easily the most intriguing technical demonstration of the day.

Apple also announced a partnership with OpenAI to bring ChatGPT, its flagship large language model, to iOS 18, iPadOS 18, and macOS 15 Sequoia — the new operating systems coming to Apple devices this fall — via Apple Intelligence, powering general knowledge queries and complicated creative writing assignments Apple deems too intensive for its own LLMs, both in the cloud and on-device. The integration — also coming in the fall — does not build a chatbot into the operating systems; rather, ChatGPT serves as a fallback for Apple Intelligence when it needs to search the web or generate lengthier pieces of text. When ChatGPT is used, a user’s IP address is obscured and Apple makes the call to ChatGPT directly, asking the user to confirm it is OK to use the external service to handle the query. Apple stressed that the feature would be turned off by default and that no personal data would be handed over to ChatGPT — a marked difference from how its own foundation models operate. It also announced that more models would become available later, presumably as the company signs contracts with other AI makers, such as Google.

Together, the new features, which will be enabled in the fall for beta testers, finally catch Apple up to the AI buzz that has engulfed the technology industry since the launch of ChatGPT in November 2022. Investors have quizzed Tim Cook, Apple’s chief executive, on every post-earnings call since then about when Apple would join the AI frenzy, and now, its answer is officially here. Apple Intelligence does things differently, however, in a way that reflects the company that built it: it focuses on privacy and on-device intelligence rather than the flashy gimmicks other tech companies like Google and Microsoft have launched. Yes, by adding AI to its flagship operating systems used by billions around the world, Apple becomes vulnerable to hallucinations — phenomena where chatbots confidently provide incorrect answers — and involves itself in the difficult business of content moderation. But it also sets a new gold standard for privacy, security, and safety in the industry while bringing novel technology to its widest audience yet.

That being said, no technology comes without reservations. For one, Apple Intelligence’s Image Playground features look cheaply made, generating poor-quality images that most artists would rather do without. The systems will also be easy to abuse: users will inevitably try to trick Apple Intelligence into synthesizing illegal, sexually explicit, or otherwise objectionable content, even where Apple prohibits it. But Apple says it has thought through these issues: In response to a question from John Gruber, the author of Daring Fireball, Apple executives said Apple Intelligence isn’t made to be a general-purpose AI tool as much as it is a personal assistant that uses people’s personal data to provide helpful, customized answers. One example a presenter demonstrated onstage was the question, “When should I leave to pick up my mom from the airport?” Siri, in this case, was able to surface the appropriate information in Messages, track the flight, and then use geolocation and traffic data to map directions and calculate the estimated travel time. Apple Intelligence is not meant to answer questions about the world — it’s intended to act as a companion in iOS and macOS.

Apple Intelligence has one glaring compromise above all, though: It only works on the iPhone 15 Pro or later, iPads with the M1 chip or later, and Apple silicon Macs. The narrow compatibility list will inevitably cause a furor well beyond the tech media, with cynicism that Apple artificially created the limitation to boost sales of new devices already spiraling on social media — but the reason this bottleneck exists is rather simple: AI requires significant computing power. Intel Macs don’t have the neural processing units — Apple’s “Neural Engines” — suited to LLMs, and older iPhones, as well as current-generation iPhones with less powerful processors, lack enough “grunt,” as John Giannandrea, Apple’s machine learning chief, put it Tuesday on “The Talk Show” live from WWDC. Add to that the enormous memory constraints that come with running an entire language model on a mobile device, and the requirement begins to make sense: When an LLM needs to answer a question, the whole model — which can be many gigabytes in size — needs to fit in the computer’s volatile memory.
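To put rough numbers on that memory point, here is a back-of-the-envelope sketch in Swift. The parameter count, quantization level, and cache dimensions are assumptions for illustration; Apple had not published its on-device model’s specifications at the time.

```swift
import Foundation

// Back-of-the-envelope memory math for an on-device LLM. The parameter count,
// quantization, and cache dimensions below are assumed for illustration only.
let parameters = 3_000_000_000.0   // hypothetical ~3B-parameter model
let bitsPerWeight = 4.0            // assumed low-bit quantization
let weightBytes = parameters * bitsPerWeight / 8.0

// The key-value cache grows with the context length; dimensions are assumed.
let layers = 32.0, kvHeads = 8.0, headDim = 128.0, contextTokens = 4096.0
let kvCacheBytes = 2 * layers * kvHeads * headDim * contextTokens * 2 // fp16

let totalGB = (weightBytes + kvCacheBytes) / 1_073_741_824.0
print(String(format: "~%.1f GB resident while answering a single prompt", totalGB))
// Roughly 2 GB on these assumptions, a large slice of an 8 GB iPhone's memory
// that the rest of the system still has to share. Hence the hardware cutoff.
```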

After mulling over the announcements from Monday for a few days, I have thoughts on each of the integrations and how users might use them. I think Monday was one of the most impressive, remarkable, and noteworthy developer conferences Apple has hosted in recent years — at least since 2020 — and while I haven’t tried Apple Intelligence yet, I’m very intrigued to learn more about its capabilities and how it will shape the nascent future of Apple’s platforms. Here are my takeaways from the Apple Intelligence portion of Monday’s keynote.


Siri and App Intents

The new Siri. Image: Apple.

Siri finally received a much-needed update, further integrating the assistant into the system and allowing it to perform actions within apps. The new version of Siri uses “richer natural language understanding,” powered by Apple Intelligence, to let users query the assistant just as they would a person — pausing mid-sentence, correcting themselves, and more. It can also act as what is essentially an AI chatbot: double-tapping at the bottom of an iPhone or iPad screen brings up a text field for typing to Siri, framed by a new, rounded animation that wraps around the device’s bezel, with Apple Intelligence parsing the questions. Siri also knows exactly what is on the screen of someone’s device at a given moment; instead of having to name a particular show, for example, a user can ask: “Who stars in this?” If a notification pops up, Siri knows its contents and can perform actions based on the newfound context.

Siri now utilizes personal information from all apps — emails, text messages, phone call summaries, notes, and calendar events, all information stored on iCloud or someone’s phone — in what amounts to an on-device knowledge graph, which Apple calls the Semantic Index. This information is used as personal context for Siri, and any app can contribute its data to the context pool. The current version of Siri in iOS 17 does perform searches, but those searches are only keyword-based, i.e., if someone asks for a specific detail from an old text message thread, Siri wouldn’t be able to find it. The new version goes beyond basic regular expressions and keywords, using semantic search over user-generated content instead. Additionally, Apple Intelligence can use its summarization capabilities to catch users up on messages, emails, and notes — similar to the ambitions of the Humane Ai Pin and Rabbit R1.
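To make the keyword-versus-semantic distinction concrete, here is a toy sketch in Swift. It is not Apple’s implementation: the embed function is a crude stand-in for the language-model embeddings a real semantic index would use, and every name here is invented for illustration.

```swift
import Foundation

// Keyword search: only finds messages containing the literal query string.
func keywordMatch(_ query: String, in messages: [String]) -> [String] {
    messages.filter { $0.localizedCaseInsensitiveContains(query) }
}

// Toy embedding: a bag-of-letters vector. A real semantic index would use a
// language model, so "when does Mom's flight land?" could match
// "Her plane gets in at 6:30" despite sharing no keywords.
func embed(_ text: String) -> [Double] {
    var vector = [Double](repeating: 0, count: 26)
    for scalar in text.lowercased().unicodeScalars where scalar.isASCII {
        let index = Int(scalar.value) - 97
        if (0..<26).contains(index) { vector[index] += 1 }
    }
    return vector
}

// Cosine similarity between two embedding vectors.
func cosine(_ a: [Double], _ b: [Double]) -> Double {
    var dot = 0.0, normA = 0.0, normB = 0.0
    for i in 0..<min(a.count, b.count) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / max(sqrt(normA) * sqrt(normB), .leastNonzeroMagnitude)
}

// Semantic search: embed everything, rank by similarity, return the top hits.
func semanticMatch(_ query: String, in messages: [String]) -> [String] {
    let queryVector = embed(query)
    return messages
        .map { ($0, cosine(embed($0), queryVector)) }
        .sorted { $0.1 > $1.1 }
        .prefix(3)
        .map { $0.0 }
}
```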

The most remarkable new feature is Siri’s ability to take action in apps. Using a technology called App Intents, which exposes actions from apps to the system, Siri can use a prompt to decide what actions to run without intervention from the user. Because Siri has the advantage of personal context, it already knows what data is available to be acted upon, so if a user wants to, say, send a note made earlier as an email, they can simply instruct Siri to do so without having to name the note or say where it lives in the system, such as which app it’s in. Siri also treats whatever is on the screen as context — a user can ask Siri to fetch a particular photo simply by describing it, then ask for it to be inserted into the current document. It’s the kind of “late but still great” move Apple excels at: combining four technologies — LLMs, personal context, on-screen awareness, and App Intents — into one feature without the user ever seeing the seams. It’s nothing short of magic.

Developers of apps that belong to any category in Apple’s predefined list — examples include word processing, browsing, and camera apps — can add App Intents for the Apple Intelligence-powered version of Siri to use with some modifications to their code, just as they would to add support for interactive widgets or Shortcuts. Somewhat interestingly, apps that fall outside Apple’s list aren’t eligible to be used with the new Apple Intelligence version of Siri. They can still expose shortcuts to Siri, just as they did in previous versions of Apple’s operating systems, but Siri won’t be able to act on them the way it can with supported apps, performing actions across apps in one step. Apple says it’ll add more app categories in the coming months, but some niche apps inevitably won’t be supported at all, which is a shame. Based on the rumors over the past year, I expected Apple to take a more visually focused approach, learning the behavior of user-facing buttons and controls within apps, but Siri’s actions are all programmatic.
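For a sense of what this adoption looks like in code, here is a minimal sketch using Apple’s existing App Intents framework. The intent and its parameters are invented for illustration, and the additional Apple Intelligence-specific annotations Apple described for the new Siri aren’t shown, since their final shape wasn’t public at the time.

```swift
import AppIntents

// An illustrative App Intent; the action and its parameters are made up.
// Declaring an intent like this is how an app exposes an action to the
// system and, in supported categories, to the new Siri.
struct SendNoteAsEmailIntent: AppIntent {
    static var title: LocalizedStringResource = "Send Note as Email"

    @Parameter(title: "Note Title")
    var noteTitle: String

    @Parameter(title: "Recipient")
    var recipient: String

    func perform() async throws -> some IntentResult {
        // App-specific logic would look up the note and hand it to a mail
        // composer here; omitted for brevity.
        return .result()
    }
}
```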

Either way, the new version of Siri amounts to two things: an AI chatbot with a voice mode, and a “large action model.” That combination will sound familiar to keen observers because it’s exactly what Rabbit aimed to achieve with the R1 in April — except that time, it “relied” heavily on vision to learn the user-facing graphical interfaces of websites to perform actions on behalf of users. (It didn’t actually do that — it was a scam.) Apple, in contrast, has constructed a much more foolproof solution, but one that will also inevitably be neglected by large app developers for an indefinite amount of time. Here’s why: Developers who integrate App Intents will notice that the amount of time people spend in their apps drops significantly, because bypassing the app is inherently the entire point of a virtual assistant. Large developers owned by corporate giants see that as the antithesis of their existence on the App Store — they’re there to make money, advertise, and track users, and Apple’s latest technology will not let them accomplish that central goal.

For the few apps that support it, it’ll feel like true magic, because in many ways, it is magic. And it’s not Apple’s fault that more won’t: This is just the cost of doing business with humans rather than robots — humans have their own ideas about how they want to conduct trade, and those ideas will clash with Apple’s, even if Apple’s approach is better for the user. For Apple’s apps, which most people use anyway, the new version of Siri will, for the first time in Siri’s 13-year career, feel intelligent and remarkable. Just hearing about it makes me excited because of how much technical work went into combining each of these features into harmonic software bliss. But Apple also did what it, unfortunately, so often does: it put the onus on developers instead of itself. Apple and its users will ask why app developers won’t support what genuinely is magic, but, getting down to brass tacks, the answer is clear: money. Taking into account the greediness of the world’s largest app developers, like Meta and Google, I have a tough time imagining this portion of Apple Intelligence will thoroughly change how people use their devices.

What will make a difference in the way people interact with their devices is Siri’s chatbot capability alone. Because Siri is now powered by LLMs and the Semantic Index, it’s naturally much smarter. Siri will no longer fail to understand simple questions because it cannot map complicated, human-like sentences onto its corpus of knowledge — it will soon have the added context to do so. For example, if someone wants to know more about what is on their screen — say, they just want to look something up — they can double-tap the bottom of the screen and ask Siri. Siri can then send it to someone, add it to a note, or do both in one step. It’s an AI chatbot, similar to ChatGPT, except more focused on answering personal questions than general knowledge ones. When Siri does need to connect to the internet, as it often will to answer people’s myriad curiosities, it can either perform a normal web search or hand the query to ChatGPT.

By bringing ChatGPT — not its chatbot interface, as leakers had speculated, but just the model1 — into Siri, and by extension the entire system, Siri becomes genuinely intelligent. There’s no need to be thrown into an external app or interface because ChatGPT’s answers appear inline, just like Siri answers in previous versions of iOS, but this time, those results are personalized, useful, and link to the web only when necessary. ChatGPT almost certainly will hallucinate, but (a) Apple displays an on-screen warning when connecting to ChatGPT stating that sensitive information should be double-checked manually, and (b) that is simply the limit of this technology in 2024. OpenAI may cut down on hallucinations in the future, perhaps as part of a new GPT-5 model, but for now, Apple has done everything it can to make Siri as smart as possible.

Siri will continue to make web searches, but as the web gets worse, the best hope for finding information effortlessly is ChatGPT. Coupled with personal context, an Apple-made chatbot built into every iPhone will be a feature many millions of people enjoy. With Apple Intelligence, Apple has fully realized Siri’s potential — the one it architected in 2011. Siri is no longer just an “assistant” that fails to understand most human queries and deflects to Bing. It is the future of computing, a future start-ups like Humane and Rabbit have been trying to conquer before Apple single-handedly put them to shame in two hours on a Monday. While Apple won’t call it a chatbot, it is an Apple chatbot, built with the privacy and security Apple customers have come to expect from Cupertino, all while enabling the future of computing. This, without a doubt, is the most groundbreaking component of Apple Intelligence.


Summaries

Priority notification summaries in iOS 18. Image: Apple.

One of the tasks at which LLMs typically succeed is summarizing text, so long as the wall of information fits within the model’s context window. Naturally, Apple has added summarization features to every place in its operating systems imaginable, including Mail, Notes, Messages, notifications, and Safari. These blurbs are written by Apple’s own foundation models, which Cook, Apple’s chief executive, has said have a near-100 percent success rate — so much so that Apple doesn’t even bother labeling summarized content. Giannandrea, Apple’s machine learning chief, told Gruber on “The Talk Show” that Apple will also be more permissive about what Apple Intelligence summarizes: While Apple Intelligence will refuse to generate illegal or explicit content, it will not refuse to summarize content it has already been given, even if that content goes against Apple’s creation guidelines. I find this a relief: If a user provides questionable material to ChatGPT and asks it to summarize or rewrite it, for example, it will refuse even when it shouldn’t. AI researchers, such as Giannandrea, work to minimize these so-called “refusals,” which will make the models more helpful.

In Mail and notifications, Apple Intelligence enables new “priority” summaries, handpicking conversations and notifications the system deems important. For example, instead of just showing the first two lines of an email in Mail — or the subject — Apple Intelligence will condense the main points of the correspondence into a sentence that provides just enough information at a glance. It’ll then surface the most important summaries, perhaps from a user’s most important contacts or crucial alerts from companies, at the top of the inbox, complete with an icon indicating that the message has been summarized. Mail will also categorize emails, similar to Gmail, into four discrete sections at the top of the inbox for easy organization. Notifications also receive the same treatment, with priority notifications summarized and placed at the top of the notification stack. If someone sends multiple text messages in a row, for example, they will be condensed and placed in the summary. These small additions will prove handy, especially when a user is away from their devices for a while. I’m a fan.

The same summarization of notifications also powers a “Minimize Distractions” Focus, offered alongside Do Not Disturb. While Do Not Disturb, by default, silences all notifications, Minimize Distractions asks Apple Intelligence to consider the content and context of each notification and determine whether it is important enough to break through the filter. While I assume users will be able to manually select contacts and apps that always remain whitelisted, as with any other Focus, the system does most of the work in this mode. When Apple Intelligence surmises a notification is important, it will label it “Maybe Important,” akin to the “Time Sensitive” labels in current versions of iOS. Messages labeled “Maybe Important” will be summarized and grouped automatically, parallel to “priority” notifications. I think Minimize Distractions should become the default Do Not Disturb mode for most people — it’s versatile, I think it’ll work well, and it shifts the burden of customizing a Focus from the user to the operating system.

Mail, Phone, and Notes also now feature summaries at the top of conversations. In Mail, a Summarize button can be tapped to reveal a longer summary — roughly a paragraph — and in Notes and Phone, users can now record a call and generate a summary of it afterward in the Notes app. Without a doubt, the latter feature will mostly be used to create text-only notes for personal use, since many jurisdictions require both parties of a call to consent to a recording (which is why iOS has prohibited call recording since its introduction), but I think the feature is clever, and it’ll come in handy for long, information-dense calls. Also in Mail, Smart Reply will scan emails for questions, then prompt a user to answer each one so they don’t miss an important detail. These prompts take the form of yes/no questions presented in a modal sheet, and tapping a suggestion automatically writes the answer into the email.

Safari’s summarization feature, however, is destined to be the most used: Near the Reader icon in the toolbar, users can choose to quickly summarize an article to receive the gist of it. These summaries are created through Reader Mode — the Safari view which allows users to read a clutter-free version of an article — and rely on Apple’s models to provide quick summarization. For once, it’s nice to see an AI tool that interfaces with the web and doesn’t disincentivize going to websites and giving publishers traffic. This is easily one of the best use cases for AI tools, and I’m glad to see Apple embracing it.

More broadly, the central idea of Apple Intelligence begins to crystallize in its text summarization features: AI assistants — whether Siri, Google Assistant, or Alexa — have always required active engagement to be helpful. Someone asks an assistant a question, but a good human assistant never needs to be asked for help. Assistants should work passively, helping with the busywork nobody wants to do. Summarizing notifications, replacing the (worthless) two-line previews in the email inbox with one-sentence blurbs, filtering unnecessary messages down to the bare minimum, and quickly drafting call notes are all examples of Apple entering the lives of millions to assist with tasks many don’t even know need to be done. Nobody thinks the two-line message previews in Mail are useless because, from the conception of email and the internet, that was always how they appeared. Now, there’s no need for a subject line or a preview whose first line is almost always a greeting — AI can make email quicker and more enjoyable.

While the new Siri features are, as I said before, examples of active assistance — a user must first ask for help — Apple Intelligence is also meant to proactively involve itself in its users’ lives, and come to think of it, that’s logical. AI might flub or make up answers confidently, but so would a person; nobody would discard an email based on the summary alone. They’d use it to decide whether it’s worth reading immediately or later. Similarly, by engaging users passively, the system decreases human reliance on AI while making a meaningful difference in everyday scut work. This should be a core tenet of AI that other companies take note of — one might think these features are just text summarization, but they compose a much broader theme. Apple is leveraging its No. 1 advantage over OpenAI and Microsoft: it alone can blend into people’s lives passively, without interruption or nuisance, while still providing a helpful service. I know the phrase gets overused, but this is something only Apple could do.


Writing Tools

Writing Tools in macOS 15 Sequoia. Image: Apple.

Apple continued its practice of “sherlocking”2 by adding what is practically a supercharged version of Grammarly to every system-wide native text field in iOS and macOS. What Apple means by “native text field” is unclear, but I have to assume it refers to fields built with Apple’s own developer technologies for writing text. Examples presented onstage as supporting Writing Tools, the suite of features, include Bear, Craft, and Apple’s own Pages, Notes, and Keynote. The suite encompasses a summarization tool for users to have their own text summarized, as well as tools to pull out key bullet points and turn data written in paragraph form into tables or lists — a feature I think many will find welcome because of how arduous tables can be to put together. The two grammar-correction features allow users to have the system proofread and rewrite their text — both tools use the language models’ reasoning capabilities to understand the context of the writing and modify it to a user’s demands.
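My best guess at what “native text field” means is the standard system text views, like UITextView on iOS and NSTextView on the Mac, which Apple suggested pick up Writing Tools automatically. As a rough illustration, here is a minimal UIKit view controller using a stock UITextView; the class is mine, and any API for tuning or opting out of Writing Tools isn’t shown because Apple hadn’t published those details at the time.

```swift
import UIKit

// A plain UIKit text view. Apple indicated that standard system text controls
// like UITextView get Writing Tools (proofread, rewrite, summarize) on
// supported OS versions without extra adoption work; any API for customizing
// that behavior isn't shown here, since it wasn't public at the time.
final class NotesViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        textView.font = .preferredFont(forTextStyle: .body)
        view.addSubview(textView)
        // Selecting text in this view should surface the system's Writing
        // Tools entry point alongside the usual edit-menu actions.
    }
}
```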

One humorous example Apple presenters highlighted onstage was rewriting a casual résumé to sound more professional, but it perfectly illustrated the benefit of having a system-wide, contextually aware writing assistant within reach of the cursor. The proofreading feature underlines parts of the writing that may contain grammar mistakes, similar to Grammarly, and suggests how to correct them — Craig Federighi, Apple’s software engineering chief, highlighted how all suggestions can be accepted with just one tap or click, too. If none of the pre-made suggestions in Writing Tools fits, a user can describe the changes they’d like Apple Intelligence to make using the “Describe your change” item at the top of the menu, which launches a chatbot-like interface for text modifications. The feature set seems well thought out, and I think it’s a major boon to have a smart, context-aware grammar checker built into operating systems used by billions.

While Apple’s foundation models — which run on-device or in the cloud via Private Cloud Compute depending on the complexity and length of the text, I surmise — are designed to modify writing users have already produced, ChatGPT was demonstrated writing stories and other creative works with just the click of a button and a prompt in the Writing Tools pane. People who use Apple devices shouldn’t have to go to the ChatGPT app or website anymore to have OpenAI’s chatbot write something or help them conduct research, because it’ll be built into the system. I think this is the most useful and clear example of Apple’s ChatGPT integration shown in the keynote. Apple is transparent about when it is sending a request to ChatGPT; even if a user explicitly asks for ChatGPT to handle the query, the system prompts them one more time to confirm and tells them that ChatGPT’s work may contain errors due to hallucinations. Still, I think this specific, intentional integration is more helpful than building a full-on GPT-4o interface into iOS, for instance.

Apple evidently wants to draw a boundary between ChatGPT and its own foundation models while having the partnership jibe with the rest of its features. It doesn’t feel out of place, but it’s easily an afterthought; I could envision Apple Intelligence without OpenAI’s help. Still, even relegated to a supporting role, OpenAI seems more than willing to trade free service for Apple customers for the exposure that comes with its logo appearing in front of billions. OpenAI wants to be to generative artificial intelligence what Sharpie is to permanent markers, and since Google is the company’s largest competitor, it’s operating on a “the enemy of my enemy is my friend” philosophy. As I’ve said before, OpenAI seems to be in the “spend venture capital like it doesn’t matter” phase of its existence, which is bound to be time-limited, but for now, Apple’s negotiators struck an amazing deal — free.

Part of me wants to think ChatGPT isn’t Apple Intelligence, but nevertheless, it is — it just happens to be a less-emphasized part of the overall package. I don’t mind that: In fact, I’m impressed Apple is able to handle this much of the processing by itself. I’m almost certain, based on what has been shown this week, that Apple will soon3 drop OpenAI as a partner and go it alone once its own models can generate full blocks of text, something it currently is not very confident in. But by offloading the pressure of text generation, Apple has also conveniently absolved itself of the difficult task of content moderation. As I wrote earlier in this article, Apple Intelligence will not refuse to improve a text, no matter how egregious or illegal it may be, because Apple understands that it is not the fault of the chatbot if the user decides to write something objectionable. I favor this approach, and while some naysayers might blame the company for “rogue” responses, I think the onus should be placed on the prompters rather than the robot. If ChatGPT were given the task of summarizing everything a user wrote, it would fail, because the safety engineering is hard-coded into the model. With Apple’s own LLMs, it isn’t.


Image Playground and Genmoji

The Image Playground app in iPadOS 18. Image: Apple.

In the last section, I commended Apple for taking a more laissez-faire approach to content moderation, something I usually wouldn’t commend a technology giant for. I think it is the responsibility of a multi-trillion-dollar corporation like Apple to minimize the social harm its products can do, which is why I’m both repulsed and irritated by its new image generation features, Image Playground and Genmoji. The two features are similar in that they (a) primarily handle prompting, i.e., they write a detailed prompt for the AI image generator from the user’s simple request; and (b) refrain from creating photorealistic imagery of people because of its high susceptibility to misuse. Both features are available system-wide but were primarily advertised in Messages due to their expressiveness, which leads me to believe that Apple felt pressured to ship an image generation feature and found a semi-sensible place to put it at the last minute. While Genmoji — terrible name aside — was leaked by Mark Gurman of Bloomberg earlier, Image Playground is novel, and information about it is scarce.

Genmoji — a portmanteau of “generated” and “emoji” — generates AI emojis based on a user’s prompt, then renders them like any other text so they fit in with the words and emojis in a message. I believe these synthetic emojis are only available in Messages because they aren’t part of the Unicode emoji standard, so Apple has to do the work to make them render properly and fit within the bounds of text as part of its proprietary iMessage protocol. If a person sends a Genmoji to an Android user, it will arrive as a normal image attached to the text message. A user can describe any combination of existing emojis, or even entirely new ones, such as a giant cucumber. Genmoji can also create cartoon-like images of people in one’s contacts, so a user can ask for a contact “dressed like a superhero,” for instance. Genmoji typically creates a few options from a prompt so a user can choose which one they’d like to use.

Image Playground is Apple’s version of OpenAI’s DALL-E or Midjourney: Users can create a “novel” image based on their description and choose from a variety of prompt suggestions that appear around a colorful bubble interface surrounding the generated image. The feature verges on a one-to-one copy of other AI image tools on the market, but perhaps with a more appealing, easy-to-use interface that proactively suggests additions to prompts. Users can also choose themes, such as seasons, to further customize the image — from there, they can save it to Photos or Files, or copy it. Image Playground isn’t limited to Messages and can be integrated into third-party apps via an application programming interface Apple is providing developers. There is also a dedicated Image Playground app, pre-installed on iOS devices, for people to easily describe, modify, generate, and share AI images. Users can also circle sketches they’ve drawn and turn them into AI-generated pieces with a feature called Image Wand, which is first coming to Notes. Like Genmoji, images made with Image Playground can resemble a person, drawing on data derived from personal photos.

The entire concept of AI-generated photography is abhorrent to me and many others, especially those who work in creative industries or draw artwork themselves. While Apple has addressed the safety concerns that arise from AI-generated artwork — the four pre-defined styles are intentionally not photorealistic, and each image carries internal metadata indicating it was generated by AI — it has not put to rest the concerns of artists alarmed by AI’s cheapening of their industry. Frankly, AI-generated artwork is disturbing, unrealistic, and not elegant to look at. It looks shoddily designed and of poor quality, with lifeless features and colors. If AI images looked like people had made them, a different problem would be at the forefront of the conversation, but currently, AI images are cheap, filthy creations. They’re not creative; they disincentivize and discourage creativity while inundating the internet with deceptive photos that trick people and feel spammy and artificial.

It’s tough to describe the feelings AI images cultivate, but they aren’t pleasant. And to add insult to injury, Apple hasn’t provided any information about how its models were trained, leaving open the possibility that real artists’ work was used without permission.4 I expect this kind of behavior from companies like OpenAI and Google, which have both habitually degraded the quality of good artwork and information, but not from Apple, whose late founder, Steve Jobs, proclaimed Apple stood at the intersection of technology and the liberal arts. The company has slowly but surely drifted away from the roots that made it so reputable in the first place, and it’s disheartening to observe. AI-generated art, whether presented with a cute bow and ribbon or on a desolate webpage littered with obnoxious advertisements, is neither technology nor liberal arts — it is slop, a word that at this rate should probably win Word of the Year.

I’m less concerned about the social justice angle many seem to have staked their beliefs on and more about the feelings this feature creates. Apple users, engineers, and designers all share the conviction that software should be beautiful, elegant, and inspiring, but oftentimes, the wishes of shareholders eclipse that essential ideal. This is one such eclipse — a misstep in the eyes of engineers and designers, but a boon to the pockets of investors. Apple has calculated that the potential uproar from a relatively small slice of its user base is worth the deep monetary incentives, and for the C-suite, that math works. Will Image Playground and Genmoji change the way people use and feel about their devices? Possibly, maybe for the better or maybe for the worse — but what they will do with resolute certainty is upend the value of digital artwork.


Photos

The Photos app in iOS 18. Image: Apple.

Apple, alongside all of its image generation efforts, also brought updates to photo editing and searching, similar to what Google announced in May. Users can search their photo libraries by “describing” what they’re looking for in natural language: This differs from Apple’s current implementation, where users can search only for individual items like lakes or trees, because now people can combine multiple queries and refine searches by adding specific details. Think of it as a chatbot that can use visual processing to categorize photos, because that’s exactly what it is. People can also generate videos called “memory movies” — short clips made from specific moments, assembled by AI and typically complemented with music and effects. The Photos app already creates Memories, which are similar, but this time, users can describe exactly what they’d like the video to be about. Examples include trips, people, or themes drawn from images.

The most appreciated feature ought to be the Clean Up tool, which works exactly like Google’s Magic Eraser, first introduced with the Pixel 6 and 6 Pro in 2021. Apple Intelligence identifies objects and people in the background of shots that might be distracting and offers to remove them automatically from within the Photos app. Users can also circle a distraction themselves, and the image will be recreated as if it weren’t there. Notably, this does not compete with Adobe’s Generative Fill or similar features — it doesn’t create what wasn’t already there. As I wrote earlier, Apple’s features aren’t whiz-bang demonstrations; they’re practical applications of AI in the most commonly used apps. I’d assume these features are powered solely by on-device processors, but they work on photos taken with any camera, not just an iPhone.

Unlike photo generation, photo editing is an area where generative AI can genuinely assist with the more arduous work. Photoshop has been able to remove objects from the backgrounds of photos for decades, but doing so requires skill and a large, powerful computer. Now, those powerful computers are in the pockets of millions, and there is no need to learn those skills except when the result truly matters. For the smallest of touch-ups, so many people are going to be empowered by an assistant that can perform these tasks automatically. Finding photos has always been hard, but now, Apple has essentially added a librarian to the photo library. Editing photos previously required skill and know-how; now, it’s just one tap. It’s little things like these that make the experience of using technology more delightful, and I’m glad to see Apple finally embracing them.


What Apple announced on Monday might not sound revolutionary at first glance, but keen observers will realize that the announcements will change how people use their devices. Technology shouldn’t do my artwork and writing for me so I can do the dishes — it should do the dishes so I can do my writing and artwork. Apple Intelligence isn’t doing anyone’s dishes yet, but it’s one step closer: It’s doing the digital version of them. Apple Intelligence subtly yet pervasively weaves itself into every corner of Apple’s beloved operating systems for a reason: people shouldn’t have to learn how to use the computer; the computer should learn from the user. For the first time, Apple’s computers are truly intelligent. Yes, I believe the company has misstepped in certain areas, like its image generation features, but the broad, overarching theme of Monday was that the computer is now learning from humans. The intelligence no longer lives in a browser tab or an app — it’s embedded in the devices we carry with us everywhere. The future is now, or, I guess, whenever Apple Intelligence goes into beta later this year.


  1. Apple said ChatGPT Plus subscribers can sign in with their accounts to gain access to quicker, better models. As I’ve said earlier, this partnership feels a lot like Apple and Google’s deal to bring Google Search, Maps, and YouTube to the iPhone. ↩︎

  2. “Sherlocked”: “The phenomenon of Apple releasing a feature that supplants or obviates third-party software…” ↩︎

  3. I don’t have a timeline for this prediction, but I believe it’ll happen within the next few years, especially if OpenAI demands payment when it runs out of VC money. That time is coming soon, and I think Apple will be ready to ditch both Google Gemini — if it adds it in the first place; Federighi didn’t confirm anything — and ChatGPT as soon as it owes either company enormous royalties. Apple wants to be independent eventually, unlike with search engines. See: iCloud Mail or Apple Maps. ↩︎

  4. Apple says Apple Intelligence was trained on a mix of licensed and public data from the internet. That public data most likely includes most websites since the user agent to disallow was only made public after Monday. Dan Moren of Six Colors wrote about how to disable Applebot-Extended on any website to prevent Apple from scraping its contents. ↩︎