Mark Gurman, reporting Sunday in his Power On newsletter for Bloomberg:

Against that backdrop, Apple’s health team is working on something that could have a quicker payoff — and help the company finally deliver on Cook’s vision. The initiative is called Project Mulberry, and it involves a completely revamped Health app plus a health coach. The service would be powered by a new AI agent that would replicate — at least to some extent — a real doctor.

The idea is this: The Health app will continue to collect data from your devices (whether that’s the iPhone, Apple Watch, earbuds, or third-party products), and then the AI coach will use that information to offer tailor-made recommendations about ways to improve health.

Gurman says two things of note in this story:

  1. This product will ship in iOS 19.4 with a “Coming Next Year” badge on Apple’s website. We all know how that goes.
  2. The agent is “doctor-like,” and I would assume it dispenses some kind of substantive medical advice.

What a terrible idea. Apple’s business is predicated on an astonishing level of trust between the company and its customers. To take one example: when Apple says it’s handling user data securely, we’re inclined to believe it. But if Google said the same thing using the same phrasing, hardly anyone would trust the claim. We simply trust that Apple runs its artificial intelligence servers on 100 percent renewable energy. We trust that Apple isn’t spying on us with Siri. We trust that Apple devices don’t lead us astray with factually incorrect information. We trust that Apple’s product timelines are accurate: software announcements in June, iPhones in September, and Macs in October.

But slowly, that reputation has been crumbling. Siri can’t even get the month of the year right. The more contextual version of the voice assistant is nowhere to be found, even though it was supposed to ship weeks ago. Apple Intelligence prioritizes and summarizes scam emails and text messages. Tim Cook, the company’s chief executive, is betraying every value Apple has to donate to a fascist for a quick buck. Customers’ trust in Apple is eroding quickly, and the company has done nothing to win it back.

Medical data is particularly sensitive. Apple users trust that the health records collected by their Apple Watches are end-to-end encrypted and stored in their iCloud accounts, shared with nobody without prior consent. Millions of women around the world — including in authoritarian, anti-freedom regimes like the Southern United States — trust Apple to keep their period-tracking data safe and away from the eyes of governments that wish to punish women for exercising the basic freedom to control their own bodies. And perhaps most importantly, every Apple Watch user trusts that the data coming out of their device is mostly accurate. If their Apple Watch says they need to see a doctor because an irregular heart rhythm was detected, people go. That feature has saved lives because it’s accurate. It would take only a few false positives for people to start ignoring it, and that hasn’t happened for a reason: Apple products are reliable and nearly always accurate.

But if Project Mulberry gives a factually inaccurate answer just once, Apple’s storied brand reputation is gone for good. And that’s just the view from a business executive’s standpoint; people could die from this technology. Sure, the latter concern hasn’t stopped other cheap Silicon Valley start-ups, but nothing really deters them from ugly business practices. Apple, on the other hand, is trusted by hundreds of millions of people to track their medical history. People will trust the Apple Health+ AI — especially elderly users who were never given the media literacy training needed to function in the 21st century. The people most likely to trust Apple are also those who could suffer the most because of it.

I don’t trust Apple anymore. Apple Intelligence content summaries are the worst AI content I’ve seen since that AI-generated video of Will Smith eating spaghetti. I’ve never once intentionally tapped an Apple Intelligence autocorrect suggestion in Messages. Writing Tools still strips my Markdown syntax for no apparent reason and falls considerably short of Grammarly. (It also crashes constantly.) Siri can’t even hand a query off to ChatGPT correctly — forget about it telling me when my mom’s flight will land. Can this company’s AI be trusted with medical data? What’s the rationale for trusting it? Who’s to say it won’t mix numbers up or be susceptible to prompt injection?

People go to school for decades to become doctors; it’s not an easy career. But even if Health+ is trained by real doctors, there’s no guarantee it won’t mix up the information it’s given. This is an inherent weakness of large language models, and it can’t be mitigated just by feeding the AI high-quality training data. And if these models are meant to run on personal computers like the iPhone, they probably won’t even be that good. Local AI models aren’t trustworthy, and even the ones run in massive data centers tend to get things wrong. If this feature ever ships at all, Apple will tout how the training data was vetted a million times over by the best doctors to ever exist on the planet. But LLM performance doesn’t necessarily correlate with training data quality; it’s also contingent on a model’s size, i.e., how many parameters it has, which is exactly what an on-device model gives up.

My guess is that Apple Health+ will run using Private Cloud Compute, if only to cut down on factual inaccuracies, but even so, it’s not guaranteed to produce good results. NotebookLM, Google’s AI research product, relies only on source data uploaded by the user, and even it occasionally gets things wrong. The point is that there’s no way to solve the problem of AI hallucinations until models understand the words they produce — a technology that plainly hasn’t been invented yet. Today, LLMs think in tokens, not English. They run an enormous amount of math to synthesize the next word in a sentence. They don’t think in words yet, and until they do, they’ll continue to make mistakes.
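To make that concrete, here is a minimal, purely illustrative sketch of what “synthesizing the next word” amounts to. The vocabulary and scores below are invented for the example; no real model is this small, but the mechanism (raw scores in, probability distribution out, a token picked) is the same:

```python
import math

# A toy vocabulary and made-up "logits" (raw scores) a language model
# might assign to the next token after "Take two aspirin and ..."
# Both lists are invented for illustration; they come from no real model.
vocabulary = ["call", "sleep", "overdose", "hydrate"]
logits = [3.1, 2.4, 0.3, 1.9]

# Softmax turns the raw scores into a probability distribution.
exps = [math.exp(score) for score in logits]
total = sum(exps)
probabilities = [e / total for e in exps]

for token, p in zip(vocabulary, probabilities):
    print(f"{token!r}: {p:.1%}")

# The sampler then picks a token from this distribution. Nothing in
# this pipeline "understands" medicine; it is arithmetic over scores,
# which is why a wrong-but-plausible token can still win.
```

Scale that toy up to a vocabulary of tens of thousands of tokens and tens of billions of parameters and you have a modern LLM: far better scores, same fundamental gamble.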

No matter how much work and training Apple puts into this AI doctor, it’ll never be as trustworthy as a real health professional. It’ll throw Apple’s reputation in the toilet, which, if we’re being honest, is probably where it belongs.