Apple bet its AI future on Gemini. Here’s how it can reimagine the iPhone for you

One of the biggest announcements in the tech world, between two of the biggest tech companies on the planet, was condensed into a brief joint statement of fewer than a hundred words. Apple announced that Google’s Gemini will power the rebirth of the Siri assistant, as well as the framework behind the AI software experiences on iPhones and Macs.

“These models will help power future Apple Intelligence features, including a more personalized Siri coming this year,” the company said. This is a huge win for Google, great news for Apple device users, and a tacit admission that Apple couldn’t keep pace in the AI race with Google, Meta, or OpenAI.

The writing has been on the wall for a while. At one stage, Apple was reportedly testing Anthropic’s Claude and OpenAI’s GPT models to power Siri. But eventually, the company went with Google, which is a massive validation of Gemini’s capabilities. Let’s break down what likely comes next for the millions of iPhone users like you and me.

So, uh, privacy?

With AI, there’s one big dilemma that is hard to overlook. AI chatbots dig deeper into our lives than social media ever did. Chatbots have access to our email, calendar, gallery, files, and, of course, our day-to-day thoughts. Experts are already grappling with the rising problem of deep human-AI emotional attachment.

But that’s not the end of it. Every time we summon an AI chatbot, data is sent to a company’s server for processing. In a fair few cases, it is stored for model training or safety review, and you can’t always opt out. The solution? On-device AI. For example, Gemini Nano is an on-device model that runs on the local silicon of your phone or PC.

No data ever leaves your phone, but on-device AI is slower and less capable. For media-related chores and other demanding tasks, cloud processing is mandatory. So, are you ready for that now that Google, given its data-collection history, is powering the AI experiences on your iPhone and Mac? Well, Apple already has a solution, and it has been pretty clear about privacy.

“Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards,” says the company. That means your data and AI interactions will only be routed through Private Cloud Compute servers, which rely on custom Apple silicon and the company’s own security-first operating system. 

“We believe PCC is the most advanced security architecture ever deployed for cloud AI compute at scale,” claims Apple. With PCC, data is encrypted as soon as it leaves your phone. And once the assigned task is completed, the user request and material shared are deleted from the servers. 

No user data is retained, and none of what lands on the cloud servers is accessible even to Apple. Gemini simply provides the intelligence to process your text or voice commands. All the work that follows is handled safely on Apple’s secure servers, instead of going to Google.

What next? 

If you’ve ever used Gemini, and then asked Siri to handle the same tasks (and watched it fail), you will know the difference. The latest Google-Apple partnership is closing that gap. And more importantly, it’s giving Apple the fodder to offer its own unique AI experiences. 

Broadly, the underlying Gemini AI framework will enhance Siri and Apple Intelligence. How exactly? That’s unclear, because Apple won’t simply do a copy-paste job. You likely won’t see any overt Gemini branding when these next-gen AI features arrive on your iPhone.

Apple is just borrowing the brains. The body and behavior will be your usual Apple affair.  

Yet, if you compare what Gemini can already accomplish on Android phones — and what Siri can’t — you can get a taste of the progress coming to your iPhone, iPad, and Mac. You see, Apple is not merely using Gemini’s underlying AI tech for Apple Intelligence and Siri. It runs much deeper. 

Apple will be using Google’s AI toolkit for the next generation of Apple Foundation Models. Think of these models as the secret sauce that enables Apple Intelligence features such as summarization, writing tools, image generation, and even cross-app actions.

These models, which were introduced in 2024, can run either locally on a device (without requiring an internet connection) or on Apple’s cloud servers. A year later, Apple introduced updated versions that were faster, better at processing media and understanding language, and supported more languages.

The big draw was that the Foundation Models framework would let developers tap into these on-device AI smarts and boost the user experience. Imagine opening Spotify, and instead of doing manual work, you pull up Siri and give it a command like “create a playlist with my most listened songs this month.” 

That is not possible on iPhones, yet. 
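Still, for a taste of what the Foundation Models framework already gives developers, here’s a minimal sketch of calling Apple’s on-device model from an app. The function name, instructions, and prompt are hypothetical illustrations; the framework types are Apple’s own:

```swift
import FoundationModels

// A minimal sketch: ask the on-device Apple foundation model for a
// playlist idea. Function name and prompt wording are hypothetical.
func suggestPlaylist() async throws -> String {
    // Bail out gracefully on hardware where the on-device model isn't available.
    guard case .available = SystemLanguageModel.default.availability else {
        return "On-device model unavailable on this device."
    }

    // A session keeps context across turns; instructions steer the model's behavior.
    let session = LanguageModelSession(
        instructions: "You turn listening habits into playlist suggestions."
    )

    // The request is processed entirely on-device; nothing leaves the phone.
    let response = try await session.respond(
        to: "Suggest a playlist theme based on the songs I played most this month."
    )
    return response.content
}
```

The point isn’t the playlist; it’s that third-party apps can call the same on-device model Apple uses, and that model layer is exactly what the Gemini deal upgrades.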

Another weakness is Siri’s limited inherent intelligence. Every time you ask a question that goes beyond basic queries, Siri offloads it to ChatGPT. With Gemini on Android devices, such as the Google Pixel 10 Pro, answers are offered instantly, and tasks can be executed in other apps seamlessly.

For example, I can tell Gemini to “send a text to Saba on WhatsApp, asking about her class status for the day,” and it will comply by sending a text to my sister in the messaging app. Google’s Workspace apps and services are already well integrated, letting users handle tasks across Gmail, Calendar, Drive, and other services with voice commands.

Finding information about a travel booking in my inbox, looking up a file’s contents, or simply checking my calendar schedule: Gemini does it all. Siri is nowhere close to this level of convenience. And this is where Gemini comes to the rescue for Apple, as well.

A whole new start

Apple clearly notes that Gemini will power the “next generation of Apple Foundation Models.” That means Siri will be able to understand natural language commands far better than in its current robotic state, and handle tasks on the iPhone itself. Plenty of benefits can come out of this Gemini brain transplant.

The universal search system on an iPhone or Mac will improve and become more conversational. Tasks within Apple apps, such as Notes, Music, and Mail, can be handled with voice or text commands without ever opening those apps. And most importantly, the same will work across other apps as well.

With App Intents, the company already has the framework ready to get work done across third-party apps. It hasn’t quite caught on yet, probably because the available AI models weren’t deemed smart enough by developers. With Gemini powering the on-device AI actions, more developers will confidently embrace conversational AI-powered actions in their apps. 
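To make that concrete, here’s a minimal sketch of the kind of App Intent a third-party music app might expose so Siri can act on it without the app being opened. The intent, its parameter, and the playlist logic are hypothetical; the AppIntents types are Apple’s real framework:

```swift
import AppIntents

// A hypothetical intent a music app could register so Siri and
// Shortcuts can trigger it by voice, without opening the app.
struct CreateMonthlyPlaylistIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Monthly Playlist"

    // Siri can fill this parameter from the user's spoken command.
    @Parameter(title: "Playlist Name")
    var name: String

    // Runs in the background when the intent is invoked.
    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would assemble the playlist from listening history here.
        return .result(dialog: "Created \(name) from this month's top songs.")
    }
}
```

Once apps ship intents like this, a smarter Siri only has to understand the request and route it; the app does the rest.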

Imagine Siri working for you across apps, without ever opening those apps. On an iPhone, you can already get a taste of how it works. Open the ChatGPT app, enable app connectors, and with natural language commands, you can perform tasks across dozens of apps, including Apple Music. 

But there’s a caveat. You are linking another app (via login) with ChatGPT, which means OpenAI learns more about you. When the same task is executed using a built-in OS-level framework, the privacy risk is theoretically smaller. Plus, the whole workflow will be more seamless.

Apple can ape Google’s Gemini strategy in a lot of other ways. It simply has to deploy Siri across its own apps, but in a less intrusive and more thoughtful fashion than the mindless cramming we have seen with Copilot, Alexa+, and yes, Gemini itself. Apple is good at this part, and I am pretty excited to see the company’s AI vision unfold later this year.

There’s plenty that Apple can simply learn from Gemini’s execution across Android and the web via Google services. And now that it has the Gemini brain in hand, it can modify and integrate it within its own apps and services, in signature Apple fashion.

The big question is: where does that leave ChatGPT, which is already at the heart of Apple Intelligence? We’ll know more in the coming months, most likely at Apple’s next developers conference in June. But for now, the future of Siri (and AI on Apple hardware) looks brighter than ever for average users like you and me.
