An Apple insider just revealed how iOS 18’s AI features will work
As Apple’s Worldwide Developers Conference (WWDC) inches closer, the chatter around the company’s AI work has taken a feverish turn. In a year when smartphone and computing brands have focused almost exclusively on AI features, Apple has been uncharacteristically quiet amid the hype, prompting concern that the brand is missing the train.
However, a new report has given us a closer look at how Apple’s AI dreams may come to fruition with its iOS 18 update later this year.
New details on Apple’s AI plans
It seems Apple is very much in the game, but with a slightly different approach than its rivals. “Apple has been developing a large language model — the algorithm that underpins generative AI features — and all indications suggest that it will be entirely on-device. That means the technology is powered by the processor inside the iPhone, rather than in the cloud,” reports Bloomberg.
Rumors of an internal tool code-named “AppleGPT” have been circulating for a while now. But it seems Apple hasn’t quite reached the level of finesse that the likes of Google and Microsoft (with OpenAI) have achieved with tools like Gemini, Copilot, and ChatGPT. This also explains recent reports claiming that Apple might license Gemini from Google for iPhones, just as Samsung and OnePlus have, instead of shipping a product that doesn’t quite stand out.
Or, to put it more accurately, it doesn’t yet live up to Apple’s standards. Generative AI tools, even those built atop the largest datasets out there, have failed in rather spectacular fashion in their early days. Google recently had to apologize for a damning flub in Gemini’s text-to-image system, and Meta’s AI is not far behind. Then there’s the whole storm brewing over copyright law, fair disclosure, and training transparency, which is something Apple would want to avoid.
But it seems that instead of selling the proverbial AI snake oil, Apple wants to take a more cautious approach. “Rather than touting the power of chatbots and other generative AI tools, Apple plans to show how the technology can help people in their daily lives,” adds the Bloomberg report.
Ever since ChatGPT arrived on the scene and kicked off an AI revolution, we have witnessed a flood of AI tools capable of everything from generating realistic pictures and cloning voices to making photorealistic videos from text and engaging in kinky chat as a virtual partner. Yet the biggest question is just how practically rewarding these flashy tricks are for the average consumer on a day-to-day basis.
But that doesn’t mean Apple is not trying to stand out in the AI race. Quite the contrary, actually. Over the past few months, Apple has released multiple research papers, including one documenting MGIE, an AI tool capable of editing images from plain-language instructions. Another details MM1, a multimodal large language model that opens the doors for “enhanced in-context learning and multi-image reasoning.”
How far is Apple in the AI race?
We recently dissected another piece of Apple research that focuses on AI making sense of on-screen content and assisting users accordingly. The following thread by an Apple engineer on X, formerly known as Twitter, details the progress Apple has made compared to rivals like Google’s Gemini AI model:
This is just the beginning. The team is already hard at work on the next generation of models. Huge thanks to everyone that contributed to this project!
— Brandon McKinzie (@mckbrando) March 15, 2024
Other papers have discussed AI within the purview of privacy and security, which is not surprising for Apple. The on-device approach mentioned above is central to that privacy pitch: running AI models on the device ensures that no data leaves the iPhone. That contrasts with sending user requests to a cloud server, a round trip that also slows down the whole human-AI interaction.
Plus, Apple already has the core hardware ready. The company has been shipping a neural processing unit (NPU) in iPhones since 2017. It’s dedicated AI accelerator hardware, working in the same vein as the Tensor Processing Unit (TPU) inside Google’s Pixel smartphones, which are now capable of running the Gemini model on-device. Interestingly, Apple also started laying the software foundations a while ago.
At WWDC 2022, the company released what it calls “an open-source reference PyTorch implementation of the Transformer architecture.” Transformers are the foundational architecture behind virtually all of today’s generative AI. This Financial Times article is an excellent (and accessible) explainer on the transformer technology, which originated in a Google research paper back in 2017, the same year we got an NPU inside the iPhone X.
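For readers curious about what a Transformer actually computes, its core operation is scaled dot-product attention: every token looks at every other token and blends their information by learned relevance. The NumPy sketch below is a minimal illustration of that idea only; it is not Apple’s implementation, and the function name and toy data are our own.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Core Transformer step: score each query against all keys,
    # turn scores into weights via softmax, then mix value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (tokens, tokens) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V  # each output row is a weighted mix of values

# Toy self-attention: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one mixed vector per input token
```

In a full model, this step is repeated across many layers and attention heads, which is exactly the kind of workload an NPU is built to accelerate.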
The latest Bloomberg report notes that Apple will offer a glimpse of its AI approach at WWDC 2024, which kicks off in June. Will an on-device generative AI approach finally make Siri smarter, the way Google has been trying to supercharge Google Assistant lately? Only time will tell.