While Apple’s assistive technology announcements this week are important, the question that remains unanswered is just how much they rely on the company's powerful Neural Engine.
The Neural Engine is a collection of specialized compute cores built into Apple Silicon chips. They're designed to execute machine learning and artificial intelligence tasks quickly and efficiently because the work takes place on the chip itself.
The company has dedicated huge resources to Neural Engine improvements since it first appeared in 2017. Apple Wiki points out that the A16 chip inside the iPhone 14 Pro delivers 17 trillion operations per second, up from 600 billion operations per second in 2017’s A11 processor.
So, how is Apple using the Neural Engine?
How Apple uses the Neural Engine
- Think about Face ID, animated Memojis, or on-device search for items such as images of dogs in Photos. Developers use the Neural Engine when they create apps that support Core ML, such as Becasso or Style Art (a minimal sketch of how an app opts in appears after this list). But the Neural Engine is capable of more, and that’s what Apple’s accessibility enhancements show.
- Think about Detection Mode in Magnifier. In that mode, your iPhone will recognize the buttons on items around your home, tell you what each button does, and help guide your hand. That’s powerful tech that relies on the camera, the LiDAR scanner, machine learning – and the Neural Engine on the processor.
- Think about the new Personal Voice feature that lets users create a voice that sounds like their own, one their device can then use to speak words they type. It is useful for people at risk of losing their ability to speak, and once again it relies on on-device speech analysis and the clever skills buried inside the Neural Engine.
These are all computationally intensive tasks. Each relies on on-device intelligence rather than the cloud, maintains privacy, and makes use of the dedicated AI cycles inside every Apple device.
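To make the developer side of this concrete, here is a minimal sketch (not Apple's internal code) of how a third-party app asks Core ML to run a bundled model on the Neural Engine; "StyleTransfer" is a placeholder name for any compiled model shipped inside the app, not a real Apple framework model.

```swift
import CoreML

// Minimal sketch: asking Core ML to schedule a bundled model on the Neural Engine
// where possible. "StyleTransfer" is a placeholder for any .mlmodelc in the app bundle.
func loadStyleModel() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // CPU, GPU, and Neural Engine; Core ML picks the fastest path
    // On recent OS releases, .cpuAndNeuralEngine keeps the work off the GPU entirely.

    guard let url = Bundle.main.url(forResource: "StyleTransfer", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url, configuration: config)
}
```

Apple's own features presumably reach the same silicon at a lower level; the point is simply that the decision about where the work runs is made on the device, not in the cloud.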
The Neural Engine can do much more
I don’t think these tasks come close to exhausting what the Neural Engine is capable of. For all the promise of this kind of AI, the race to make it run natively on edge devices has already begun, and Apple has put so much work into building its Neural Engine that it would seem strange if it didn’t have a few cards to play.
All the same, the ultimate ambition will, and must, be to deliver these technologies outside the data center. One of the less frequently discussed truths about generative AI is how much energy it takes to run. Any company that wants to constrain its carbon emissions and meet climate targets will want to run those tasks on the device rather than in a server farm. Apple is committed to meeting its climate goals, and the best way to achieve them while using similar tech is to develop on-device AI, which has a natural home on the Neural Engine.
If this is how Apple sees it, it isn’t alone. Google’s PaLM 2 proves that company’s interest, and chipmakers such as Qualcomm see edge processing of such tasks as an essential way to cut the costs of the tech. There are already numerous open-source language models capable of delivering generative AI features; Stanford University has managed to run one on a Google Pixel phone (albeit with added hallucinations), so running them on an iPhone should be a breeze.
It should be even easier on an M2 chip, such as those already used in Macs, iPads, and (soon) the Reality Pro.
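On Apple Silicon, one plausible path is converting such an open-source model to Core ML and running it entirely on device. The following is a hypothetical sketch of a single decoding step; "TinyLM.mlmodelc" and the feature names "input_ids" and "logits" are illustrative placeholders, not a real Apple model or API.

```swift
import CoreML

// Hypothetical sketch: one decoding step of a small, open-source language model
// converted to Core ML and run on device. Model name and feature names are illustrative.
func nextTokenLogits(modelURL: URL, tokenIDs: [Int32]) throws -> MLMultiArray {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML schedule eligible layers on the Neural Engine

    let model = try MLModel(contentsOf: modelURL, configuration: config)

    // Pack the prompt tokens into the 1 x N integer tensor the converted model expects.
    let input = try MLMultiArray(shape: [1, NSNumber(value: tokenIDs.count)], dataType: .int32)
    for (index, token) in tokenIDs.enumerated() {
        input[index] = NSNumber(value: token)
    }

    let features = try MLDictionaryFeatureProvider(dictionary: ["input_ids": input])
    let output = try model.prediction(from: features)

    guard let logits = output.featureValue(for: "logits")?.multiArrayValue else {
        throw CocoaError(.featureUnsupported)
    }
    return logits  // pick the highest-scoring token, append it, and repeat to generate text
}
```

In practice you would load the model once and reuse it across decoding steps, but the shape of the loop is the same: everything stays on the device, and the Neural Engine does the heavy lifting.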
One way to cut the cost of this kind of AI, while shrinking the language model and improving accuracy by protecting against AI-created "alternative facts," is to limit the technology to select domains. Those domains might be key office productivity apps, but also accessibility, enhanced user interface components, or augmented search experiences.
That seems to be the approach emerging across the industry, as developers such as Zoom look to integrate AI into existing products where it adds value rather than adopting a scattergun approach. Apple’s approach also shows a focus on key verticals.
When it comes to how Apple intends to develop its own AI technologies, it would be unwise to ignore the data the company may have gathered through its work in search over the last decade. Has Applebot really been just about deal-making with Google? Might that data contribute to the development of Apple’s own LLM-style model?
At WWDC, it will be interesting to see whether one way Apple intends to use AI is to power image-generation models for its AR devices. Is that form of no-code/low-code, AI-driven experience a component of the super-easy development environment we’ve previously heard Apple has planned?
In an ideal world, users would be able to harness the power of these new machine intelligence models privately, on their device and with minimal energy. And given this is precisely what Apple built Neural Engine to achieve, perhaps silly Siri was just the front end to a greater whole — a stalking horse with a poker face. We don’t know any of these answers yet, but it may be something everyone knows by the time the California sun sets on the special Apple developer event at Apple Park on June 5.
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.