Apple’s AI, Apple Intelligence, is boring and practical — that’s why it works

Artificial intelligence, for all its wonders, is starting to get a bad reputation among consumers. AI chatbots are prone to hallucinate — that is, to make up answers when they don’t know how to respond, confidently presenting incorrect information as fact. Google’s AI overhaul of Search went so poorly that Google had to admit it never meant to advise users to put glue on pizza or eat rocks, and it later rolled back the feature for some search queries after its many mistakes. Microsoft’s AI-powered recording feature, Recall, will now be switched off by default after security researchers found concerning flaws.

In this environment, launching an AI-powered iPhone could be seen as a risk. 

But with iOS 18, which Apple showed off at WWDC 2024, the company is taking a more cautious approach: Instead of overwhelming users with more AI features than they can count, the Cupertino tech giant is carefully rolling out AI where it believes the technology will be useful. That means the tech won’t show up where it could threaten the carefully crafted consumer experience of using an Apple device.

Not only is Apple rebranding AI as “Apple Intelligence” for its own purposes, but it’s also integrating the new AI features in iOS 18 in a more practical way.

Outside of some sillier additions like AI emoji, Apple Intelligence is coming to everyday apps and features, with additions like writing help and proofreading tools, AI summaries and transcripts, prioritized notifications, smart replies, better search, photo editing, and a version of “Do Not Disturb” that automatically understands what important messages need to come through, among other things. 

Combined, these features are perhaps not as exciting as a chatbot like ChatGPT that can respond to nearly any question, putting a world of knowledge scraped from the internet at your fingertips. Nor are they as mind-blowing, or as fraught with controversy, as tools that let you create AI images in any artist’s style.

Instead, Apple has defined the table stakes for what an AI-powered device should be able to do. 

For now, that means it should be able to help you make sense of what’s important from long bodies of text, whether notes, emails, documents or lots and lots of notifications. It should be able to make it easier to search for things using natural language queries, including what’s in your photos. It should be able to transcribe audio, find your grammatical and spelling errors, rewrite text in different styles, and suggest common responses. It should be able to perform basic photo edits, like removing unwanted objects or people from your pictures. And it should be able to make images upon request, but with serious guardrails in place. 

Presented this way, some of the new Apple Intelligence features don’t even feel like AI; they just feel like smarter tools.

This is an intentional move on Apple’s part. The company says it focused on use cases where it could identify specific, solvable problems, rather than taking on the complications that come with an open-ended AI chatbot. By narrowing its focus, Apple is more assured of delivering the results users expect, not hallucinations, and can limit the dangers and safety concerns that come with AI misuse and prompt engineering.

What’s more, Apple’s AI carefully straddles the line between offering guidance to the end user and serving as a source of independent creation — the latter of which doesn’t necessarily delight creators, a large demographic for Apple products. If you want to make your writing more concise or summarize an email, Apple Intelligence can help. If you want to shoot off a quick reply to an email, a suggested response may be useful, too. If, however, you want to conjure an entire bedtime story out of thin air, Apple will offer to hand you off to ChatGPT instead.

When it comes to image creation, the company follows a similar path. You can use Apple Intelligence to create images while texting a friend, but the feature relies on its understanding of the people and subjects in your conversation — and, presumably, it won’t prompt you to make an AI image if you’re texting about explicit or inappropriate topics. The same goes for adding images in other apps like Keynote, Pages and Freeform. Even in Image Playground, a new standalone AI image-generation app, you’re guided toward suggestions and limited to a handful of styles. You can’t make photorealistic deepfakes with Apple’s app, in other words.

If you want to ask Siri a question it doesn’t have the answer to, it can offer to hand you over to ChatGPT (with your consent). That way, you can explore the wider world of chatbots and the many answers they provide, if you choose. But when ChatGPT inevitably screws up, the mistake is on it, not on Apple.

In fact, much of what Apple offers isn’t a way to “chat” with an AI at all. Instead, it’s a way to leverage AI for narrow use cases where a click of a button can transform text, or where the AI intuitively knows what you need to see: an urgent notification of a text from your mom, not a coupon from DoorDash. The AI is often in the background or off to the side as a tool; it’s not the main user interface for doing what needs to be done. 

That’s where Apple Intelligence succeeds. It feels like a new layer on top of your existing apps, one that solves everyday problems (or perhaps just lets you have fun with emoji); it’s not trying to take over the world, the way experts and fleeing OpenAI execs keep warning us AI will eventually do. Outside of a few features — like Genmoji, which is just silly — Apple Intelligence feels boring and practical. That’s why it may actually work.

Apple Intelligence will launch in beta this fall.
