
📱 On-Device AI, Done Right 

Today at WWDC, Apple introduced something that could meaningfully shift how we build AI: a framework that puts intelligence on-device, offline, and under user control. 

It’s called the Foundation Models framework, and it could open up powerful new possibilities for developers committed to building human-centered, privacy-respecting tools powered by AI.

The big picture: For the first time, Apple’s on-device LLM isn’t just powering Siri; it’s in our hands. We can now tap directly into the same foundation models to build fast, private, intelligent features inside our own apps.

This new framework gives us access to the same models powering features like Visual Intelligence, Genmoji, Live Translation, natural-language search, and mail summaries — all running directly on-device. That means no cloud inference, no user data leaving the phone, and no cost per request. Just fast, private, real-time AI powered by Apple Silicon.
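To make that concrete, here is a minimal sketch of what prompting the model looks like in Swift, based on the session-style API Apple demonstrated at WWDC. The summarize helper and its prompt text are my own illustration, so treat the exact names as approximations until you've checked Apple's documentation.

```swift
import FoundationModels

// A minimal sketch of prompting the on-device model, assuming the
// session-based API shown at WWDC. The summarize(_:) helper and its
// prompt are hypothetical illustrations, not Apple's sample code.
func summarize(_ noteText: String) async throws -> String? {
    // The model isn't available on every device (or while the system
    // is still downloading it), so check before offering the feature.
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }

    // A session holds the conversation context. Inference runs
    // entirely on Apple Silicon: no network round trip, no
    // per-request cost, no data leaving the device.
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize this note in one sentence: \(noteText)"
    )
    return response.content
}
```

Apple also previewed guided generation (typed Swift output instead of raw strings) and tool calling, so a plain-text round trip like this is just the entry point.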

If you’re an iPhone user, this is one of those rare cases where our needs — privacy, speed, and simplicity — align directly with Apple’s. They want to sell the chip and the device. We want the intelligence without sacrificing control or giving up our data.

For those of us focused on harnessing the power of AI in ways that align with and support our humanness, this lands as the most encouraging news I’ve heard recently from any of the Big Tech companies. I’ve already started envisioning how the Foundation Models framework can serve a few projects already in flight in our Creative Powerup community.

If you want to dig deeper, check out the WWDC recording at the 6:00 mark (start at the 5:00 mark for more context about Apple’s AI strategy).

For more technical depth, check out this article on Apple’s Machine Learning site. As a major proponent of human-friendly, privacy-respecting AI development, I’m encouraged by Apple’s “Focus on Responsible AI Development” statement. To save you the jump, I’ve pasted it below…

👇 👇 👇

Our Focus on Responsible AI Development

Apple Intelligence is designed with our core values at every step and built on a foundation of groundbreaking privacy innovations.

Additionally, we have created a set of Responsible AI principles to guide how we develop AI tools, as well as the models that underpin them:

  1. Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals.
  2. Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.
  3. Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback.
  4. Protect privacy: We protect our users’ privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users’ private personal data or user interactions when training our foundation models.

These principles are reflected throughout the architecture that enables Apple Intelligence, connects features and tools with specialized models, and scans inputs and outputs to provide each feature with the information needed to function responsibly.

The journey continues… 

Apple’s new Foundation Models framework opens a new chapter for AI development: one where privacy, performance, and purpose aren’t in conflict. I’ll be mapping out what this means in practice and sharing what emerges as we build forward.