The One Design Principle That Makes AI Feel Like Magic

Great AI feels like magic because it translates intent into outcomes without making the user do the translation. You ask for something and receive a direct, convincing result, not a list of links to sift through or a set of tools for assembling the answer yourself.

Historically, software gave us interfaces. Search bars, filters, menus, creation tools. Users were responsible for expressing intent in the system’s language, navigating complexity, and choosing from static outputs.

AI changes that responsibility model. Intent resolution shifts from the user to the system. This shift from user-led navigation to system-led resolution is the defining design change of AI products.

I. The One Design Principle

AI can summarize, generate content, reason, and automate tasks. But at the product level, it consistently does one thing well: it resolves intent, quickly.

It does this through two capabilities working together: improved intent capture and personalized outputs. When these align, users stop translating themselves to machines and start receiving outcomes that feel immediately usable.

This principle is the standard against which AI products can be judged. When it is violated, products fail, regardless of model quality.

What’s wrong with AI products today

The most common AI product failures come from neglecting this principle. They apply AI without identifying what user intent, if any, the system is meant to resolve.

These failures tend to fall into three categories:

  1. AI where no intent gap exists: For example, Instagram Stories “AI-enhanced replies.” There is no meaningful intent to resolve in a one-line social response.
  2. AI that conflicts with the user’s intent: For example, AI injected into the WhatsApp search bar, where the user’s intent is simply to find a chat or contact, not to “ask” anything.
  3. Forcing an AI product to do what it is not designed to do: For example, treating chatbots as a universal interface for complex, multi-stage work like writing, where intent, structure, and taste have not yet formed.

To understand how intent resolution shapes design, we need to look at how intent is formed, and how it's best resolved.

II. Understanding Intent

Intent is not a single query. It forms across layers:

  • Character: stable preferences and taste.
  • Context: what matters right now, such as time and location.
  • User query: what the user is trying to resolve in the moment.

Historically, systems captured these layers poorly, relying on crude history for character, making little use of context, and demanding precise keywords for the query. Today, they can infer character, reason about context, and resolve a query without exact phrasing. A Spotify user at 10 p.m. on a Friday is not asking for “Top 50,” but for something that fits their taste, their end-of-week emotional state, and a direction that is still taking shape.
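
As a rough sketch, these layers can be modeled as separate inputs that a resolver reads together rather than relying on the query alone. All type and field names below are illustrative, not drawn from any real system:

```typescript
// Illustrative model of the three intent layers (all names hypothetical).

// Character: stable preferences and taste.
interface Character {
  favoriteGenres: string[];
  energyPreference: "calm" | "energetic" | "mixed";
}

// Context: what matters right now.
interface Context {
  localTime: Date;
  dayOfWeek: string;
}

// Query: what the user is trying to resolve in the moment.
interface UserQuery {
  text: string; // may be vague or underspecified
}

interface Intent {
  character: Character;
  context: Context;
  query: UserQuery;
}

// The resolver reads all three layers, not just the query text.
function resolvePlaylist(intent: Intent): string {
  const lateFriday =
    intent.context.dayOfWeek === "Friday" && intent.context.localTime.getHours() >= 21;
  const genres = intent.character.favoriteGenres.join(", ");
  return lateFriday && intent.character.energyPreference !== "calm"
    ? `end-of-week mix built around ${genres}`
    : `familiar mix built around ${genres}`;
}
```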

Intent also unfolds along a convergence spectrum. A user may start with a vague dissatisfaction, “I’m not happy with my look,” move toward a general direction, “I want to change my style,” narrow into a category, “I want new shoes,” and eventually reach a concrete request, “I want black ankle boots.” Great AI adapts to where the user is: executing when intent is clear, guiding when it is forming, and supporting exploration when it is vague.

Products fail when they assume high convergence too early, a common issue in chat interfaces that produce confident outputs before the user’s intent has fully formed.
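
A minimal sketch of that adaptation, with hypothetical stage and mode names that mirror the shoe example above:

```typescript
// Where the user sits on the convergence spectrum...
type Convergence = "vague" | "direction" | "category" | "concrete";

// ...determines how the system should behave.
type Mode = "explore" | "guide" | "execute";

function chooseMode(stage: Convergence): Mode {
  switch (stage) {
    case "vague":      // "I'm not happy with my look"
    case "direction":  // "I want to change my style"
      return "explore"; // support open-ended exploration
    case "category":   // "I want new shoes"
      return "guide";   // narrow down with targeted questions
    case "concrete":   // "I want black ankle boots"
      return "execute"; // deliver a complete, personalized result
  }
}
```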

Two types of intent

Not all intent should be resolved the same way. There are two fundamentally different classes.

Problem-based intent is objective. Goals are clear, success criteria are explicit, and resolution can be optimized. Tasks like coding, logistics, ad optimization, and many enterprise workflows resolve well because evaluation is stable and mistakes are easy to detect.

Taste-based intent is subjective. Goals are ambiguous, preferences are latent, and success is as much psychological as it is functional. Domains like music, fashion, travel, and entertainment succeed when users stop second-guessing, not when the system finds a “best” answer.

Treating taste-based intent like problem-based intent produces confident outputs that feel wrong, and quickly erodes user trust. Chat interfaces make this failure especially visible: follow-up suggestions arrive too directly, before the user has finished expressing their taste, and feel more like intrusion than assistance.
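
One way to read the distinction in code, again with hypothetical types: problem-based intent carries an objective success check, while taste-based intent has none, so its resolution loop must end with user confirmation rather than optimization:

```typescript
// Problem-based intent: success can be checked automatically.
interface ProblemIntent {
  kind: "problem";
  goal: string;
  successCheck: (output: string) => boolean;
}

// Taste-based intent: no objective check exists; the user decides.
interface TasteIntent {
  kind: "taste";
  goal: string;
}

type AnyIntent = ProblemIntent | TasteIntent;

function evaluate(intent: AnyIntent, candidate: string): "done" | "needs-user" {
  if (intent.kind === "problem") {
    return intent.successCheck(candidate) ? "done" : "needs-user";
  }
  // Taste-based: never declare victory on the user's behalf.
  return "needs-user";
}
```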

III. Resolving Intent

Capture progressively, but assertively

The system should make its intent-resolution process clear from the start, including what it is trying to resolve and why each input is needed. Every interaction should feel necessary and move the user closer to an outcome, supported by a UI designed for progression rather than open-ended chat.

When input is required, it should be justified, natural, and low-friction. A good salesperson does the same: they explain the process, assess relevant factors like fit or structure, and guide the decision with expertise. The product should signal that it is resolving intent through best practices, not asking arbitrary follow-up questions.

Deliver complete outputs

If a user has to Google after you “resolved” a task, intent was not resolved. A complete output gives the user enough information to commit. AI makes this possible by moving beyond static lists to personalized outputs that reflect what actually matters to the user. Completeness is not about volume, but about relevance.

Choosing an Airbnb has no single correct answer. A convenience-driven user needs commute clarity. A fun-driven user needs neighborhood fit. A budget-driven user needs explicit trade-offs. Especially on a mobile screen, completeness means prioritizing the right information, not showing everything.
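
A small sketch of what that prioritization might look like, with hypothetical drivers and listing fields:

```typescript
// What drives this user's decision.
type Driver = "convenience" | "fun" | "budget";

interface Listing {
  name: string;
  commuteMinutes: number;
  neighborhoodVibe: string;
  totalPriceWithFees: number;
}

// On a small screen, completeness means surfacing the detail that lets
// this user commit, not every detail.
function summaryLine(listing: Listing, driver: Driver): string {
  switch (driver) {
    case "convenience":
      return `${listing.name}: ${listing.commuteMinutes} min commute`;
    case "fun":
      return `${listing.name}: ${listing.neighborhoodVibe} neighborhood`;
    case "budget":
      return `${listing.name}: ${listing.totalPriceWithFees} total, fees included`;
  }
}
```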

For taste-based decisions, completeness goes even beyond similarity or utility. People commit based on meaning, context, and self-association, not just functional fit. Music is one example: a track can matter because of who made it or what it represents, not just how it sounds. By surfacing information about the track most likely to resonate with the user, systems reduce uncertainty and make confident adoption more likely.

Case study: Writing

Writing with AI appears solved, yet it still feels wrong most of the time. You ask a chatbot to write a document and it delivers one, but you end up editing most of it, defeating the purpose. The system resolved intent too early, acting as if the goal was clear when it was still forming, which is why the interaction requires constant correction.

Writing is a taste-based, multi-stage task. Accuracy matters, but so do angle, structure, tone, and point of view. These decisions emerge progressively and cannot be reliably conveyed in a single prompt. A system that executes before intent has sufficiently converged will almost always produce the wrong result. This is not a prompting failure, it is a product design failure.

A better writing system would delay execution and build intent in stages. It would first help surface the core argument, then explore structural options, guide positioning and tone, and only then produce text. Execution happens once intent crosses a confidence threshold, with the user steering at each stage, rather than correcting a premature draft afterward.
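
A sketch of that staged flow, with hypothetical stage names and an illustrative confidence threshold:

```typescript
// Intent is accumulated across stages; drafting is gated, not immediate.
type Stage = "argument" | "structure" | "tone" | "draft";

interface WritingIntent {
  coreArgument?: string;
  structure?: string[];
  tone?: string;
}

// Rough estimate of how much of the intent has actually been settled.
function confidence(intent: WritingIntent): number {
  let score = 0;
  if (intent.coreArgument) score += 0.4;
  if (intent.structure && intent.structure.length > 0) score += 0.3;
  if (intent.tone) score += 0.3;
  return score;
}

const EXECUTION_THRESHOLD = 0.9; // illustrative, not a known constant

function nextStage(intent: WritingIntent): Stage {
  if (!intent.coreArgument) return "argument";                               // surface the core argument
  if (!intent.structure || intent.structure.length === 0) return "structure"; // explore structural options
  if (!intent.tone) return "tone";                                           // positioning and tone
  return "draft";
}

function shouldDraft(intent: WritingIntent): boolean {
  // Execute only once intent has converged past the threshold.
  return nextStage(intent) === "draft" && confidence(intent) >= EXECUTION_THRESHOLD;
}
```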

IV. Design Implications

These design rules apply to both consumer and enterprise systems. Consumer intent often converges quickly. Enterprise intent forms across extended workflows, trade-offs, and alignment (see the AI-augmented organization post). The surface changes. The principle does not.

Beyond Apps

Intent resolution is the primitive of the technology, and it extends beyond individual products to operating systems. Apps exist mainly because legacy operating systems are navigation systems: users have to decide where to go before they can act. AI-native systems reverse this. A request like “places to stay in Madrid” should produce complete, personalized options directly, not send the user through apps or links. Apps still exist. Navigation becomes secondary. The real OS-level value is in routing tasks to outcomes, not in surface features like notification summaries.

AI devices should also follow the same intent-resolution principle, but their form factor determines how well they can apply it. They only work when they are seamless and context-aware, and they fail when they try to replace the phone without matching its ability to capture intent and resolve it quickly, as seen with products like the R1 or the Humane Pin. Glasses are the strongest candidate because they support both real-time capture and on-screen resolution in a single surface.

Subtler devices, such as a ring for users who want less bulky hardware, only make sense if intent capture is separated from resolution. Without a screen, they should capture intent and defer resolution to the phone, for example snapping a photo of a jacket to explore its style later, or recording a quick voice note to develop an idea later. In this model, devices do not replace the phone; they capture intent on the go and let the user resolve it later, which is what they would have done anyway.

Earning Agency

From these rules, it follows that building fully autonomous, agentic products is difficult early on. This is especially true in taste-based domains, where there is no objective truth to optimize against, so an agent cannot reliably resolve users’ intent on its own.

At this stage, AI agency is earned as much as it is built. Users accept autonomy only after a system reliably resolves intent without rework, similar to how trust develops with a friend whose recommendations prove right over time.

Products should therefore be designed to earn agency gradually. They should start assistive and transparent, move to semi-agentic behavior with light oversight, and become fully agentic only once the user chooses to delegate. The key metric is not autonomy, but how often the user needs to intervene.
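
As a sketch, that escalation can be driven by a single hypothetical metric, the intervention rate, plus an explicit opt-in from the user (thresholds below are illustrative):

```typescript
type AutonomyLevel = "assistive" | "semi-agentic" | "agentic";

interface TrustRecord {
  tasksCompleted: number;
  interventions: number;          // times the user had to correct or redo the output
  userOptedIntoAutonomy: boolean; // full delegation is the user's choice
}

// The key metric: how often the user needs to intervene.
function interventionRate(r: TrustRecord): number {
  return r.tasksCompleted === 0 ? 1 : r.interventions / r.tasksCompleted;
}

function allowedAutonomy(r: TrustRecord): AutonomyLevel {
  const rate = interventionRate(r);
  if (r.userOptedIntoAutonomy && r.tasksCompleted >= 50 && rate < 0.05) {
    return "agentic";      // fully agentic, once the user chooses to delegate
  }
  if (r.tasksCompleted >= 10 && rate < 0.2) {
    return "semi-agentic"; // acts with light oversight
  }
  return "assistive";      // transparent, user drives every step
}
```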

Problem-based intent reaches acceptable agency faster because users primarily care about outcomes. Taste-based intent builds trust more slowly as shared judgment develops.

V. Conclusion

The real opportunity is not to add AI everywhere, but to focus on areas with large, unresolved intent gaps, places where users are still translating themselves into interfaces, stitching tools together, or second-guessing outcomes. Adding AI summaries or incremental features where there is barely any intent to resolve does not create leverage. It creates noise.

The industry has been in a phase of rapid experimentation, shipping features quickly and iterating to learn what works, which is appropriate at this stage. As understanding matures, the opportunity shifts toward coherence. This calls for leadership that can make thoughtful trade-offs about where AI is most valuable, and for storytelling that both engages users and helps set shared direction. When that direction is clear, teams across an ecosystem can align around common goals. Without it, even powerful AI risks becoming a set of clever but disconnected features.