Design Patterns For AI Interfaces


Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick. More design patterns in our Smart Interface Design Patterns, a friendly video course on UX and design patterns by Vitaly — from complex data tables and nested filters to FAQs and error messages.

So you need to design a new AI feature for your product. How would you start? How do you design flows and interactions? And how do you ensure that the new feature doesn’t get abandoned by users after a few runs?

In this article, I’d love to share a very simple but systematic approach to how I think about designing AI experiences. Hopefully, it will help you get a bit more clarity about how to get started.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon.

The Receding Role of AI Chat

One of the key recent shifts is a slow move away from traditional “chat-like” AI interfaces. As Luke Wroblewski wrote, when agents can use multiple tools, call other agents, and run in the background, users orchestrate AI work more — there’s a lot less chatting back and forth.

Messaging UI slowly starts feeling dated, and chat UI fades into the background. By Luke Wroblewski.

In fact, chatbots are rarely a great experience paradigm — mostly because the burden of articulating intent efficiently lies on the user. And in practice, articulating intent well is remarkably difficult and very time-consuming.

Chat doesn’t go away, of course, but it’s being complemented with task-oriented UIs — temperature controls, knobs, sliders, buttons, semantic spreadsheets, infinite canvases — with AI providing predefined options, presets, and templates.

Agentic AI design patterns, with more task-oriented UIs rather than chat. By Luke Wroblewski.

There, AI emphasizes the work, the plan, the tasks — the outcome — instead of the chat input. The result is an experience that sprinkles a bit of AI exactly where it delivers real value to real users.

To design better AI experiences, we need to study five key areas that we can shape: input, output, refinement, actions, and integration.

Input UX: Expressing Intent

Conversational AI is a very slow way of helping users express and articulate their intent. Usability tests show that users often get lost in editing, reviewing, typing, and re-typing. It’s painfully slow, often taking 30–60 seconds for a single input.

As it turns out, people have a hard time expressing their intent well. In fact, instead of writing prompts manually, it’s a good idea to ask AI to write a prompt to feed itself.
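Here’s a minimal sketch of what that could look like in practice, assuming a generic complete() function as a stand-in for whatever LLM client the product uses: the model first drafts a detailed prompt from the user’s rough intent, and only then runs it.

```ts
// Stand-in for whichever LLM API the product uses; stubbed for the sketch.
async function complete(prompt: string): Promise<string> {
  throw new Error("wire up your LLM client here");
}

async function runWithGeneratedPrompt(roughIntent: string): Promise<string> {
  // Step 1: ask the model to write the prompt it would like to receive.
  const expandedPrompt = await complete(
    `Rewrite the following rough request as a detailed, unambiguous prompt.
Include the goal, constraints, output format, and tone.
Request: "${roughIntent}"`
  );

  // Step 2 (optional): show expandedPrompt to the user for a quick edit
  // before spending time and tokens on the real run.

  // Step 3: feed the generated prompt back to the model.
  return complete(expandedPrompt);
}
```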

Flora AI allows you to modify images and videos via nodes.

With Flora AI, users can still write prompts, but they visualize their intent with nodes by connecting various sources visually. Instead of elaborately explaining to AI how we need the pipeline to work, we attach nodes and commands on a canvas.

With Krea.ai, users can move abstract shapes (on the left) to explain their goal to AI and study the outcome (on the right).

Typing precise input for AI is slow and challenging. Instead, we can abstract away the object we want to manipulate and give AI precise input by moving that abstracted object on a canvas. That’s what Krea.ai does.

In summary, we can minimize the burden of typing prompts manually — with AI-generated pre-prompts, prompt extensions, query builders, and voice input.

Output UX: Displaying Outcomes

AI output doesn’t have to be merely plain text or a list of bullet points; it should drive people to insights faster. For example, we could visualize the output and create additional explanations based on the user’s goal and motivations.

Visualizing outcomes through style lenses. By Amelia Wattenberger.

For example, Amelia Wattenberger visualized AI output for her text editor PenPal by adding style lenses through which to explore the content. The output can be visualized by sentence length, or along scales such as Sad–Happy and Concrete–Abstract.

Aino.ai, an AI GIS analyst for urban planning.

The outcome could also be visualized on a map, which, of course, is expected for an AI GIS analyst. Users can also access individual data layers, turning them on and off to explore the data on the map.

We can also use forced ranking and prioritization to suggest the best options and avoid choice paralysis — even if a user asks for the top 10 recommendations. And we can think about ways to present results as a data table, a dashboard, a visualization on a map, or a structured JSON file, for example.
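As a rough sketch, here’s what such a structured contract could look like. The Recommendation shape below is hypothetical, not a standard: the model is asked to return JSON matching the type, and the UI surfaces only the top three options by default.

```ts
// Hypothetical shape for machine-readable output; define whatever your UI needs.
type Recommendation = {
  rank: number;   // forced ranking: 1 = best match
  title: string;
  reason: string; // one-sentence justification, e.g. shown on hover
  score: number;  // 0–1 confidence, used for sorting and badges
};

function parseRecommendations(raw: string): Recommendation[] {
  const items = JSON.parse(raw) as Recommendation[];
  // Surface only the top three by default to avoid choice paralysis,
  // even if the user asked for a top-10 list; reveal the rest on demand.
  return items.sort((a, b) => a.rank - b.rank).slice(0, 3);
}
```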

Refinement UX: Tweaking Output

Users often need to cherry-pick some bits from the AI output and bring them together in a new place — and often they need to expand on one section, synthesize bits from another section, or just refine the outcome to meet their needs.

Adobe Firefly suggests options and sliders to adjust the outcome.

Refinement is usually the most painful part of the experience, with many fine details left for users to explain elaborately. But we can use good old-fashioned UI controls like knobs, sliders, and buttons to improve that experience, similar to how Adobe Firefly does it (image above).
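As an illustration, here’s one way such controls could be wired up. The control names and ranges are made up for the sketch, not Firefly’s actual parameters: the slider state is translated into a compact, structured instruction instead of free-form prose.

```ts
// Hypothetical refinement controls; the names and ranges are illustrative.
type RefinementControls = {
  styleStrength: number; // slider, 0–100
  colorfulness: number;  // slider, 0–100
  preset?: "photo" | "illustration" | "3d"; // preset buttons
};

function toRefinementInstruction(c: RefinementControls): string {
  // Translate control state into a compact, deterministic instruction
  // that is appended to the original prompt on every regeneration.
  const parts = [
    `style strength: ${c.styleStrength}/100`,
    `colorfulness: ${c.colorfulness}/100`,
  ];
  if (c.preset) parts.push(`visual style preset: ${c.preset}`);
  return `Adjust the previous result. ${parts.join(", ")}.`;
}
```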

Presets living on the side in Elicit, an example by Maggie Appleton.

We can also use presets and bookmarks, and allow users to highlight specific parts of the outcome that they’d like to change — with contextual prompts acting on the highlighted parts of the output, rather than global prompts (see the sketch below).

Tweaking specific parts of the outcome in Grammarly.
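Here’s a minimal sketch of such a contextual prompt, reusing the hypothetical complete() stand-in from the earlier sketch: the instruction is applied only to the highlighted span, which is then spliced back into the document.

```ts
// Stand-in for the LLM client, as in the earlier sketch.
declare function complete(prompt: string): Promise<string>;

type Selection = { text: string; start: number; end: number };

async function refineSelection(
  doc: string,
  sel: Selection,
  instruction: string // e.g. "make this more concise"
): Promise<string> {
  const revised = await complete(
    `Apply this instruction ONLY to the excerpt, keeping its meaning intact.
Instruction: ${instruction}
Excerpt: "${sel.text}"`
  );
  // Splice the rewritten span back in; everything else stays untouched.
  return doc.slice(0, sel.start) + revised + doc.slice(sel.end);
}
```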

AI Actions: Tasks To Complete

With AI agents, we can now also allow users to initiate tasks that AI performs on their behalf, such as scheduling events, planning, and deep research. We could also ask AI to sort or filter results in a specific way (see the sketch below).

Suggesting actions in Elicit, an example by Maggie Appleton.
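Under the hood, such actions are often exposed to the model as tools it can call. The sketch below is simplified; the descriptor shape loosely follows common function-calling APIs, and the tool names are made up for illustration.

```ts
// Simplified tool descriptors; real function-calling APIs also carry a
// JSON schema for the arguments. All names here are hypothetical.
type Tool = {
  name: string;
  description: string; // what the model reads to decide when to call it
  run: (args: Record<string, unknown>) => Promise<string>;
};

const tools: Tool[] = [
  {
    name: "schedule_event",
    description: "Create a calendar event with a title, date, and attendees",
    run: async (args) => `Scheduled: ${JSON.stringify(args)}`, // stub
  },
  {
    name: "sort_results",
    description: "Sort the current result set by a given field",
    run: async (args) => `Sorted by ${String(args.field)}`, // stub
  },
];

// The model picks a tool and its arguments; the app executes and confirms.
async function dispatch(name: string, args: Record<string, unknown>) {
  const tool = tools.find((t) => t.name === name);
  return tool ? tool.run(args) : `Unknown tool: ${name}`;
}
```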

But we can also add features that help users make better use of AI output — e.g., by visualizing it, making it shareable, allowing transformations between formats, or posting it to Slack, Jira, and so on.
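Posting to Slack, for instance, can be as simple as a call to an incoming webhook. A tiny sketch, with a placeholder webhook URL:

```ts
// Posting AI output to Slack via an incoming webhook; the URL below is a
// per-workspace placeholder, not a real endpoint.
async function postToSlack(summary: string): Promise<void> {
  await fetch("https://hooks.slack.com/services/T000/B000/XXXX", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Incoming webhooks accept a simple { text } payload.
    body: JSON.stringify({ text: summary }),
  });
}
```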

AI Integration: Where Work Happens

Many AI interactions are locked within a specific product, but good AI experiences happen where the actual work happens. It would be quite unusual to expect a dedicated section of a product for autocomplete, for example, yet that’s exactly what we often build for AI features.

Dovetail AI integrates with plenty of platforms, from Jira and Notion to Slack and Teams, where the actual work happens.

The actual boost in productivity comes when users rely on AI as a co-pilot or little helper in the tools they already use daily — seamless integrations with Slack, Teams, Jira, GitHub, and so on. Dia Browser and Dovetail are great examples of this in action.

Wrapping Up

Across these five areas, we can explore ways to minimize the cost of interacting with a text box, and allow users to interact with points of interest directly: by tapping, clicking, selecting, highlighting, and bookmarking.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.

And AI products don’t have to be AI-only. There is a lot of value in mapping onto the mental models that people have adopted over the years and enhancing them with AI, similar to how we do it with browsers’ autofill, rather than leaving users in front of a frightening and omnipresent text box.

