Prompting Is A Design Act: How To Brief, Guide And Iterate With AI

About The Author

Lyndon Cerejo is a UX Design leader with over twenty-five years of hands-on experience helping companies design usable and engaging experiences.


Prompting is more than giving AI some instructions. You could think of it as a design act, part creative brief and part conversation design. This second article on AI augmenting design work introduces a designerly approach to prompting: one that blends creative briefing, interaction design, and structural clarity.

In “A Week In The Life Of An AI-Augmented Designer”, we followed Kate through her first AI-augmented design sprint. She came away with three realizations:

  1. AI isn’t a co-pilot (yet); it’s more like a smart, eager intern.
    One with access to a lot of information, good recall, and fast execution, but no context. That mindset defined how she approached every interaction with AI: not as magic, but as management.
  2. Don’t trust; guide, coach, and always verify.
    Like any intern, AI needs coaching and supervision, and that’s where her designerly skills kicked in. Kate relied on curiosity to explore, observation to spot bias, empathy to humanize the output, and critical thinking to challenge what didn’t feel right. Her learning mindset helped her keep up with advances, and experimentation helped her learn by doing.
  3. Prompting is part creative brief, and part conversation design, just with an AI instead of a person.
    When you prompt an AI, you’re not just giving instructions, but designing how it responds, behaves, and outputs information. If AI is like an intern, then the prompt is your creative brief that frames the task, sets the tone, and clarifies what good looks like. It’s also your conversation script that guides how it responds, how the interaction flows, and how ambiguity is handled.

As designers, we’re used to designing interactions for people. Prompting is us designing our own interactions with machines — it uses the same mindset with a new medium. It shapes an AI’s behavior the same way you’d guide a user with structure, clarity, and intent.

If you’ve bookmarked, downloaded, or saved prompts from others, you’re not alone. We’ve all done that during our AI journeys. But while someone else’s prompts are a good starting point, you will get better and more relevant results if you can write your own prompts tailored to your goals, context, and style. Using someone else’s prompt is like using a Figma template. It gets the job done, but mastery comes from understanding and applying the fundamentals of design, including layout, flow, and reasoning. Prompts have a structure too. And when you learn it, you stop guessing and start designing.

Note: All prompts in this article were tested using ChatGPT — not because it’s the only game in town, but because it’s friendly, flexible, and lets you talk like a person, yes, even after the recent GPT-5 “update”. That said, any LLM with a decent attention span will work. Results for the same prompt may vary based on the AI model you use, the AI’s training, mood, and how confidently it can hallucinate.

Privacy PSA: As always, don’t share anything you wouldn’t want leaked, logged, or accidentally included in the next AI-generated meme. Keep it safe, legal, and user-respecting.

With that out of the way, let’s dive into the mindset, anatomy, and methods of effective prompting as another tool in your design toolkit.

Mindset: Prompt Like A Designer

As designers, we storyboard journeys, wireframe interfaces to guide users, and write UX copy with intention. However, when prompting AI, we treat it differently: “Summarize these insights”, “Make this better”, “Write copy for this screen”, and then wonder why the output feels generic, off-brand, or just meh. It’s like expecting a creative team to deliver great work from a one-line Slack message. We wouldn’t brief a freelancer, much less an intern, with “Design a landing page,” so why brief AI that way?

Prompting Is A Creative Brief For A Machine

Think of a good prompt as a creative brief, just for a non-human collaborator. It needs similar elements, including a clear role, defined goal, relevant context, tone guidance, and output expectations. Just as a well-written creative brief unlocks alignment and quality from your team, a well-structured prompt helps the AI meet your expectations, even though it doesn’t have real instincts or opinions.

Prompting Is Also Conversation Design

A good prompt goes beyond defining the task and sets the tone for the exchange by designing a conversation: guiding how the AI interprets, sequences, and responds. You shape the flow of tasks, how ambiguity is handled, and how refinement happens — that’s conversation design.

Anatomy: Structure It Like A Designer

So how do you write a designer-quality prompt? That’s where the W.I.R.E.+F.R.A.M.E. prompt design framework comes in — a UX-inspired framework for writing intentional, structured, and reusable prompts. Each letter represents a key design direction, grounded in the way UX designers already think. Just as a wireframe doesn’t dictate final visuals, WIRE+FRAME doesn’t constrain creativity; it guides the AI with the structured information it needs.

“Why not just use a series of back-and-forth chats with AI?”

You can, and many people do. But without structure, AI fills in the gaps on its own, often with vague or generic results. A good prompt upfront saves time, reduces trial and error, and improves consistency. And whether you’re working on your own or across a team, a framework means you’re not reinventing a prompt every time but reusing what works to get better results faster.

Just as we build wireframes before adding layers of fidelity, the WIRE+FRAME framework has two parts:

  • WIRE is the must-have skeleton. It gives the prompt its shape.
  • FRAME is the set of enhancements that bring polish, logic, tone, and reusability — like building a high-fidelity interface from the wireframe.

Let’s improve Kate’s original research synthesis prompt (“Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app”). To better reflect how people actually prompt in practice, let’s tweak it to a more broadly applicable version: “Read this customer feedback and tell me how we can improve our app for Gen Z users.” This one-liner mirrors the kinds of prompts we often throw at AI tools: short, simple, and often lacking structure.

Now, we’ll take that prompt and rebuild it using the first four elements of the W.I.R.E. framework — the core building blocks that provide AI with the main information it needs to deliver useful results.

W: Who & What

Define who the AI should be, and what it’s being asked to deliver.

A creative brief starts with assigning the right hat. Are you briefing a copywriter? A strategist? A product designer? The same logic applies here. Give the AI a clear identity and task. Treat AI like a trusted freelancer or intern. Instead of saying “help me”, tell it who it should act as and what’s expected.

Example: “You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.”

I: Input Context

Provide background that frames the task.

Creative partners don’t work in a vacuum. They need context: the audience, goals, product, competitive landscape, and what’s been tried already. This is the “What you need to know before you start” section of the brief. Think: key insights, friction points, business objectives. The same goes for your prompt.

Example: “You are analyzing customer feedback for Fintech Brand’s app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.”

R: Rules & Constraints

Clarify any limitations, boundaries, and exclusions.

Good creative briefs always include boundaries — what to avoid, what’s off-brand, or what’s non-negotiable. Things like brand voice guidelines, legal requirements, or time and word count limits. Constraints don’t limit creativity — they focus it. AI needs the same constraints to avoid going off the rails.

Example: “Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.”

E: Expected Output

Spell out what the deliverable should look like.

This is the deliverable spec: What does the finished product look like? What tone, format, or channel is it for? Even if the task is clear, the format often isn’t. Do you want bullet points or a story? A table or a headline? If you don’t say, the AI will guess, and probably guess wrong. Even better, include an example of the output you want, one of the most effective ways to show the AI what you’re expecting. If you’re using GPT-5, you can also mix examples across formats (text, images, tables).

Example: “Return a structured list of themes. For each theme, include:

  • Theme Title
  • Summary of the Issue
  • Problem Statement
  • Opportunity
  • Representative Quotes (from data only)
  • Journey Stage(s)
  • Frequency (count from data)
  • Severity Score (1–5) where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue
  • Estimated Effort (Low / Medium / High), where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes.”

WIRE gives you everything you need to stop guessing and start designing your prompts with purpose. When you start with WIRE, your prompting is like a briefing, treating AI like a collaborator.

Once you’ve mastered this core structure, you can layer in additional fidelity, like tone, step-by-step flow, or iterative feedback, using the FRAME elements. These five elements provide additional guidance and clarity to your prompt by layering clear deliverables, thoughtful tone, reusable structure, and space for creative iteration.

F: Flow of Tasks

Break complex prompts into clear, ordered steps.

This is your project plan or creative workflow that lays out the stages, dependencies, or sequence of execution. When the task has multiple parts, don’t just throw it all into one sentence. You are doing the thinking and guiding the AI. Structure it like steps in a user journey or modules in a storyboard. In this example, it serves as the blueprint the AI uses to generate the output described in “E: Expected Output”.

Example: “Recommended flow of tasks:
Step 1: Parse the uploaded data and extract discrete pain points.
Step 2: Group them into themes based on pattern similarity.
Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort.
Step 4: Map each theme to the appropriate customer journey stage(s).
Step 5: For each theme, write a clear problem statement and opportunity based only on what’s in the data.”

R: Reference Voice or Style

Name the desired tone, mood, or reference brand.

This is the brand voice section or style mood board — reference points that shape the creative feel. Sometimes you want buttoned-up. Other times, you want conversational. Don’t assume the AI knows your tone, so spell it out.

Example: “Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.”

A: Ask for Clarification

Invite the AI to ask questions before generating, if anything is unclear.

This is your “Any questions before we begin?” moment — a key step in collaborative creative work. You wouldn’t want a freelancer to guess what you meant if the brief was fuzzy, so why expect AI to do better? Ask AI to reflect or clarify before jumping into output mode.

Example: “If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.”

M: Memory (Within The Conversation)

Reference earlier parts of the conversation and reuse what’s working.

This is similar to keeping visual tone or campaign language consistent across deliverables in a creative brief. Prompts are rarely one-shot tasks, so this reminds the AI of the tone, audience, or structure already in play. GPT-5 improved memory handling, but this remains a useful element, especially if you switch topics or jump around.

Example: “Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.”

E: Evaluate & Iterate

Invite the AI to critique, improve, or generate variations.

This is your revision loop — your way of prompting for creative direction, exploration, and refinement. Just like creatives expect feedback, your AI partner can handle review cycles if you ask for them. Build iteration into the brief to get closer to what you actually need. Sometimes, you may see ChatGPT test two versions of a response on its own by asking for your preference.

Example: “After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort).

For that top-priority theme:

  • Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?
  • Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary).
  • Rewrite the theme entry with that improvement applied.
  • Briefly explain why the revision is stronger and more useful for product or design teams.”
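The “combined priority score” in the example above is left for the AI to work out. For illustration only, here is a minimal Python sketch of one hypothetical way to operationalize it (the weighting, theme titles, and numbers are made up, not taken from the article’s data or prescribed by the framework):

```python
# Hypothetical scoring sketch: the prompt leaves "combined priority score"
# to the AI; this shows one possible way to make it explicit.

EFFORT_SCORE = {"Low": 3, "Medium": 2, "High": 1}  # lower effort ranks higher

def priority_score(frequency: int, severity: int, effort: str) -> int:
    """Higher frequency, higher severity, and lower effort raise priority."""
    return frequency * severity * EFFORT_SCORE[effort]

# Two illustrative themes (made-up numbers)
themes = [
    {"title": "Unclear Zelle transfer status", "frequency": 14, "severity": 4, "effort": "Medium"},
    {"title": "App crashes on launch", "frequency": 9, "severity": 5, "effort": "High"},
]

top = max(themes, key=lambda t: priority_score(t["frequency"], t["severity"], t["effort"]))
print(top["title"])  # the theme the AI would be asked to critique and refine
```

Spelling out a formula like this in your Rules or Flow sections also makes the AI’s ranking reproducible across runs, instead of leaving the weighting to its judgment each time.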

Here’s a quick recap of the WIRE+FRAME framework:

Framework Component | Description
W: Who & What | Define the AI persona and the core deliverable.
I: Input Context | Provide background or data scope to frame the task.
R: Rules & Constraints | Set boundaries, limitations, and exclusions.
E: Expected Output | Spell out the format and fields of the deliverable.
F: Flow of Tasks | Break the work into explicit, ordered sub-tasks.
R: Reference Voice/Style | Name the tone, mood, or reference brand to ensure consistency.
A: Ask for Clarification | Invite the AI to pause and ask questions if any instructions or data are unclear before proceeding.
M: Memory | Leverage in-conversation memory to recall earlier definitions, examples, or phrasing without restating them.
E: Evaluate & Iterate | After generation, have the AI self-critique the top outputs and refine them.

And here’s the full WIRE+FRAME prompt:

(W) You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.

(I) You are analyzing customer feedback for Fintech Brand’s app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.

(R) Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.

(E) Return a structured list of themes. For each theme, include:
  • Theme Title
  • Summary of the Issue
  • Problem Statement
  • Opportunity
  • Representative Quotes (from data only)
  • Journey Stage(s)
  • Frequency (count from data)
  • Severity Score (1–5) where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue
  • Estimated Effort (Low / Medium / High), where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes
(F) Recommended flow of tasks:
Step 1: Parse the uploaded data and extract discrete pain points.
Step 2: Group them into themes based on pattern similarity.
Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort.
Step 4: Map each theme to the appropriate customer journey stage(s).
Step 5: For each theme, write a clear problem statement and opportunity based only on what’s in the data.

(R) Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.

(A) If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.

(M) Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.

(E) After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort).
For that top-priority theme:
  • Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?
  • Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary).
  • Rewrite the theme entry with that improvement applied.
  • Briefly explain why the revision is stronger and more useful for product or design teams.

You could use “##” to label the sections (e.g., “##FLOW”), more for your own readability than for the AI. At over 400 words, this Insights Synthesis prompt is a detailed, structured example, but it isn’t customized for you and your work. The intent wasn’t to give you a specific prompt (the proverbial fish), but to show how you can use a prompt framework like WIRE+FRAME to create a customized, relevant prompt that will help AI augment your work (teaching you to fish).

Keep in mind that prompt length is rarely the problem; a lack of quality and structure is. As of this writing, AI models can easily process prompts that are thousands of words long.

Not every prompt needs all the FRAME components; WIRE is often enough to get the job done. But when the work is strategic or highly contextual, pick components from FRAME — the extra details can make a difference. Together, WIRE+FRAME give you a detailed framework for creating a well-structured prompt, with the crucial components first, followed by optional components:

  • WIRE builds a clear, focused prompt with role, input, rules, and expected output.
  • FRAME adds refinement like tone, reusability, and iteration.

Here are some scenarios and recommendations for using WIRE or WIRE+FRAME:

Scenario | Description | Recommended
Simple, One-Off Analyses | Quick prompting with minimal setup and no need for detailed process transparency. | WIRE
Tight Sprints or Hackathons | Rapid turnarounds, and times you don’t need embedded review and iteration loops. | WIRE
Highly Iterative Exploratory Work | You expect to tweak results constantly and prefer manual control over each step. | WIRE
Complex Multi-Step Playbooks | Detailed workflows that benefit from a standardized, repeatable, visible sequence. | WIRE+FRAME
Shared or Hand-Off Projects | When different teams will rely on embedded clarification, memory, and consistent task flows for recurring analyses. | WIRE+FRAME
Built-In Quality Control | You want the AI to flag top issues, self-critique, and refine, minimizing manual QC steps. | WIRE+FRAME

Prompting isn’t about getting it right the first time. It’s about designing the interaction and redesigning when needed. With WIRE+FRAME, you’re going beyond basic prompting and designing the interaction between you and AI.

From Gut Feel To Framework: A Prompt Makeover

Let’s compare the results of Kate’s first AI-augmented design sprint prompt (to synthesize customer feedback into design insights) with one based on the WIRE+FRAME prompt framework, with the same data and focusing on the top results:

Original prompt: Read this customer feedback and tell me how we can improve our app for Gen Z users.

Initial ChatGPT Results:

  • Improve app reliability to reduce crashes and freezing.
  • Provide better guidance or tutorials for financial tools like budgeting or goal setting.
  • Enhance the transparency of Zelle transfers by showing confirmation messages.
  • Speed up app loading and reduce lag on key actions.

With this version, you’d likely need to go back and forth with follow-up questions, rewrite the output for clarity, and add structure before sharing with your team.

WIRE+FRAME prompt: the full structured prompt above, with defined role, scope, rules, expected format, tone, flow, and evaluation loop.

Initial ChatGPT Results:

[Image: Results of the structured WIRE+FRAME prompt]

You can clearly see the very different results from the two prompts, both using the exact same data. While the first prompt returns a quick list of ideas, the detailed WIRE+FRAME version doesn’t just summarize feedback but structures it. Themes are clearly labeled, supported by user quotes, mapped to customer journey stages, and prioritized by frequency, severity, and effort.

The structured prompt results can be used as-is or shared without needing to reformat, rewrite, or explain them (see disclaimer below). The first prompt output needs massaging: it’s not detailed, lacks evidence, and would require several rounds of clarification to be actionable. The first prompt may work when the stakes are low and you are exploring. But when your prompt is feeding design, product, or strategy, structure comes to the rescue.

Disclaimer: Know Your Data

A well-structured prompt can make AI output more useful, but it shouldn’t be the final word, or your single source of truth. AI models are powerful pattern predictors, not fact-checkers. If your data is unclear or poorly referenced, even the best prompt may return confident nonsense. Don’t blindly trust what you see. Treat AI like a bright intern: fast, eager, and occasionally delusional. You should always be familiar with your data and validate what AI spits out. For example, in the WIRE+FRAME results above, AI rated the effort as low for financial tool onboarding. That could easily be a medium or high. Good prompting should be backed by good judgment.

Try This Now

Start by using the WIRE+FRAME framework to create a prompt that will help AI augment your work. You could also rewrite the last prompt you weren’t satisfied with using WIRE+FRAME, and compare the outputs.

Feel free to use this simple tool to guide you through the framework.

Methods: From Lone Prompts To A Prompt System

Just as design systems have reusable components, your prompts can too. You can use the WIRE+FRAME framework to write detailed prompts, but you can also use the structure to create reusable components that are pre-tested, plug-and-play pieces you can assemble to build high-quality prompts faster. Each part of WIRE+FRAME can be transformed into a prompt component: small, reusable modules that reflect your team’s standards, voice, and strategy.

For instance, if you find yourself repeatedly using the same content for different parts of the WIRE+FRAME framework, you could save them as reusable components for you and your team. In the example below, we have two different reusable components for “W: Who & What” — an insights analyst and an information architect.

W: Who & What

  1. You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.
  2. You are an experienced information architect specializing in organizing enterprise content on intranets. Your task is to reorganize the content and features into categories that reflect user goals, reduce cognitive load, and increase findability.

Create and save prompt components and variations for each part of the WIRE+FRAME framework, allowing your team to quickly assemble new prompts by combining components when available, rather than starting from scratch each time.
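As an illustration, the component library idea above can be sketched in a few lines of Python. Everything here is hypothetical (the component names, the abbreviated component text, and the `assemble_prompt` helper are invented for this sketch, not part of the framework):

```python
# Minimal sketch of a reusable prompt-component library.
# Component names and text are illustrative, not a prescribed API.

COMPONENTS = {
    "W": {
        "insights_analyst": (
            "You are a senior UX researcher and customer insights analyst. "
            "You specialize in synthesizing qualitative data to surface user "
            "pain points and map them across customer journey stages."
        ),
        "information_architect": (
            "You are an experienced information architect specializing in "
            "organizing enterprise content on intranets."
        ),
    },
    "R": {
        "no_fabrication": (
            "Only analyze the uploaded data. Do not fabricate pain points, "
            "quotes, or patterns."
        ),
    },
}

def assemble_prompt(selection: dict) -> str:
    """Join the chosen component for each WIRE+FRAME section,
    labeled with '##' headers for readability."""
    sections = []
    for section, variant in selection.items():
        sections.append(f"## {section}\n{COMPONENTS[section][variant]}")
    return "\n\n".join(sections)

prompt = assemble_prompt({"W": "insights_analyst", "R": "no_fabrication"})
print(prompt)
```

A shared dictionary (or a team wiki page serving the same purpose) means a teammate can swap `"insights_analyst"` for `"information_architect"` and reuse every other component unchanged.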

Behind The Prompts: Questions About Prompting

Q: If I use a prompt framework like WIRE+FRAME every time, will the results be predictable?

A: Yes and no. Yes, your outputs will be guided by a consistent set of instructions (e.g., Rules & Constraints, Expected Output, Reference Voice/Style) that will give you a predictable format and style of results. And no, while the framework provides structure, it doesn’t flatten the generative nature of AI, but focuses it on what’s important to you. In the next article, we will look at how you can use this to your advantage to quickly reuse your best repeatable prompts as we build your AI assistant.

Q: Could changes to AI models break the WIRE+FRAME framework?

A: AI models are evolving more rapidly than any other technology we’ve seen before — in fact, ChatGPT was recently updated to GPT-5 to mixed reviews. The update didn’t change the core principles of prompting or the WIRE+FRAME prompt framework. With future releases, some elements of how we write prompts today may change, but the need to communicate clearly with AI won’t. Think of how you delegate work to an intern vs. someone with a few years’ experience: you still need detailed instructions the first time either is doing a task, but the level of detail may change. WIRE+FRAME isn’t built only for today’s models; the components help you clarify your intent, share relevant context, define constraints, and guide tone and format — all timeless elements, no matter how smart the model becomes. The skill of shaping clear, structured interactions with non-human AI systems will remain valuable.

Q: Can prompts be more than text? What about images or sketches?

A: Absolutely. With tools like GPT-5 and other multimodal models, you can upload screenshots, pictures, whiteboard sketches, or wireframes. These visuals become part of your Input Context or help define the Expected Output. The same WIRE+FRAME principles still apply: you’re setting context, tone, and format, just using images and text together. Whether your input is a paragraph or an image and text, you’re still designing the interaction.
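If you move from the chat UI to the API, a text-plus-image prompt is just a structured payload. This sketch follows the shape of OpenAI’s Chat Completions message format (the `text`/`image_url` content parts exist in that API; the function name, prompt text, and file path here are hypothetical):

```python
# Sketch of a multimodal prompt payload, in the style of OpenAI's
# Chat Completions message format. Assumes a local PNG wireframe.
import base64

def build_multimodal_message(prompt_text: str, image_path: str) -> dict:
    """Combine WIRE+FRAME prompt text with an image as one user message."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt_text},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{image_b64}"},
            },
        ],
    }
```

The image becomes part of your Input Context; the surrounding text still carries the WIRE+FRAME structure.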

Have a prompt-related question of your own? Share it in the comments, and I’ll either respond there or explore it further in the next article in this series.

From Designerly Prompting To Custom Assistants

Good prompts and results don’t come from using others’ prompts, but from writing prompts that are customized for you and your context. The WIRE+FRAME framework helps with that and makes prompting a tool you can use to guide AI models like a creative partner instead of hoping for magic from a one-line request.

Prompting uses the designerly skills you already use every day to collaborate with AI:

  • Curiosity to explore what the AI can do and frame better prompts.
  • Observation to detect bias or blind spots.
  • Empathy to make machine outputs human.
  • Critical thinking to verify and refine.
  • Experimentation & Iteration to learn by doing and improve the interaction over time.
  • Growth Mindset to keep up with new technology like AI and prompting.

Once you create and refine prompt components and prompts that work for you, make them reusable by documenting them. But wait, there’s more — what if your best prompts, or the elements of your prompts, could live inside your own AI assistant, available on demand, fluent in your voice, and trained on your context? That’s where we’re headed next.

In the next article, “Design Your Own Design Assistant”, we’ll take what you’ve learned so far and turn it into a Custom AI assistant (aka Custom GPT), a design-savvy, context-aware assistant that works like you do. We’ll walk through that exact build, from defining the assistant’s job description to uploading knowledge, testing, and sharing it with others.


Smashing Editorial (yk)