Design and implement AI input patterns for products. Use this skill whenever the user wants to add an AI-powered input mechanism to their product, improve how users interact with AI features, decide which input pattern fits a use case, or audit existing AI input UX. Trigger on phrases like "how should users prompt this", "add AI input to", "let users control the AI with", "what input pattern should I use", "design an AI prompt experience", "how do I let users fill fields with AI", "add a regenerate button", "inline AI actions", or any request about how users should interact with or direct AI in the product. Always use this skill before designing or recommending any AI interaction surface.
This skill covers the full taxonomy of AI input patterns: how users direct, refine, and control AI in a product. Use it to decide which pattern fits a given use case and how to implement it well.
There are 13 distinct input patterns. Start by identifying which category the use case falls into, then read the relevant section below.
| Pattern | Core purpose | When to reach for it |
|---|---|---|
| Open Input | Free-form natural language prompt | Discovery, chat, exploration |
| Madlibs | Structured variables in a template | Repeatable tasks, team consistency |
| Auto-fill | AI populates fields or records from a single instruction | Repetitive data, spreadsheets, forms, bulk enrichment |
| Inline Action | Preset actions on selected content | Spot edits without leaving flow |
| Inpainting | AI edits a specific region in-place | Surgical changes to generated content |
| Regenerate | Re-run same prompt for a new result | When output is close but not right |
| Expand | Extend content from a seed | Draft-to-full, clip-to-video |
| Restructure | Change structure, keep substance | Condense, reorder, extract, segment |
| Restyle | Change surface style, keep structure | Tone, palette, voice, genre |
| Chained Action | Multi-step connected prompts | Workflows, pipelines, agentic flows |
| Describe | Reverse-engineer a generation | Debug, reproduce, understand output |
| Summary | Faithful compression of source | Recaps, digests, meeting notes |
| Synthesis | Interpret and connect across sources | Research, analysis, insight generation |
Free-form text box that lets users converse with or direct the model.
Forms: Chat box · Inline composer · Command + parameters · Side panel composer
Core design rules:
Pair with: Madlibs (guide novices), Parameters (precision), Inline Action (scoped edits)
Template-style input with named variables users fill in. The AI receives the assembled prompt.
Best for: PRDs, release notes, outreach emails, any repeatable structured generation.
Core design rules:
Pair with: Chained Action (carry variables forward), Templates (prompt library)
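A Madlibs input ultimately assembles one prompt from a template plus named variables. A minimal sketch of that assembly step, assuming a Python-format-style template syntax; the `fill_template` helper and the release-notes template are illustrative, not from the source:

```python
import string

def fill_template(template: str, variables: dict) -> str:
    """Assemble the final prompt from a Madlibs template.

    Raises KeyError if a named slot is unfilled, so the UI can
    block submission until every variable has a value.
    """
    fields = string.Formatter().parse(template)
    required = {name for _, name, _, _ in fields if name}
    missing = required - variables.keys()
    if missing:
        raise KeyError(f"unfilled variables: {sorted(missing)}")
    return template.format(**variables)

release_notes = (
    "Write release notes for {product} version {version}, "
    "aimed at {audience}, in a {tone} tone."
)
prompt = fill_template(release_notes, {
    "product": "Acme CRM",
    "version": "2.4",
    "audience": "admins",
    "tone": "concise",
})
```

Validating slots before assembly is what keeps the pattern reliable for team use: the model only ever sees a fully filled template.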
AI runs a prompt across multiple fields or records at once, from a single instruction.
Forms:
Core design rules:
Pair with: Sample Response (preview before bulk run), Verification (human gate), Chained Action (as a workflow step)
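The bulk behavior above is one instruction looped over many records, with a human gate before the full run. A sketch under those assumptions; `generate` stands in for the model call and `approve` for the Sample Response / Verification gate, both hypothetical names:

```python
def autofill(records, instruction, generate, approve, sample_size=3):
    """Loop a single instruction across many records (bulk Auto-fill).

    generate(instruction, record) stands in for the model call.
    approve(previews) is the human gate: it sees a small sample of
    outputs before the bulk run is allowed to proceed.
    """
    previews = [generate(instruction, r) for r in records[:sample_size]]
    if not approve(previews):
        return []  # human rejected the sample; skip the bulk run
    return [generate(instruction, r) for r in records]

rows = [{"company": "Acme"}, {"company": "Globex"}, {"company": "Initech"}]
filled = autofill(
    rows,
    "Write a one-line summary of {company}",
    generate=lambda instr, r: instr.format(**r),
    approve=lambda previews: True,
)
```

Running the sample first is cheap insurance: a bad instruction surfaces after three records, not three thousand.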
Preset AI actions that appear when content is selected or highlighted.
Types of inline actions:
Core design rules:
Pair with: Inpainting (for richer region-based edits), Verification (accept/reject), Transform (modality shift)
User selects a region of content; AI edits only that region without touching the rest.
Works across: Text (highlight → edit), Images (brush → reprompt), Audio (time selector → regenerate section), Code (select function → replace)
Core design rules:
Pair with: Inline Action (trigger inpainting from selection), Verification (commit gate), Variations (compare options)
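For text, the core invariant of inpainting is that only the selected span changes. A minimal sketch assuming character-offset selections; the `inpaint` helper is illustrative, and `edit` stands in for the scoped model call:

```python
def inpaint(text: str, start: int, end: int, edit) -> str:
    """Apply an AI edit to the selected region only.

    Everything outside [start, end) is passed through byte-for-byte,
    which is what makes the operation safe to accept or reject.
    """
    region = text[start:end]
    return text[:start] + edit(region) + text[end:]

doc = "The launch date is TBD and the venue is set."
out = inpaint(doc, 19, 22, lambda region: "June 12")
```

The same shape generalizes to other media: a brushed image mask or an audio time range is just a different way of naming the region handed to `edit`.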
Re-runs the same prompt + context through the model to produce a new result.
Modes:
Guided forms:
Core design rules:
Pair with: Variations (compare), Draft Mode (iterate cheaply), Randomize (unguided exploration)
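Since regeneration re-runs the identical prompt and context, the only thing that changes between attempts is the sampling seed, and keeping every attempt lets the user step back to an earlier one. A sketch under those assumptions; `RegenerateControl` and its seed-taking `generate` callback are illustrative names, not from the source:

```python
import random

class RegenerateControl:
    """Re-run the same prompt + context for a fresh result.

    generate(prompt, context, seed) stands in for the model call and
    is assumed to vary with the sampling seed. Every attempt is kept
    so the user can compare or revert.
    """

    def __init__(self, prompt, context, generate):
        self.prompt, self.context, self.generate = prompt, context, generate
        self.attempts = []

    def regenerate(self):
        seed = random.randrange(2**32)
        result = self.generate(self.prompt, self.context, seed)
        self.attempts.append(result)
        return result

ctrl = RegenerateControl(
    "write a tagline", {"product": "Acme"},
    generate=lambda prompt, context, seed: f"{prompt} [seed {seed}]",
)
first = ctrl.regenerate()
second = ctrl.regenerate()
```

Holding the prompt fixed is the point: variation comes from the model, not from the user reworking their input.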
Builds on an existing piece of content without replacing or altering the original seed.
By medium:
Core design rules:
Pair with: Variations (branch expansions), Draft Mode (cheap early iterations), Open Input (prompt the expansion)
Changes the structural form of content while keeping its substance intact.
Types:
Core design rules:
Pair with: Inpainting (target to a region), Variations (compare before committing)
Changes the surface style of content (tone, voice, palette, aesthetic) while leaving structure and meaning intact.
By medium:
Core design rules:
Pair with: Memory (persist style choices across sessions), Preset Styles (gallery), Transform (when modality needs to change too)
Connects multiple prompts, tools, and inputs in a structured sequence. Each step's output feeds the next.
Forms:
Core design rules:
Pair with: Madlibs (inject variables at each step), Sample Response (test before publishing), Verification (gate steps on human review)
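The defining mechanic of a chained action is that each step's output becomes the next step's input, with every intermediate result kept so steps can be inspected or gated. A minimal sketch; `run_chain` and the step prompts are illustrative, and `generate` stands in for the model call:

```python
def run_chain(steps, initial_input, generate):
    """Run prompt steps in sequence; each output feeds the next step.

    generate(prompt, payload) stands in for the model call. Returns
    the final output plus a trace of every intermediate result, so a
    human gate can review (or re-run) any single step.
    """
    payload, trace = initial_input, []
    for step in steps:
        payload = generate(step, payload)
        trace.append((step, payload))
    return payload, trace

final, trace = run_chain(
    ["extract key facts", "draft an outline", "write the summary"],
    "raw meeting transcript...",
    generate=lambda prompt, payload: f"{prompt} -> ({payload})",
)
```

Keeping the trace is what separates a chain from a single opaque mega-prompt: failures localize to a step instead of the whole run.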
User-invoked action that reverse-engineers a generated output into its likely prompt, parameters, and tokens.
Typical triggers: Right-click menu · Side panel button · /describe command
Core design rules:
Pair with: Prompt Enhancer (iterate on described prompts), Prompt Details (surface details proactively in galleries)
Faithfully condenses source material to make it easier to understand and act on. No new interpretation introduced.
Difference from Synthesis: Summary = compression. Synthesis = interpretation + patterns across sources.
Core design rules:
Pair with: Citations (verify source mapping), References (link to originals), Follow-ups (next steps from the summary)
Combines data from multiple sources and extracts patterns, themes, or insights. Introduces AI reasoning, which is the key distinction from Summary.
Variants:
Core design rules:
Pair with: Stream of Thought (show reasoning), Citations (link claims to sources), Summary (when no interpretation is needed)
Use this decision flow when the use case isn't immediately obvious:
Is the user starting from scratch or working on existing content?
├── Starting from scratch → Open Input, Madlibs, or Chained Action
└── Working on existing content ↓
What scope?
├── Whole document/record → Regenerate, Restructure, Restyle, Summary, Synthesis
├── Specific region → Inpainting, Inline Action
├── Multiple fields/records → Auto-fill
└── Building from a seed → Expand
Does the task repeat?
├── Yes, same structure → Madlibs or Auto-fill
└── No, one-off → Open Input or Inline Action
Is the goal to change structure or style?
├── Structure (condense, reorder, extract) → Restructure
├── Style (tone, palette, voice) → Restyle
└── Both → Restructure first, then Restyle
Is the user analyzing sources or compressing them?
├── Compressing faithfully → Summary
└── Interpreting and finding patterns → Synthesis
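The first two questions of the flow above can be sketched as a small lookup; the function name, the boolean/keyword arguments, and the scope shorthand (`"whole"`, `"region"`, `"records"`, `"seed"`) are illustrative labels for the branches, not part of the taxonomy:

```python
def pick_patterns(from_scratch, scope=None):
    """Route the first two questions of the decision flow.

    from_scratch: is the user starting with no existing content?
    scope: shorthand for the branch when content already exists.
    """
    if from_scratch:
        return ["Open Input", "Madlibs", "Chained Action"]
    return {
        "whole":   ["Regenerate", "Restructure", "Restyle",
                    "Summary", "Synthesis"],
        "region":  ["Inpainting", "Inline Action"],
        "records": ["Auto-fill"],
        "seed":    ["Expand"],
    }[scope]
```

A real product would ask these questions in sequence (and layer on the repeat/structure/style questions), but the lookup makes the branching explicit.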
Regardless of which pattern you use:
Apply AI Trust Builder design patterns to give users confidence that an AI product's results are ethical, accurate, and trustworthy. Use this skill whenever a designer, PM, or developer wants to make their AI product feel safer, more transparent, or more accountable. Trigger on: "make users feel safe", "add a disclaimer", "handle user data", "label AI-generated content", "privacy mode", "disclose AI is being used", "watermark AI outputs", "make the AI more transparent", "audit trail for AI", "user consent for recording", or any request touching AI accountability, privacy, explainability, or honest representation of what AI is doing. Also use when auditing an existing AI product for trust signals or when building new AI features into a non-AI-native product. Covers seven patterns: Caveat, Consent, Data Ownership, Disclosure, Footprints, Incognito Mode, and Watermark.
Apply AI Tuner design patterns when adding or improving AI features in a product. Tuners are the controls that let users shape how AI interprets input and produces output, before, during, or after generation. Use this skill whenever the user wants to add AI configuration UI to a product, improve how users control AI behavior, design prompt controls, model selectors, filters, style systems, voice/tone settings, or any mechanism that lets users influence what the AI does. Trigger on phrases like "let users control the AI", "add model switching", "prompt settings", "AI configuration", "let users set tone or style", "negative prompting", "AI filters", "mode switching", "AI parameter controls", or any request to give users more agency over AI output. This skill covers nine tuner patterns: Attachments, Connectors, Filters, Model Management, Modes, Parameters, Preset Styles, Saved Styles, and Voice & Tone.
Apply Wayfinder patterns to design or improve AI onboarding, discoverability, and first-interaction flows in any product. Use this skill whenever the user wants to add AI to a product surface, reduce blank-slate anxiety, help users discover what the AI can do, improve an initial CTA or prompt input, add suggestions or templates, design a gallery, add nudges, or generally reduce friction at the start of an AI interaction. Trigger even on vague requests like "make it easier to get started with AI", "users don't know what to type", "how do we show what the AI can do", "add some example prompts", or "improve onboarding to our AI feature". Wayfinders are: Initial CTA, Example Gallery, Suggestions, Templates, Nudges, Follow-ups, Prompt Details, and Randomize.
My name is Tommy. I'm a product designer and developer from Copenhagen, Denmark.
Connect with me on LinkedIn.