Apply the Governors framework to design or audit human-in-the-loop features that keep users informed, in control, and safe as AI acts autonomously. Use this skill whenever someone is designing or reviewing AI product features involving oversight, trust, transparency, or control — including "how do I keep users in the loop", "how should I handle risky AI actions", "users don't trust the AI", "how do I prevent costly AI mistakes", "should I ask for confirmation before this action", "how do I show AI reasoning", "users are scared the AI will overwrite their data", "how do I handle AI memory and privacy", or any request about making an AI feature feel safe and controllable. Trigger even when the user doesn't say "governor" or "human-in-the-loop" — if they're designing any AI feature and the question touches on control, trust, transparency, cost, risk, or oversight, use this skill.
Governors are design patterns that keep users meaningfully in control as AI systems act on their behalf. They exist because autonomy without transparency erodes trust, and actions without oversight can cause irreversible harm.
When someone asks for guidance on AI oversight, transparency, or control, use this table to select the right Governor(s) for the situation:
| Situation | Recommended Pattern(s) |
|---|---|
| AI will take a long or expensive action | Action Plan + Verification + Cost Estimates |
| AI is acting autonomously in the background | Shared Vision + Stream of Thought + Controls |
| User needs to verify AI's intent before a destructive action | Verification + Sample Response |
| User is unsure what outcome they want yet | Variations + Branches + Draft Mode |
| AI cites or summarizes external/internal sources | Citations + References |
| AI remembers things across sessions | Memory |
| Generation will consume significant compute/credits | Cost Estimates + Draft Mode |
| AI acts in a multi-step workflow | Action Plan + Stream of Thought + Controls |
| User wants to explore multiple directions without committing | Branches + Variations |
| AI is running in agent/operator mode | Shared Vision + Verification + Stream of Thought |
| User wants to preview before full output | Sample Response + Draft Mode |
## Action Plan

What it is: AI lays out its intended steps before executing, giving users a chance to confirm or adjust.
Two modes:
Key design guidance:
Real examples: Replit pauses on a proposed sequence. Gamma generates an outline requiring confirmation. Cursor makes action plans optional in settings. Zapier shows a workflow outline in AI Drafting mode.
## Branches

What it is: Multiple parallel paths of generation or exploration, each preserving the original context.
Three forms:
Key design guidance:
Real examples: ChatGPT "Branch in new chat", TypingMind forked threads, FloraFauna visual canvas branches, Rivet convergent workflow branches, Midjourney variant exploration.
## Citations

What it is: Connections from generated output back to its underlying source material.
Four forms:
Key design guidance:
Real examples: Adobe Acrobat paragraph-level citations, Granola transcript quotes on hover, Perplexity numbered inline sources, Sana popover with highlighted source passages.
## Controls

What it is: UI mechanisms that let users stop, pause, and manage AI actions in flight.
Common controls:
Key design guidance:
Real examples: Claude's stop button, ChatGPT skip-to-end in research mode, Replit task queue, Perplexity interrupt-with-context while running, Julius step-by-step workflow runs.
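The structural requirement these controls share is that the run checks for user intervention between steps, not only at the end. A minimal sketch, with assumed names (`RunControls`, `run_workflow`) rather than any product's API:

```python
import threading


class RunControls:
    """Lets the user stop a multi-step AI run while it is in flight."""

    def __init__(self) -> None:
        self._stop = threading.Event()

    def stop(self) -> None:
        self._stop.set()

    @property
    def stopped(self) -> bool:
        return self._stop.is_set()


def run_workflow(steps, controls, on_step=None):
    """Run steps one at a time, honoring the stop control between each."""
    results = []
    for step in steps:
        if controls.stopped:
            break  # the stop button works mid-run, not just before it starts
        results.append(f"done: {step}")
        if on_step:
            on_step(step, controls)  # UI hook where a stop click would land
    return results
```

The `threading.Event` makes the stop signal safe to set from another thread (e.g. a UI handler) while the workflow loop is running.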
## Cost Estimates

What it is: Transparent display of compute/credit costs before and during generation.
Key cost factors: model size, prompt/context length, expected output length, steps in a workflow chain, inference loops.
Credits vs. dollars: Technical users prefer dollar values; non-technical users benefit from product-specific credit systems, but these lack cross-product comparability.
Key design guidance:
Real examples: Adobe Firefly credits beside generate button, ElevenLabs cost shown from input box, Krea credit estimate in builder sidebar, Udio live cost update as user configures.
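A live estimate like the ones above can be derived from the cost factors listed earlier. This is a rough sketch under an assumed pricing unit (credits per 1,000 tokens) -- real products will have their own rate cards:

```python
def estimate_credits(model_rate: float,
                     prompt_tokens: int,
                     expected_output_tokens: int,
                     workflow_steps: int = 1,
                     inference_loops: int = 1) -> float:
    """Rough credit estimate from the key cost factors.

    model_rate is credits per 1,000 tokens -- an assumed unit, not any
    vendor's actual pricing. Larger models would carry a higher rate.
    """
    tokens_per_step = prompt_tokens + expected_output_tokens
    total_tokens = tokens_per_step * workflow_steps * inference_loops
    return round(model_rate * total_tokens / 1000, 2)
```

Because every input is known before generation starts, the estimate can update live as the user configures the request -- the Udio-style pattern -- rather than surprising them afterward.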
## Draft Mode

What it is: Lower-fidelity, lower-cost generation before committing to the full, expensive run.
By modality:
Explicit vs. implicit drafting:
Key design guidance:
## Memory

What it is: AI retains and reuses information across sessions, creating continuity.
Three scopes:
Risk: Without clear controls, AI may misremember, overgeneralize, or accumulate incorrect details — a kind of "AI psychosis" from flawed recollections.
Key design guidance:
Real examples: ChatGPT inline memory capture notification, Gemini user-managed memory store.
## References

What it is: The external materials AI retrieves and uses to shape output, made visible and manageable.
Three layouts:
Key design guidance:
Real examples: ChatGPT DeepResearch side panel, Perplexity inline citations + sources drawer, Notion hidden reference drawer, Glean nested sources in chat, Dia references hidden panel.
## Sample Response

What it is: A full-quality, lightweight proof-of-concept output before committing to the full, costly run.
Context-appropriate samples: A single row in a table, a 30-second audio clip, a thumbnail image, a short paragraph before a full draft.
Key distinction from Draft Mode: A sample is full-quality on a small subset (confirms intent); Draft Mode is reduced-quality on the full scope (reduces cost).
Key design guidance:
Real examples: Notion's "try on this view" before full database autofill, Zapier single-record test before automation runs at scale.
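The sample-versus-draft distinction can be made concrete as two request configurations along two axes, scope and quality. The field names here are illustrative assumptions, not any product's schema:

```python
from dataclasses import dataclass


@dataclass
class GenerationRequest:
    scope: float    # fraction of the full job covered (1.0 = everything)
    quality: float  # fraction of final fidelity (1.0 = full quality)


def sample_request() -> GenerationRequest:
    """Sample Response: full quality on a small subset -- confirms intent."""
    return GenerationRequest(scope=0.05, quality=1.0)


def draft_request() -> GenerationRequest:
    """Draft Mode: reduced quality over the full scope -- reduces cost."""
    return GenerationRequest(scope=1.0, quality=0.3)
```

Seen this way, the two patterns are orthogonal: a product can offer both, letting users confirm intent with a sample and then iterate cheaply in draft before the one full-quality, full-scope run.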
## Shared Vision

What it is: Ambient affordances that let users passively monitor and intervene in AI activity without disrupting its flow.
Physical world analogies: Tesla Autopilot LED indicators, GM Super Cruise steering wheel colors — subtle cues about AI state that don't demand attention.
Key design guidance:
Real examples: Perplexity Comet tab glow when AI is active, ChatGPT Operator Mode live browser panel inside the conversation, Zapier human-in-the-loop workflow approval steps, Relay approval step types.
## Stream of Thought

What it is: The visible trace of how the AI moved from input to answer — plans formed, tools called, decisions made.
Three expressions:
Key design guidance:
Real examples: ChatGPT inline reasoning trace, Perplexity steps tab above results, Lovable real-time action log, V0 inline logic then left-drawer build progress.
## Variations

What it is: Multiple permutations of the AI's output for the user to compare and choose from.
Three forms:
Key design guidance:
Real examples: Adobe Firefly grid of variants with dropdown actions, Copy.ai inline variant selector, Writer.com preset transformations, FloraFauna canvas-based branching variations.
## Verification

What it is: A required human approval step before the AI takes an action with meaningful negative consequences.
When verification is warranted (potential for real harm):
When verification is not warranted: Simple, low-risk, easily reversible tasks (running a search, drafting a message). Verification there creates prompt fatigue and becomes meaningless.
Types of verification:
Key design guidance:
Real examples: Cofounder agent "Always Ask" toggle with prominent red warning when disabled, Notion inline verification for data overwrites, Chronicle outline approval before presentation build, Replit action plan confirmation before build, Dovetail insight verification before research synthesis.
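The warranted/not-warranted split above amounts to a risk-tiered gate. A minimal sketch, where the `always_ask` flag is modeled on the kind of user toggle described in the Cofounder example (the names are assumptions for illustration):

```python
from enum import Enum


class Risk(Enum):
    LOW = "low"        # reversible, cheap: run without asking
    MEDIUM = "medium"  # notable cost or side effects: ask if the user wants it
    HIGH = "high"      # destructive or irreversible: always ask


def needs_verification(risk: Risk, always_ask: bool = True) -> bool:
    """Decide whether to pause for human approval before acting.

    Even with the always_ask toggle disabled, high-risk actions still
    require approval -- the toggle trades away prompts, not safety.
    """
    if risk is Risk.HIGH:
        return True
    if risk is Risk.MEDIUM:
        return always_ask
    return False  # gating low-risk actions only creates prompt fatigue
```

The asymmetry is deliberate: the user can opt out of medium-risk prompts, but never out of high-risk ones, and low-risk actions are never gated at all -- which is exactly the calibration the prompt-fatigue warning below argues for.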
Governors work together. Common pairings:
The temptation is to add Governors everywhere. Resist it. Every confirmation adds friction, and prompt fatigue causes users to click through approvals reflexively — defeating their purpose.
Calibrate with two questions:
Apply the AI Identifiers framework to design or audit the distinct, brand-level qualities that define how an AI presents itself across a product. Use this skill whenever someone is designing or reviewing the visual, verbal, or behavioral identity of an AI — including questions like "what should we call our AI", "how should our AI look", "what color should we use for AI features", "how do we make our AI feel distinct", "what icons should represent AI actions", "how do we give our AI a personality", "should our AI have an avatar", or any request about making an AI feel coherent, recognizable, and on-brand. Also trigger when the user is building a new AI feature and hasn't yet thought about how it should present itself — proactively raising identifiers as a design consideration is part of this skill's job.
Design and implement AI input patterns for products. Use this skill whenever the user wants to add an AI-powered input mechanism to their product, improve how users interact with AI features, decide which input pattern fits a use case, or audit existing AI input UX. Trigger on phrases like "how should users prompt this", "add AI input to", "let users control the AI with", "what input pattern should I use", "design an AI prompt experience", "how do I let users fill fields with AI", "add a regenerate button", "inline AI actions", or any request about how users should interact with or direct AI in the product. Always use this skill before designing or recommending any AI interaction surface.
Apply AI Trust Builder design patterns to give users confidence that an AI product's results are ethical, accurate, and trustworthy. Use this skill whenever a designer, PM, or developer wants to make their AI product feel safer, more transparent, or more accountable. Trigger on: "make users feel safe", "add a disclaimer", "handle user data", "label AI-generated content", "privacy mode", "disclose AI is being used", "watermark AI outputs", "make the AI more transparent", "audit trail for AI", "user consent for recording", or any request touching AI accountability, privacy, explainability, or honest representation of what AI is doing. Also use when auditing an existing AI product for trust signals or when building new AI features into a non-AI-native product. Covers seven patterns: Caveat, Consent, Data Ownership, Disclosure, Footprints, Incognito Mode, and Watermark.
My name is Tommy. I'm a product designer and developer from Copenhagen, Denmark.