Prompt library

AI prompts for product managers, founders, and researchers

Battle-tested prompts for Claude, ChatGPT, Gemini, or any modern LLM. Every one has a specific job — validate an idea, run customer discovery, analyze a PMF survey, ask better interview follow-ups, or make sense of hundreds of open-ended responses. Copy, paste, replace the brackets, run.

Why we publish these: AskDeeper is an AI research platform. We use these prompts internally. Sharing them is cheap for us and genuinely useful for you — no LLM can talk to your users, so there is no conflict.
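
Prefer to script it instead of pasting into a chat window? Here is a minimal Python sketch of the loop: fill the bracketed placeholders, then send the prompt to an LLM API. It assumes the Anthropic Python SDK and an ANTHROPIC_API_KEY in your environment; the model name and the example values are illustrative, not a recommendation.

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from env

TEMPLATE = """You are a senior venture analyst. Find the fastest way to disprove this idea.

I'm considering building: {idea}
My target customer: {segment}
My key assumption: {assumption}
"""

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative; use whatever model is current
    max_tokens=2000,
    messages=[{"role": "user", "content": TEMPLATE.format(
        idea="An async user-research tool for seed-stage founders",
        segment="Solo technical founders, pre-seed to seed",
        assumption="Founders skip interviews because scheduling is painful",
    )}],
)
print(response.content[0].text)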

Prompt 01

Validate a startup idea

Turns your idea into the three cheapest disconfirming experiments plus five Mom-Test-style customer questions. Finds the fastest way to prove your idea wrong.

When to use: When you have a hypothesis but no proof yet. Use it before you raise, before you build, or before you commit to a segment.
The prompt
You are a senior venture analyst who has reviewed thousands of startup ideas at YC, a16z, and as a solo founder. Your job is NOT to encourage — your job is to find the fastest way to disprove this idea.

I'm considering building: [PRODUCT/IDEA in 1–3 sentences]

My target customer: [SEGMENT — be specific about role, company size, stage]

My key assumption: [What must be true for this to work]

Give me:

1. THE CORE ASSUMPTION RESTATEMENT
   Rewrite my core hypothesis in the form "X people will do Y because Z" — founders often hide their real assumption behind jargon.

2. THREE CHEAPEST DISCONFIRMING EXPERIMENTS
   For each: what to do today (not this week), what a "fails my hypothesis" signal looks like, and total time cost. Do not suggest "build an MVP" — that's not cheap.

3. FIVE CUSTOMER INTERVIEW QUESTIONS (MOM TEST STYLE)
   Questions about past behavior, not future intent. Each question should have a failure mode — what a "this hypothesis is wrong" answer sounds like.

4. FIVE SIGNS I'M FOOLING MYSELF
   The specific emotional or confirmation-bias patterns you expect me to fall into given this idea. Be blunt.

5. ONE REASON THIS IDEA COULD ACTUALLY WORK
   Steelman it — only after the disconfirming exercise. One sentence.

Rules:
- No "it depends" answers. Commit to a specific recommendation.
- If my idea pattern-matches a common graveyard (e.g., "AI for X that doesn't need AI"), say so directly.
- Do not suggest raising money or building a "waitlist landing page" as validation. Neither validates demand.

Why this prompt works

  • Role assignment is hostile, not helpful — prevents the LLM from being sycophantic about your idea.
  • Forces a specific hypothesis restatement — exposes vague thinking.
  • Asks for disconfirming experiments, not confirming ones — most founders default to the opposite.
  • Mom Test anchoring in section 3 prevents "would you buy this?" hypotheticals.
  • The final steelman is placed AFTER the skeptic exercise so it does not poison the critique.
Prompt 02

Run a customer discovery interview

Generates a 15-question interview guide organized by context, behavior, pain, alternatives, and willingness to pay. Every question tagged with why it works.

When to use: Before you build anything. When you have a segment and a problem hypothesis, and you need to talk to 10–20 people.
The prompt
You are a senior product researcher who has run 1000+ customer discovery interviews for seed-stage startups. You follow Rob Fitzpatrick's Mom Test principles religiously — past behavior, not future intent.

PRODUCT CONTEXT:
- What I'm building: [1-sentence description]
- My target segment: [be specific — role, company size, stage, geography if relevant]
- My current hypothesis: [the problem I think they have]
- My stage: [pre-idea / prototype / beta / launched]

Generate a customer discovery interview guide with:

1. 2-MINUTE OPENING SCRIPT (exact words)
   Include permission-to-record phrasing, expectation-setting, and a clear signal that this is NOT a sales call.

2. 15 QUESTIONS ACROSS 5 CATEGORIES
   - Context (3): Who, what, when — understand their situation
   - Behavior (4): The last time they did [relevant task] — past, specific, factual
   - Pain (3): Where is the friction — avoid "are you frustrated by"
   - Alternatives (3): What they have tried, why they abandoned it
   - Willingness to pay (2): Anchor on behavior, not hypothesis

   For EACH question: (a) the question itself, (b) a one-line "why this works".

3. THREE RED FLAGS DURING THE INTERVIEW
   Signs that the respondent is telling me what I want to hear, not the truth.

4. KEY DISCONFIRMING EVIDENCE
   The three answers that, if I hear them, should make me pivot — not double down.

Rules:
- No yes/no questions. Every question is "walk me through" or "tell me about".
- No hypothetical "would you pay for X". Ask about past payments.
- Questions must work for ASYNC interviews (typed responses), not just live calls.

Why this prompt works

  • Explicit Mom Test anchoring keeps the model from drifting into hypothetical or leading questions.
  • The 5-category split forces balance — founders over-index on pain and under-ask about alternatives and willingness to pay.
  • The "async" constraint in the rules prevents the output from assuming a live call, which matters if you use AskDeeper or any async research tool.
  • Asking for disconfirming evidence up front makes it much harder to read the transcripts later through a confirmation-bias lens.
Prompt 03

Analyze a PMF survey (Sean Ellis 40% test)

Takes your Sean Ellis survey results + open-ended responses and produces segment clustering, action plans, and a contrarian read. Goes beyond the 40% number.

When to use: After you have 40+ active users and have run the "how disappointed would you be" survey. Especially useful when the result is ambiguous (20–40% disappointed).
The prompt
You are a senior growth analyst who helped Superhuman, Dropbox, and LogMeIn interpret their Sean Ellis 40% test results. You are famous for saying "40% is the start of the analysis, not the answer."

DATA:
I ran the Sean Ellis PMF survey on [N] of my active users who [usage criteria].

Question 1 response breakdown (How disappointed would you be without [product]?):
- Very disappointed: [X]%
- Somewhat disappointed: [Y]%
- Not disappointed: [Z]%

[Paste selected open-ended responses from the "very disappointed" cohort — at least 10]

ANALYSIS I NEED:

1. VERDICT
   Is this PMF, pre-PMF, or no-PMF signal? State confidence level (high / medium / low).

2. SEGMENT CLUSTERING
   Group the "very disappointed" respondents by the JOB they are trying to do. Name each segment in 3–4 words and give % of cohort.

3. THE "WHY" BEHIND THE 40% RULE
   Read the open-ended "main benefit" responses. What specific benefit do they cite MOST? Name the top 3 themes with representative quotes.

4. ACTION PLAN BY SEGMENT
   For the TOP segment by size AND the TOP segment by enthusiasm (might be different):
   - Positioning copy (1 line)
   - Channel to double down on (and why)
   - Feature to NOT build (likely distractions)

5. THE CONTRARIAN READ
   What would a skeptic say these responses do NOT prove? Be honest.

Rules:
- Do not sugar-coat. If the signal is weak, say so directly.
- If under 40%, do not just recommend "talk to more users" — identify which segment to concentrate on.
- Name patterns explicitly. Do not hide behind "there is some evidence of".

Why this prompt works

  • The role assignment ("40% is the start, not the answer") forces the LLM past the naive interpretation most blog posts give.
  • Segment-by-JTBD analysis catches the common mistake of treating all "very disappointed" users as a single cohort — they rarely are.
  • The contrarian read at the end is the most underrated part. It surfaces where the data is not strong enough to act on.
  • Separating "top by size" from "top by enthusiasm" prevents chasing a loud minority or ignoring a quiet majority.
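
Before you paste anything into the prompt, the headline arithmetic is worth sanity-checking yourself. A small Python sketch: the 40% line is the classic Sean Ellis benchmark, but the lower cutoffs here are illustrative, not a standard.

def ellis_verdict(very: int, somewhat: int, not_disappointed: int) -> str:
    """Rough read of Sean Ellis survey counts. The 40% threshold is the
    classic benchmark; the other cutoffs are illustrative."""
    total = very + somewhat + not_disappointed
    if total == 0:
        raise ValueError("no responses")
    pct = 100 * very / total
    if pct >= 40:
        return f"{pct:.0f}% very disappointed: PMF signal, now segment it"
    if pct >= 25:
        return f"{pct:.0f}% very disappointed: ambiguous, segment before scaling"
    return f"{pct:.0f}% very disappointed: weak signal, revisit the problem"

print(ellis_verdict(very=31, somewhat=38, not_disappointed=11))
# 39% very disappointed: ambiguous, segment before scaling

An ambiguous number like the one above is exactly the case where the segment clustering in this prompt earns its keep.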
Prompt 04

Generate better interview follow-up questions

Takes a respondent answer and generates 3 follow-ups that probe the specific, test for contradictions, and expand context. The "dig deeper" prompt.

When to use: Mid-interview, when someone gives you a vague or abstract answer. Or use it to prepare a probing guide in advance.
The prompt
You are the most insightful product researcher I know. Your superpower is asking follow-up questions that get people to tell you what they ACTUALLY mean, not what they first said.

CONTEXT:
I'm interviewing [segment] about [topic].
Their original answer: "[paste response here]"

Generate 3 follow-up questions that:

1. PROBE THE SPECIFIC
   Turn the abstract into concrete. If they said "it's slow", ask about the last time it was slow — what were they doing, what was the deadline, how did they feel in that moment.

2. TEST FOR CONTRADICTIONS
   Sometimes people say one thing and do another. Identify the implicit claim in their answer and ask about past behavior that would test it.

3. EXPAND THE CONTEXT
   One step back — what triggered the situation they're describing? What did they try BEFORE this came up?

For each follow-up:
- The exact question wording
- What I'm hoping to learn (1 line)
- What response would SURPRISE me (and thus update my model)

Rules:
- No "why do you feel that way" — too abstract, people rationalize.
- Dig for ONE specific episode, not patterns in general.
- If their answer is already very specific, push on the stakes: "what happened next?" or "what changed after that?"

Why this prompt works

  • The three-mode structure (specific / contradiction / context) covers most failure modes in interviews.
  • Asking for "what would SURPRISE me" is the secret sauce — it forces the LLM to generate questions that could update your mental model, not confirm it.
  • Banning "why do you feel that way" dodges the classic rationalization trap where respondents invent reasons post hoc.
Prompt 05

Analyze 100+ open-ended survey responses

Extracts theme clusters, unexpected signal, dissent patterns, and action-ranked recommendations from a pile of free-text responses. No hallucinated themes.

When to use: Whenever you have more open-ended responses than you can read in 20 minutes. Works well for NPS follow-ups, churn surveys, cancellation forms, open-ended PMF questions.
The prompt
You are a qualitative researcher who has coded 10,000+ interview transcripts and is allergic to "hallucinated themes." Your job is to extract what is ACTUALLY there, not what is convenient.

DATA:
[Paste 30–500 open-ended responses — one per line or as CSV]

CONTEXT:
- What I asked: [the exact question]
- Who responded: [segment description]
- What decision I'm trying to make: [e.g., "what feature to build next", "how to position this product"]

DELIVER:

1. THEME CLUSTERS
   Top 7–10 themes. For each:
   - Theme name (3–5 words)
   - % of responses that mention it
   - 3 representative verbatim quotes (unedited)
   - Sentiment: positive / negative / mixed / neutral

2. UNEXPECTED SIGNAL
   Themes that appear in <10% of responses but are STRUCTURALLY interesting (e.g., a specific use case, a competitor mentioned, a compliance concern). Flag these separately — they often matter more than majority patterns for product strategy.

3. DISSENT MAP
   Are there responses that directly contradict the majority view? Summarize the opposing camp.

4. ACTION-RANKED RECOMMENDATIONS
   Given my decision goal, rank the top 3 themes by: (a) size, (b) addressability, (c) differentiation.

5. WHAT'S MISSING
   What themes did you EXPECT to see that are absent? This is often the most useful insight.

Rules:
- Only use quotes from the actual data. No "a respondent might say..." paraphrasing.
- If a theme only appears in 2–3 responses, do not invent a category for it. Call it an outlier.
- Do not editorialize — extract patterns, do not advocate.

Why this prompt works

  • The "allergic to hallucinated themes" role assignment is the most important guard. LLMs will absolutely invent themes that feel plausible. Naming this constraint upfront cuts the failure rate significantly.
  • The "what is missing" section catches absence-of-signal — which is often the real insight. If nobody mentions the competitor you are worried about, that is data.
  • Action-ranked recommendations tied to YOUR decision goal stop the LLM from producing a generic "summary" and force prioritization.
  • Requiring verbatim quotes (not paraphrased) lets you trust-but-verify without re-reading all responses.
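
You can make that trust-but-verify step mechanical: check every quote the model returns against the raw responses. A small Python sketch, assuming one response per line; the normalization exists so line wrapping and quote marks don't produce false alarms.

import re

def _norm(s: str) -> str:
    # Collapse whitespace, strip surrounding quote marks, lowercase,
    # so trivial formatting differences don't count as "not verbatim".
    return re.sub(r"\s+", " ", s).strip(' "“”').lower()

def suspect_quotes(quotes: list[str], responses: list[str]) -> list[str]:
    """Return the quotes that appear in none of the raw responses.
    An empty list means every quote checks out."""
    normed = [_norm(r) for r in responses]
    return [q for q in quotes if not any(_norm(q) in r for r in normed)]

responses = [
    "It broke during our launch week and support never replied.",
    "Setup took five minutes, honestly great.",
]
print(suspect_quotes(["it broke during our launch week"], responses))
# [] means every quote was found verbatim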
FAQ

Questions people ask about these prompts

What are AI prompts for product managers?

AI prompts for product managers are reusable instructions you give to an LLM (Claude, ChatGPT) to get consistent, high-quality output for common research and product tasks — idea validation, customer discovery, survey analysis, and so on. A good prompt combines role assignment, structured output, explicit rules, and guardrails against common failure modes like sycophancy or hallucinated themes.

Do these prompts work with Claude, ChatGPT, Gemini, and other LLMs?

Yes. Every prompt here has been tested against Claude Sonnet 4.6, GPT-4, and Gemini 2.5. The structure (role, inputs, deliverable format, rules) is model-agnostic and works with Grok, DeepSeek, and Llama-based models too. Claude tends to follow the rules most strictly; ChatGPT is slightly more creative but more likely to soften critiques; Gemini is strong at structured output.

Are these the actual prompts AskDeeper uses internally?

These are adapted from the prompt engineering we use inside AskDeeper for async AI interviews and theme extraction. The library versions are tuned for solo use in any chat LLM. The in-product versions run against real respondent data with additional guardrails and tuned models.

How do I adapt a prompt to my specific product?

Replace the bracketed placeholders ([PRODUCT], [SEGMENT], [HYPOTHESIS]) with your own content. For the first run, err on the side of more specificity — "senior engineering managers at 500–2000-person SaaS companies" beats "tech users." If the model asks clarifying questions, that is usually a sign your input is too vague.
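
If you script the prompts, a quick mechanical check before you hit send helps too. The regex below just looks for bracketed tokens that start with a capital letter, which matches the placeholder style used throughout this library:

import re

def unfilled_placeholders(prompt: str) -> list[str]:
    # Placeholders here look like [PRODUCT] or [SEGMENT ...]:
    # a bracket, a capital letter, then anything up to the closing bracket.
    return re.findall(r"\[[A-Z][^\]]*\]", prompt)

prompt = "I'm considering building: [PRODUCT/IDEA in 1–3 sentences]"
print(unfilled_placeholders(prompt))  # ['[PRODUCT/IDEA in 1–3 sentences]']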

What is the difference between brainstorming with an LLM and running a real research study?

An LLM is great for planning, structuring, and analyzing — essentially the solo thinking work. It cannot talk to your users. Running a real study means 10–500 actual respondents answering your questions, with adaptive follow-ups, recruiting, and synthesis. Claude, ChatGPT, or Gemini can prepare you for that; AskDeeper runs the study itself, asynchronously.

LLMs plan. They can't interview your users.

Every prompt here is a solo exercise — you think, the LLM helps. For anything that requires real respondents, you still need to talk to them. AskDeeper runs async AI interviews with your actual users, using the same prompt-engineering principles you see in this library — Mom Test anchoring, adaptive follow-ups, verbatim quotes, and theme extraction guarded against hallucinated themes.