
Superhuman's PMF Engine: How Rahul Vohra Made Product-Market Fit a Number

Rahul Vohra's framework for measuring and engineering product-market fit. The 40% rule, the high-expectations customer, and the five-step loop to actually move the score.

Origin: Rahul Vohra, 'How Superhuman Built an Engine to Find Product/Market Fit', First Round Review, 2018. Builds on Sean Ellis's earlier work in 2009 on the 'very disappointed' survey question.
When to use

Once you have at least 40 active users you can survey. The framework converts a vague concept (product-market fit) into a measurable score and a five-step loop you can run quarterly to push it upward.

How to apply the Superhuman PMF Engine

  1. Send the survey: 'How would you feel if you could no longer use this product?'

    The core Sean Ellis question. Three answer options: very disappointed, somewhat disappointed, not disappointed. Send it to active users (not signups, not trialists). The percentage who say 'very disappointed' is your PMF score. Above 40% is the rough heuristic for product-market fit.

  2. Segment the respondents

    Split your respondents into two groups: those who said 'very disappointed' (your high-expectations customers, your fans) and those who said 'somewhat disappointed' (your on-the-fence users, the conversion opportunity). Ignore the 'not disappointed' group entirely. They are not your customer; chasing them dilutes your roadmap.

  3. Profile your high-expectations customers

    Look at the 'very disappointed' group. What do they have in common? Industry, role, team size, use case? This is your real ICP. Vohra discovered Superhuman's was tech founders, executives, and managers who lived in their inbox. Once you know who your fans are, you can stop building for everyone else.

  4. Identify what your fans love and what blocks the on-the-fence group

    Two follow-up questions in the survey. To everyone: 'What is the main benefit you receive from this product?' To the somewhat-disappointed group: 'What would you need to upgrade your answer to very disappointed?' The first answer tells you what to double down on; the second tells you what to fix. This is your product roadmap.

  5. Build the roadmap: half doubling-down, half fixing

    Vohra's exact split for Superhuman: roughly half the engineering effort went into deepening the things fans loved (speed, keyboard shortcuts, AI assistance), and roughly half went into fixing the blockers cited by on-the-fence users (mobile, calendar integration, attachments). Re-run the survey every quarter. The PMF score should rise. If it does not, your roadmap was wrong.
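The scoring and segmentation steps above can be sketched in a few lines of Python. This is a minimal illustration, not Superhuman's actual tooling; the answer strings and user names are made up:

```python
from collections import Counter

def pmf_score(responses):
    """Percentage of respondents answering 'very disappointed'
    to the Sean Ellis question -- the PMF score."""
    counts = Counter(responses)
    return 100 * counts["very disappointed"] / len(responses)

def segment(responses_by_user):
    """Split respondents into fans and on-the-fence users.
    'Not disappointed' respondents are deliberately dropped."""
    fans = [u for u, r in responses_by_user.items() if r == "very disappointed"]
    fence = [u for u, r in responses_by_user.items() if r == "somewhat disappointed"]
    return fans, fence

answers = {
    "ana": "very disappointed",
    "bo": "somewhat disappointed",
    "cy": "very disappointed",
    "di": "not disappointed",
    "ed": "very disappointed",
}
score = pmf_score(list(answers.values()))  # 3 of 5 -> 60.0, above the 40% bar
fans, fence = segment(answers)
print(f"PMF score: {score:.0f}%, fans: {fans}, on the fence: {fence}")
```

Profile the `fans` list to find your ICP; mine the `fence` group's open-text feedback for the roadmap.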

The problem the framework solves

“Product-market fit” is the most-cited concept in startup advice and the worst-defined. Most founders know they want it. Few can tell you whether they have it. Almost none can tell you whether they are getting closer to it month over month.

Marc Andreessen’s original 2007 framing was felt, not measured: “you can always feel product-market fit when it’s happening.” That is a great quote and a terrible operating tool. You cannot run a quarterly planning meeting on a feeling.

Rahul Vohra’s contribution, published in First Round Review in 2018, was to convert PMF into a number you can measure, segment, and engineer. Built on Sean Ellis’s “very disappointed” survey question (originally proposed in 2009), the Superhuman PMF Engine is now the canonical operational measure of product-market fit in modern SaaS.

The core question

Send a one-question survey to your active users:

“How would you feel if you could no longer use this product?”

  1. Very disappointed
  2. Somewhat disappointed
  3. Not disappointed (it isn’t really useful)

The percentage who answer “very disappointed” is your PMF score. Sean Ellis’s original heuristic: above 40% suggests product-market fit; below suggests you don’t yet have it.

That is a heuristic, not a law. Slack hit ~50%. Superhuman hit ~58% after running the engine. Some perfectly good companies plateau at 35%. Use the number as direction, not destiny.

What “active user” means

The single most common mistake is surveying the wrong people. Survey signups, trialists, or one-time users and your score will be artificially low. Survey only your evangelists and it will be artificially high.

Vohra’s working definition: active = completed the core product action in the last 14-28 days. For an email client, that means actively reading and sending email. For a project tool, actively creating or updating items. Pick the action that defines real usage of YOUR product, then survey only the people who did it recently.
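In practice, selecting the survey audience can be a simple recency check against each user's last core action. A sketch, where the 28-day window and the field names are assumptions to adapt to your product:

```python
from datetime import date, timedelta

def active_users(last_core_action, today=None, window_days=28):
    """Return users who completed the core action within the window.
    `last_core_action` maps user -> date of their last core action
    (reading/sending email, updating a project item, etc.)."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    return [u for u, d in last_core_action.items() if d >= cutoff]

usage = {
    "ana": date(2024, 5, 30),  # recent: gets the survey
    "bo": date(2024, 3, 1),    # stale: excluded, would dilute the score
    "cy": date(2024, 5, 10),
}
print(active_users(usage, today=date(2024, 6, 1)))  # ['ana', 'cy']
```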

The three segments

After running the survey, you have three groups:

  • Very disappointed: your high-expectations customers, your fans. Profile them; they are your real ICP.
  • Somewhat disappointed: on the fence, could be converted. Read their open-text feedback; they are your roadmap.
  • Not disappointed: not your customer. Ignore them; building for them dilutes the experience for your fans.

The third group is the trap. Founders see those answers and think “we need to win them over.” Don’t. Vohra is explicit: those users are not your market. Trying to keep them costs you focus on the fan experience and lowers the overall score.

Profile your fans

Look at everyone who said “very disappointed” and find the common pattern. For Superhuman it was: tech founders, executives, and managers who live in their inbox. Not “everyone who uses email.” A specific shape of person whose workflow makes Superhuman’s speed obsessions actually pay off.

Ask:

  • What industry are they in?
  • What role?
  • What team size?
  • What use case do they share?
  • What did they switch from?
  • How did they find you?

The answers narrow your ICP from a vague guess to a concrete profile. Now you know who to acquire more of, and who to ignore.

Build the roadmap: half/half

Two follow-up questions in the same survey:

  1. To everyone: “What is the main benefit you receive from this product?”
  2. To the somewhat-disappointed group: “What would you need to upgrade your answer to very disappointed?”

The first answer tells you what to double down on. (For Superhuman: speed, keyboard shortcuts, AI summaries.) The second tells you what to fix. (For Superhuman, at the time: mobile, calendar, attachments.)

Vohra’s split: roughly 50% of engineering effort on deepening the loved features, 50% on closing the on-the-fence blockers. Re-run the survey every quarter. The PMF score should rise. If it does not, the roadmap is wrong; iterate.
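The half/half allocation can be driven directly off the tallied follow-up answers. A sketch, assuming the free-text responses have already been coded into themes; the theme strings below are illustrative, not Superhuman's actual data:

```python
from collections import Counter

def split_roadmap(benefit_themes, blocker_themes, slots=10):
    """Fill half the roadmap slots with the most-cited benefits
    (double down) and half with the most-cited blockers (fix)."""
    half = slots // 2
    double_down = [t for t, _ in Counter(benefit_themes).most_common(half)]
    fix = [t for t, _ in Counter(blocker_themes).most_common(slots - half)]
    return double_down, fix

benefits = ["speed", "speed", "shortcuts", "speed", "shortcuts", "ai"]
blockers = ["mobile", "calendar", "mobile", "attachments", "mobile"]
double_down, fix = split_roadmap(benefits, blockers, slots=4)
print("double down on:", double_down)  # ['speed', 'shortcuts']
print("fix:", fix)                     # 'mobile' first, then the next most-cited blocker
```

Re-running the survey each quarter regenerates both lists, so the roadmap tracks the score instead of drifting from it.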

Why “very disappointed” beats NPS

NPS asks would-you-recommend. People can recommend a product they barely use. They cannot honestly say they would be very disappointed to lose a product they do not depend on. The PMF question demands the user has integrated the product into a workflow they would mourn losing. That is a structurally stronger signal than recommendation.

This is also why PMF score is harder to game. You can manufacture an NPS spike with a great support interaction. You cannot manufacture “very disappointed” without actually being load-bearing in the user’s day.

Common mistakes

1. Surveying the wrong audience. Active users only. Not signups, not trialists, not churned users.

2. Treating 40% as binary. It is a directional heuristic, not a law: some excellent products plateau at 35%, and weak products can spike to 45% in narrow segments. Movement over time matters more than the absolute number.

3. Optimizing for the not-disappointed group. They are not your customer. Building for them dilutes the experience for your fans.

4. Running the survey once. Quarterly cadence or you cannot tell whether you are getting closer or further from PMF.

5. Confusing PMF score with NPS. Different questions, different signals. Use both, but don’t substitute.

ShipFit and the Superhuman PMF Engine

ShipFit treats the Superhuman PMF Engine as the canonical post-launch PMF measure. Stage 7 (Will They Pay?) defines pre-launch demand proof; once you have 40+ active users, the Engine becomes the loop you run to confirm whether early demand is hardening into PMF or leaking out the back. The ICP definition you locked at Stage 2 is what you segment the survey responses against — refining it for the “very disappointed” cohort is how you sharpen positioning over time. Sahil Lavingia’s framing applies here: PMF is felt by a specific buyer segment, not the whole user base.

You don’t get PMF by accident. You get it by measuring, segmenting, and pointing roadmap effort at the things that move the score.



Part of a larger playbook

ShipFit runs 55 frameworks across 9 decision stages

Superhuman PMF Engine is one tool in a bigger toolkit. The full library covers market sizing, buyer discovery, MVP scoping, pricing, and launch.

Browse the full library at shipfit.ai/frameworks. A sample:

  • The Mom Test (Q3, Rob Fitzpatrick): validation question methodology, real interviews, not theater
  • Jobs-to-be-Done (Q2-Q4, Clayton Christensen): functional, social, and emotional jobs your product fulfills
  • 7 Powers (Q4, Hamilton Helmer): strategic moats: Scale, Network, Counter-positioning, Switching, Brand, Cornered Resource, Process
  • Van Westendorp PSM (Q6): four-question price sensitivity analysis without guessing
  • Blue Ocean Strategy (Q4, Kim & Mauborgne): ERRC framework: Eliminate, Reduce, Raise, Create
  • Fake Door Testing (Q7): pre-build behavioral validation with landing pages and apology modals

+ 49 more: TAM/SAM/SOM Analysis, Porter's Five Forces, Market Timing Analysis, Unit Economics (LTV/CAC)...

Frequently asked questions

What is Superhuman's PMF Engine?
A framework documented by Rahul Vohra in 2018 for measuring and engineering product-market fit. It uses Sean Ellis's 'very disappointed' survey question as the core metric, segments respondents into fans, on-the-fence users, and not-your-customer, then builds a roadmap that spends half its effort doubling down on what fans love and half fixing what blocks the on-the-fence group. Run quarterly, the loop should push the PMF score upward.
What is the 40% rule for PMF?
Sean Ellis's heuristic, popularized by Vohra: if 40% or more of your active users say they would be 'very disappointed' if they could no longer use your product, you have likely achieved product-market fit. Below 40%, you don't yet. The threshold is a directional signal, not a hard binary. What matters more is whether the number is rising over time as you ship.
Who do I survey for the PMF score?
Active users only. Define active as 'completed the core product action in the last 14 to 28 days.' Do not survey signups, trialists, churned users, or one-time users. Including them dilutes the signal. You want a clean read on people who are actually using the product and could lose it.
What's the difference between PMF score and NPS?
NPS asks 'how likely are you to recommend this product?' on a 0-10 scale. PMF score asks 'how would you feel if you could no longer use this product?' with three options. PMF is a stronger signal because it implies workflow integration, not just a positive moment. A user can recommend a product they barely use; a user cannot honestly say they'd be very disappointed to lose a product they don't depend on.
What if my PMF score is below 40%?
Don't panic. Use the framework to find out why. Profile your fans (whoever said 'very disappointed') to identify your real ICP. Then read the open-text responses from the somewhat-disappointed group to see what they need fixed. Build a roadmap that doubles down on fan benefits and fixes the on-the-fence blockers. Re-run quarterly. The score should rise.
Can the PMF Engine work for B2B?
Yes. Vohra built it for Superhuman, which is B2B-ish (sold to individuals at companies). It works equally well for B2B SaaS aimed at end-users. For pure enterprise B2B where the buyer is not the user, you need to survey the user (for product fit) AND the buyer (for budget fit) separately. Same framework, two surveys.
How does this differ from the Lean Startup's validation approach?
The Lean Startup defines validated learning conceptually but doesn't give you a single number to track. The Superhuman PMF Engine gives you a number (the 'very disappointed' percentage) and a quarterly loop to move it. They're complementary: Lean Startup is the discipline; Superhuman PMF Engine is the operationalization for measuring product-market fit specifically.
Related on ShipFit

  • Validate your business idea (master guide): the 9-step playbook from market verdict to ship-ready spec.
  • The Mom Test (framework): Rob Fitzpatrick's framework for customer interviews that generate real signal, not praise. Three rules, applied step-by-step, with examples.
  • Van Westendorp Price Sensitivity Meter (framework): four questions that surface a defensible price range for any product, how to run it, interpret results, and avoid the cheapest mistakes.
  • Market Research (spoke): most founder market research is a TAM slide that nobody believes. The numbers that actually matter are smaller, harder to defend, and tell you whether the market exists for the ten-customer version of your business.
  • Idea Validation (spoke): most founders confuse idea validation with idea-receiving-encouragement. Here's what real validation looks like, and the four methods that actually produce it.
  • CAC / LTV ratio calculator: does each customer make you money, or cost you money?
  • How do you validate a business idea? (Q&A): run nine framework-backed decisions in order before writing code: define the buyer, prove the pain is painful, name the winning angle, scope V1 to the smallest test of the hypothesis, get behavioral evidence (paid pre-orders, signed letters of intent, or credit cards on file from a Fake Door Test), then ship. Two to four weeks here saves six to nine months of building the wrong thing.

