The framework applies once you have at least 40 active users to survey. It converts a vague concept (product-market fit) into a measurable score and a five-step loop you can run quarterly to push that score upward.
How to apply the Superhuman PMF Engine
1. Send the survey: “How would you feel if you could no longer use this product?”
   The core Sean Ellis question. Three answer options: very disappointed, somewhat disappointed, not disappointed. Send it to active users (not signups, not trialists). The percentage who say “very disappointed” is your PMF score. Above 40% is the rough heuristic for product-market fit.
2. Segment the respondents
   Split your respondents into two groups: those who said “very disappointed” (your high-expectations customers, your fans) and those who said “somewhat disappointed” (your on-the-fence users, the conversion opportunity). Ignore the “not disappointed” group entirely. They are not your customer; chasing them dilutes your roadmap.
3. Profile your high-expectations customers
   Look at the “very disappointed” group. What do they have in common? Industry, role, team size, use case? This is your real ICP. Vohra discovered Superhuman’s was tech founders, executives, and managers who lived in their inbox. Once you know who your fans are, you can stop building for everyone else.
4. Identify what your fans love and what blocks the on-the-fence group
   Two follow-up questions in the survey. To everyone: “What is the main benefit you receive from this product?” To the somewhat-disappointed group: “What would you need to upgrade your answer to very disappointed?” The first answer tells you what to double down on; the second tells you what to fix. This is your product roadmap.
5. Build the roadmap: half doubling-down, half fixing
   Vohra’s exact split for Superhuman: roughly half the engineering effort went into deepening the things fans loved (speed, keyboard shortcuts, AI assistance), and roughly half went into fixing the blockers cited by on-the-fence users (mobile, calendar integration, attachments). Re-run the survey every quarter. The PMF score should rise. If it does not, your roadmap was wrong.
The problem the framework solves
“Product-market fit” is the most-cited concept in startup advice and the worst-defined. Most founders know they want it. Few can tell you whether they have it. Almost none can tell you whether they are getting closer to it month over month.
Marc Andreessen’s original 2007 framing was felt, not measured: “you can always feel product-market fit when it’s happening.” That is a great quote and a terrible operating tool. You cannot run a quarterly planning meeting on a feeling.
Rahul Vohra’s contribution, published in First Round Review in 2018, was to convert PMF into a number you can measure, segment, and engineer. Built on Sean Ellis’s “very disappointed” survey question (originally proposed in 2009), the Superhuman PMF Engine is now the canonical operational measure of product-market fit in modern SaaS.
The core question
Send a one-question survey to your active users:
“How would you feel if you could no longer use this product?”
- Very disappointed
- Somewhat disappointed
- Not disappointed (it isn’t really useful)
The percentage who answer “very disappointed” is your PMF score. Sean Ellis’s original heuristic: above 40% suggests product-market fit; below suggests you don’t yet have it.
That is a heuristic, not a law. Slack hit ~50%. Superhuman hit ~58% after running the engine. Some perfectly good companies plateau at 35%. Use the number as direction, not destiny.
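The arithmetic is deliberately simple. Here is a minimal sketch of the score calculation; the response labels and sample data are illustrative assumptions, not taken from Vohra’s write-up.

```python
from collections import Counter

# Illustrative responses; labels assumed to match the three survey options.
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
]

counts = Counter(responses)
pmf_score = counts["very disappointed"] / len(responses)
print(f"PMF score: {pmf_score:.0%}")  # 50% here, above the 40% heuristic
```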
What “active user” means
The single most common mistake is surveying the wrong people. Survey signups, trialists, or one-time users and your score will be artificially low. Survey only your evangelists and it will be artificially high.
Vohra’s working definition: active = completed the core product action in the last 14-28 days. For an email client, that means actively reading and sending email. For a project tool, actively creating or updating items. Pick the action that defines real usage of YOUR product, then survey only the people who did it recently.
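As a sketch of that filter, assume a simple event log of (user, timestamp) pairs for the core action; the data shape and names here are hypothetical, not from the original article.

```python
from datetime import datetime, timedelta

def active_users(core_action_events, now, window_days=28):
    """Users who completed the core action within the survey window."""
    cutoff = now - timedelta(days=window_days)
    return {user_id for user_id, ts in core_action_events if ts >= cutoff}

# Hypothetical event log: one row per completed core action.
events = [
    ("u1", datetime(2024, 5, 28)),
    ("u2", datetime(2024, 3, 1)),   # too long ago: not active, don't survey
    ("u3", datetime(2024, 6, 2)),
]
print(active_users(events, now=datetime(2024, 6, 10)))  # {'u1', 'u3'}
```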
The three segments
After running the survey, you have three groups:
| Segment | What they tell you | What to do |
|---|---|---|
| Very disappointed | These are your high-expectations customers, your fans | Profile them. They are your real ICP. |
| Somewhat disappointed | On the fence. Could be converted. | Read their open-text feedback. They are your roadmap. |
| Not disappointed | Not your customer. | Ignore. Building for them dilutes the experience for your fans. |
The third group is the trap. Founders see those answers and think “we need to win them over.” Don’t. Vohra is explicit: those users are not your market. Trying to keep them pulls focus away from your fans’ experience and lowers the overall score.
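A minimal segmentation pass over the raw responses might look like the following; the record fields are assumed, and the point is structural: two segments get attention, one gets dropped.

```python
from collections import defaultdict

# Hypothetical respondent records: the answer plus open-text follow-ups.
respondents = [
    {"answer": "very disappointed", "benefit": "speed"},
    {"answer": "somewhat disappointed", "blocker": "no mobile app"},
    {"answer": "not disappointed"},
]

segments = defaultdict(list)
for r in respondents:
    segments[r["answer"]].append(r)

fans = segments["very disappointed"]              # profile these: your real ICP
on_the_fence = segments["somewhat disappointed"]  # their blockers are the roadmap
# segments["not disappointed"] is deliberately left alone
```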
Profile your fans
Look at everyone who said “very disappointed” and find the common pattern. For Superhuman it was: tech founders, executives, and managers who live in their inbox. Not “everyone who uses email.” A specific shape of person whose workflow makes Superhuman’s speed obsessions actually pay off.
Ask:
- What industry are they in?
- What role?
- What team size?
- What use case do they share?
- What did they switch from?
- How did they find you?
The answers narrow your ICP from a vague guess to a concrete profile. Now you know who to acquire more of, and who to ignore.
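One way to surface the common pattern is a simple frequency count per attribute across the fan segment; the attribute names and values below are hypothetical.

```python
from collections import Counter

# Hypothetical profiles of the "very disappointed" segment.
fans = [
    {"role": "founder", "industry": "tech", "team_size": "1-10"},
    {"role": "executive", "industry": "tech", "team_size": "11-50"},
    {"role": "founder", "industry": "tech", "team_size": "1-10"},
]

for attribute in ("role", "industry", "team_size"):
    tally = Counter(f[attribute] for f in fans)
    print(attribute, tally.most_common(2))
# Dominant values across attributes sketch the concrete shape of your ICP.
```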
Build the roadmap: half/half
Two follow-up questions in the same survey:
- To everyone: “What is the main benefit you receive from this product?”
- To the somewhat-disappointed group: “What would you need to upgrade your answer to very disappointed?”
The first answer tells you what to double down on. (For Superhuman: speed, keyboard shortcuts, AI summaries.) The second tells you what to fix. (For Superhuman, at the time: mobile, calendar, attachments.)
Vohra’s split: roughly 50% of engineering effort on deepening the loved features, 50% on closing the on-the-fence blockers. Re-run the survey every quarter. The PMF score should rise. If it does not, the roadmap is wrong; iterate.
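Tracking the loop over time can be as simple as the check below; the quarterly numbers are invented for illustration.

```python
# Illustrative quarterly PMF scores after each run of the engine.
quarterly_scores = {"Q1": 0.33, "Q2": 0.38, "Q3": 0.47}

scores = list(quarterly_scores.values())
rising = all(later >= earlier for earlier, later in zip(scores, scores[1:]))

if rising:
    print("Score rising: the half/half roadmap is pointed at the right things.")
else:
    print("Score stalled or fell: revisit what fans love and what blocks the rest.")
```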
Why “very disappointed” beats NPS
NPS asks would-you-recommend. People can recommend a product they barely use. They cannot honestly say they would be very disappointed to lose a product they do not depend on. The PMF question demands the user has integrated the product into a workflow they would mourn losing. That is a structurally stronger signal than recommendation.
This is also why PMF score is harder to game. You can manufacture an NPS spike with a great support interaction. You cannot manufacture “very disappointed” without actually being load-bearing in the user’s day.
Common mistakes
1. Surveying the wrong audience. Active users only. Not signups, not trialists, not churned users.
2. Treating 40% as binary. It is a directional heuristic: some excellent products plateau at 35%, and some weak products spike to 45% in narrow segments. Movement matters more than the absolute number.
3. Optimizing for the not-disappointed group. They are not your customer. Building for them dilutes the experience for your fans.
4. Running the survey once. Quarterly cadence or you cannot tell whether you are getting closer or further from PMF.
5. Confusing PMF score with NPS. Different questions, different signals. Use both, but don’t substitute.
ShipFit and the Superhuman PMF Engine
ShipFit treats the Superhuman PMF Engine as the canonical post-launch PMF measure. Stage 7 (Will They Pay?) defines pre-launch demand proof; once you have 40+ active users, the Engine becomes the loop you run to confirm whether early demand is hardening into PMF or leaking out the back. The [ICP](/glossary/icp) definition you locked at Stage 2 is what you segment the survey responses against; refining it for the “very disappointed” cohort is how you sharpen positioning over time. Sahil Lavingia’s framing applies here too: PMF is felt by a specific buyer segment, not the whole user base.
You don’t get PMF by accident. You get it by measuring, segmenting, and pointing roadmap effort at the things that move the score.
Further reading
- Rahul Vohra, “How Superhuman Built an Engine to Find Product/Market Fit”, First Round Review, 2018. The original article. ~30 min read, fully worked example.
- Sean Ellis, “The startup pyramid”, 2009. Where the “very disappointed” question first appeared.
- Product-Market Fit glossary entry. Short definition + Andreessen’s 2007 framing.
- Lean Startup validation framework. Complementary discipline for validated learning.
- Buyer Persona Canvas. What to do with the high-expectations customer profile once you have it.