Question

How do you find product-market fit?

TL;DR

Use Rahul Vohra's Superhuman PMF Engine. (1) Survey active users with the Sean Ellis question: 'How would you feel if you could no longer use this product?' (2) Segment respondents into 'very disappointed' / 'somewhat disappointed' / 'not disappointed'. (3) Profile your fans (the very disappointed) to find your real ICP. (4) Build a roadmap that is half doubling down on what fans love and half closing the on-the-fence blockers. (5) Re-run quarterly. The score should rise.

The fast version

PMF doesn’t appear by accident. You engineer it by running the Superhuman PMF Engine, a four-step quarterly loop:

  1. Survey active users with one question: “How would you feel if you could no longer use this product?” Three options: very disappointed / somewhat disappointed / not disappointed.
  2. Segment respondents. The “very disappointed” are your fans (your real ICP). The “somewhat disappointed” are the conversion opportunity. The “not disappointed” are not your customer; ignore them.
  3. Profile your fans. What do they have in common? Industry, role, team size, use case? That’s your real ICP, narrower than your stated one.
  4. Build the half-and-half roadmap. Roughly 50% of engineering effort on deepening features your fans already love (you ask them: “what’s the main benefit?”). Roughly 50% on closing the on-the-fence blockers (you ask them: “what would you need to upgrade to ‘very disappointed’?”).

Re-run quarterly. The percentage of active users who answer “very disappointed” is your PMF score. 40%+ is the rough heuristic for likely PMF.
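The scoring step is simple enough to sketch in a few lines. This is an illustrative calculation only; the answer strings and sample counts are assumptions, not ShipFit or Superhuman internals:

```python
from collections import Counter

def pmf_score(responses):
    """PMF score = share of surveyed active users who answered
    'very disappointed' to the Sean Ellis question."""
    counts = Counter(responses)
    total = sum(counts.values())
    return counts["very disappointed"] / total if total else 0.0

# Hypothetical survey of 100 active users.
responses = (
    ["very disappointed"] * 44
    + ["somewhat disappointed"] * 38
    + ["not disappointed"] * 18
)
print(f"PMF score: {pmf_score(responses):.0%}")  # 44%, above the 40% heuristic
```

Note that the denominator is everyone surveyed, including the not-disappointed group you otherwise ignore; shrinking the survey to fans only would inflate the score.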

Why “very disappointed” beats every other PMF metric

NPS asks would-you-recommend. People can recommend a product they barely use, but they cannot honestly say they would be very disappointed to lose a product they don’t depend on. The Sean Ellis question requires that the user has integrated the product into a workflow they would mourn losing. That’s a structurally stronger signal than a recommendation.

This is also why the PMF score is harder to game. You can manufacture an NPS spike with a great support interaction. You cannot manufacture “very disappointed” without actually being load-bearing in the user’s day.

Common mistakes

1. Surveying the wrong audience. Active users only. Not signups, not trialists, not churned users. Active = completed the core product action in the last 14-28 days.

2. Treating 40% as binary. It’s a directional heuristic. Movement matters more than the absolute number.

3. Optimizing for the not-disappointed group. They are not your customer. Building for them dilutes the experience for your fans and lowers your overall score.

4. Running the survey once. Quarterly cadence, or you cannot tell whether you are getting closer to PMF or further away.

5. Confusing PMF score with NPS. Different questions, different signals. Use both, but don’t substitute.
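Mistake 1’s “active user” window is easy to operationalize. A minimal sketch, assuming you track each user’s last core-action date (the data shape here is hypothetical):

```python
from datetime import date, timedelta

def active_users(last_core_action, today, window_days=28):
    """Keep only users whose last core product action falls inside
    the survey window (14-28 days is the rough guide)."""
    cutoff = today - timedelta(days=window_days)
    return [user for user, last in last_core_action.items() if last >= cutoff]

last_core_action = {
    "ana": date(2024, 5, 30),  # recent core action -> survey her
    "bo": date(2024, 1, 2),    # lapsed -> exclude (and don't survey churned users)
}
print(active_users(last_core_action, today=date(2024, 6, 1)))  # ['ana']
```

Trialists who never completed the core action simply never enter this dictionary, which enforces the “active users only” rule by construction.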

How ShipFit relates

ShipFit’s 9-step playbook is structured to maximize the probability of finding PMF before you commit engineering resources. Stages 1-4 identify a buyer segment with a real, painful problem and a defensible angle. Stages 5-7 scope the smallest product that could plausibly hit PMF for that segment. Stages 8-9 take the validated package and ship it.

After launch, the Superhuman PMF Engine is the canonical measurement loop. ShipFit’s stage 7 (Will They Pay?) gives you the pre-launch behavioral evidence that increases your PMF odds; the engine itself runs after you have 40+ active users.

Frequently asked questions

How long does it take to find PMF?
Median time from launch to claimed PMF in successful companies is 12-24 months. Rapid PMF (under 6 months) is rare and usually indicates lucky market timing. Slow PMF (3+ years) is also possible, especially in B2B with long sales cycles. What matters more than the absolute timeline is whether your PMF score is rising quarter-over-quarter. If it's flat or falling, you don't have a timeline problem; you have a fit problem.
Can I find PMF before launch?
Pre-launch you can find evidence of *probable* PMF via behavioral commitments (paid pre-orders, signed LOIs, Gold-tier Fake Door conversions). True PMF is post-launch because the Sean Ellis test requires real users who could lose the product. Pre-launch validation reduces the risk of NOT finding PMF; it doesn't substitute for the post-launch measurement.
What if my PMF score is below 40%?
Don't panic; most products are below 40% for their first year. Use the framework: profile your fans (whoever said 'very disappointed') to find your real ICP. Read the open-text from the somewhat-disappointed group to see what blocks them. Build a roadmap that doubles down on fan benefits and fixes the on-the-fence blockers. Re-run the survey quarterly. The score should rise.
Should I survey churned users too?
No. Survey active users only (defined as users who completed the core action in the last 14-28 days). Churned users have already left; their answers don't reflect current usage and dilute the score. The Sean Ellis test is specifically about the people who would lose something they currently use. Churned users don't have that loss aversion to anchor against.
Is hitting 40% enough?
It's the threshold for likely PMF; it's not the finish line. Teams that stop running the engine after the first positive read tend to plateau. The engine is a quarterly loop, not a one-time measurement. Keep measuring; keep moving the score up. Superhuman went from 22% to 58% running this loop continuously.