Spoke 4 of 7

MVP Scope: How to Define V1 Without Falling for the Feature Trap

Most founders ship an MVP that's actually V1.3 with bugs. Real MVP scoping cuts ruthlessly until you can name the one hypothesis V1 proves, and ships a product that tests it.

The MVP that wasn’t

The most expensive mistake in startup history isn’t building the wrong product. It’s building the right product but at version 1.3 instead of version 1.

The founder who knows what to build but ships the polished, feature-complete, “ready” version of it 6 months late has lost the same battle as the founder who built the wrong thing. They’ve burned the runway, they’ve missed the market timing, and they’ve never tested the hypothesis that V1 was supposed to test.

This guide is the scoping discipline that prevents that.

The one rule of MVP scoping

For every feature you’re considering for V1, ask: “If we ship without this, does the hypothesis still get tested?”

If the answer is yes, the feature isn't a Must. It's a Should-Have or Could-Have. It belongs in V2 or V3, not V1.

If the answer is no (without this feature, you genuinely cannot tell whether buyers will pay), the feature is a Must. It ships in V1.

The trap is that founders feel everything is a Must because everything sounds useful. “Users will want X” is true for almost every X. The relevant question isn’t “would users want this?” It’s “does the absence of this prevent us from testing whether the business works?”
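The gate is mechanical enough to sketch as a filter. A minimal illustration, assuming a hypothetical wishlist where each feature is tagged with the answer to the one gate question ("if we ship without this, does the hypothesis still get tested?"):

```python
# Hypothetical wishlist. The boolean answers the gate question:
# "if we ship without this, does the hypothesis still get tested?"
wishlist = {
    "core workflow":      False,  # hypothesis untestable without it
    "checkout / payment": False,  # can't tell whether buyers pay without it
    "csv export":         True,   # useful, but the test still runs
    "admin dashboard":    True,
    "dark mode":          True,
}

# Must = everything the hypothesis cannot be tested without.
must = [f for f, testable_without in wishlist.items() if not testable_without]

# Everything else is Should/Could material for V2+.
later = [f for f, testable_without in wishlist.items() if testable_without]
```

Note that the filter never asks "would users want this?" — only whether the test survives the cut. That is the whole discipline in one boolean.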

The MoSCoW gate

Dai Clegg’s 1994 MoSCoW framework gives you four buckets:

  1. Must: V1 cannot ship without this.

     The hypothesis cannot be tested without this feature. For a typical SaaS MVP the Must list should hold 3-5 features. If you have 8+ Musts, you have already failed at MoSCoW; revisit each one with the "is this required to test the hypothesis?" gate.

  2. Should: V1 is meaningfully worse without this, but the hypothesis can still be tested.

     Items here are V1.5 candidates. Ship them if effort is low; otherwise hold for the next release.

  3. Could: Nice to have; doesn't affect the hypothesis test.

     Default to V2. The cost of building Could-haves in V1 isn't the build time; it's the dilution of focus on the Musts.

  4. Won't (the most overlooked bucket): Explicitly out of scope for V1.

     Naming what you are explicitly choosing not to build is half the discipline. Most teams skip this bucket and end up building Should-haves they could have skipped, then ship V1 late and confused.

The impact-effort matrix

Once you have a candidate Must list, score each feature on two axes:

  • Impact (1-10): how much does this feature move the hypothesis test?
  • Effort (1-10): how many engineer-days to build it?

Plot them:

  Effort \ Impact | High impact                              | Low impact
  Low effort      | V1 (build first)                         | Skip or V3
  High effort     | V1 only if hypothesis-blocking; else V2  | V4 / never

The “high effort, low impact” quadrant is where most MVPs go to die. Engineers love these features because they’re technically interesting. Founders sign off because the feature “felt important.” Six months later, V1 ships with a beautifully-architected feature that didn’t move the conversion needle.

The fix: be brutal about the impact score. If you can’t articulate in one sentence how the feature changes the hypothesis test, the impact is below 5.
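The matrix can be sketched as a small classifier. One assumption here: the article scores both axes 1-10 without naming a cutoff, so treating 5+ as "high" is illustrative, not canonical.

```python
def quadrant(impact: int, effort: int, hypothesis_blocking: bool = False) -> str:
    """Place a feature in the impact-effort 2x2.

    Scores are 1-10; the >= 5 threshold for "high" is an assumption.
    """
    high_impact = impact >= 5
    high_effort = effort >= 5
    if high_impact and not high_effort:
        return "V1 (build first)"
    if high_impact and high_effort:
        # High effort only earns a V1 slot if the hypothesis can't
        # be tested without it.
        return "V1" if hypothesis_blocking else "V2"
    if not high_impact and not high_effort:
        return "Skip or V3"
    return "V4 / never"   # high effort, low impact: where MVPs go to die
```

The `hypothesis_blocking` flag encodes the one escape hatch: an expensive feature gets into V1 only when the test literally cannot run without it.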

The ICE-scoring version

ICE scoring is the numerical version of the matrix:

ICE = Impact × Confidence × Ease

For each feature:

  • Impact: how much does this move the metric (1-10)
  • Confidence: how sure are you about the impact (1-10)
  • Ease: how easy to build (1-10, where 10 is easiest)

Multiply. Rank. Cut the bottom 60%. The remaining 40% is V1.

The gotcha: founders inflate Confidence for pet projects. Run ICE with a co-founder or a brutal advisor. If Confidence is 9 but you can’t name three pieces of evidence supporting it, the real Confidence is 4.
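The multiply-rank-cut procedure is a few lines of code. A sketch with hypothetical features and illustrative scores; the 40% keep-rate follows the rule above, rounded up so at least one feature survives:

```python
import math

# Hypothetical features: (Impact, Confidence, Ease), each 1-10.
features = {
    "one-click signup":  (9, 8, 7),
    "core editor":       (10, 9, 5),
    "team billing":      (6, 5, 3),
    "slack integration": (5, 4, 4),
    "dark mode":         (2, 6, 8),
}

def ice(scores):
    """ICE = Impact x Confidence x Ease."""
    impact, confidence, ease = scores
    return impact * confidence * ease

# Multiply. Rank. Cut the bottom 60%.
ranked = sorted(features, key=lambda f: ice(features[f]), reverse=True)
keep = ranked[: math.ceil(len(ranked) * 0.4)]
```

With these illustrative scores, `keep` holds the signup flow and the core editor; everything else lands in the V2 file. The ranking is only as honest as the Confidence column, which is exactly the gotcha above.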

What to polish vs. leave rough

V1 has finite polish budget. Spend it where the buyer’s first impression forms:

Polish:

  • Landing page (this is where conversion is won or lost)
  • Signup flow
  • The one core “aha moment” interaction
  • Pricing page

Leave rough (and the world won’t end):

  • Admin / settings pages
  • Edge-case error states
  • Integrations beyond the primary 1-2
  • Mobile responsiveness (if your buyer is desktop-first)
  • Onboarding email sequences beyond the first welcome

The mistake most founders make: they polish the admin dashboard their internal team uses and ship a confusing signup flow. The signup flow loses 60% of would-be converts on the first session. The admin dashboard nobody sees doesn’t matter.

Scope creep: three early warning signs

Once V1 scope is locked, watch for these:

  1. The Must list grows during the build. This is fatal. V1 scope should ONLY shrink (when you discover something is harder than estimated and you cut it), never grow. New great ideas go to a V2 file, not into V1.

  2. Sprint reviews include “we also did X.” Engineers building features that weren’t planned is a process problem. Catch it in the first sprint and don’t let it pattern-set.

  3. Launch date moves back by 1-2 weeks every 2 weeks. This is the death spiral. Each delay enables more “while we’re at it” additions, which cause more delays. Break the spiral by hard-locking the launch date and cutting features instead.

The Lean Startup connection

Eric Ries’s Lean Startup framework (2011) introduced the “build-measure-learn” loop. The MVP is the smallest build that enables one measure-learn cycle.

The two most-cited MVP examples in the book are deliberately tiny:

  • Dropbox. Drew Houston’s MVP was a 3-minute video explaining what the product would do. It tested whether the demand was real (it was; the beta waitlist exploded). No actual file-sync product was built for the MVP test.
  • Zappos. Nick Swinmurn’s MVP was buying shoes from local stores when orders came in. It tested whether people would buy shoes online (they would). No inventory was built for the MVP test.

Both validated the hypothesis at near-zero engineering cost. Both were embarrassingly small. Both worked.

The lesson isn’t to literally build a video as your MVP (the bar is higher now). The lesson is: name the hypothesis V1 tests, then strip everything that isn’t required to test it.

Common mistakes

1. Calling everything a Must. If your Must list has 12 items, you’re not scoping. You’re listing.

2. Polishing the wrong things. Polish the conversion path. Leave the admin tools rough.

3. No locked V1 scope. If V1 isn’t written on a wall before build starts, scope will creep. Lock it.

4. Building before defining the hypothesis. “What does V1 prove?” should have a one-sentence answer before any code gets written. If it doesn’t, you’re not building an MVP, you’re building a product hoping someone will buy it.

5. Treating the MVP as the eventual product. V1 tests the hypothesis. V2 is the product. Mixing the two means V1 is too ambitious and V2 never gets the focus it needs.

What ShipFit does at this stage

ShipFit Stage 5 (“What’s V1?”): MVP scope sorted into Differentiator (keep), Delight (cut first), and Operational buckets, with ICE-style scores and effort tags per feature.

Stage 5 of the 9-step playbook is What’s V1?. The output is your MVP scope, sorted into three buckets:

  Bucket         | What goes here                                                                                                            | Cut rule
  DIFFERENTIATOR | The 3-5 features that make the buyer pick you over the incumbent. “Why you pick us over the other guy. This is your bet.” | Keep all.
  DELIGHT        | Features users will love but you can’t afford yet. “Cut first.”                                                           | Most of these go to V2.
  OPERATIONAL    | The plumbing required to make the Differentiators work (auth, billing, basic admin).                                      | Keep the minimum.

Each feature carries an ICE-style score (0-10) plus an effort tag (S/M/L), so the cut decisions are visible instead of vibes. The “Cut first” instruction on Delight is deliberate: it’s the single most-resisted feedback ShipFit gives, and the single most-validated way to ship in 4-8 weeks instead of 6 months.

Under the hood:

  1. Your feature wishlist gets parsed and each feature dropped into one of the three buckets.
  2. MoSCoW and ICE lenses applied to each feature against the V1 hypothesis you wrote.
  3. Output is three MVP packages — Lean, Balanced, Full — so you can see the scope/effort tradeoff explicitly before you pick.
  4. Deferred items (Delights, Shoulds, Coulds) land in a V2 file with effort tags, so the cut features are parked instead of lost.

The output drops into your dev environment via Q9 exports: Universal Prompt for any AI chat, or tool-specific configs for Cursor + Claude Code + Windsurf + v0 + Lovable + Replit + Gemini. So the scope discipline survives the handoff to engineering.

The bottom line

MVP scoping isn’t about being minimal for minimal’s sake. It’s about being scoped enough to test one hypothesis, fast enough to learn before runway runs out, and disciplined enough to cut everything that doesn’t serve that test. Most products fail because their V1 was actually V1.3. Don’t build that one.

Frequently asked questions

What is an MVP?
The Minimum Viable Product, a term coined by Frank Robinson (2001) and popularized by Eric Ries in The Lean Startup (2011), is the smallest version of your product that tests the one core hypothesis your business depends on. NOT a feature-light version of the eventual product. NOT a 'beta' with rough edges. The MVP is a scoped experiment: it has one job (proving or disproving the hypothesis) and everything that doesn't serve that job is cut. Most founders ship products that are too big to be an MVP and too rough to be a real launch. Pick one.
How do I decide what features make it into V1?
Apply MoSCoW (Must / Should / Could / Won't) honestly. For each feature, ask: 'if we ship without this, does the hypothesis still get tested?' If yes, it's not a Must. The trap most founders fall into is calling everything a Must because cutting hurts. Then they ship 6 months late. A scoped V1 typically has 3-5 Must features and ships in 4-8 weeks. If your Must list has 12+ items, you're not scoping, you're listing.
What's the impact-effort matrix and how does it apply to MVP scope?
A 2x2 with impact on one axis (how much does this feature move the hypothesis test) and effort on the other (how many engineer-days to build). Top-right (high impact, low effort) goes into V1. Top-left (high impact, high effort) goes into V2 unless the hypothesis can't be tested without it. Bottom-right (low impact, low effort) is technical debt waiting to happen. Skip. Bottom-left (low impact, high effort) is the trap most teams fall into. Cut ruthlessly. ICE scoring is the numerical version of this matrix. See the [ICE scoring framework](/frameworks/ice-scoring).
How long should an MVP take to build?
4-8 weeks for a solo or small founding team. If your MVP requires 4+ months of build, you've scoped V1.3, not V1. Real MVPs are embarrassingly small. The Dropbox 'MVP' was a 3-minute video. Buffer's was a landing page with a pricing table and 'sign up to learn more' button. The point isn't to be minimal for minimal's sake. It's to test the hypothesis as cheaply as possible. If your MVP would take 6 months to build, the hypothesis is too complex; pick a smaller hypothesis.
Should the MVP be polished or rough?
Polished where the buyer's first impression forms (landing page, signup flow, the one core action). Rough where the buyer doesn't see (admin tools, edge cases, integrations they'd use later). The mistake most founders make is the opposite: they polish the admin dashboard nobody sees and ship a confusing signup flow that kills conversion. Polish the conversion path. Leave the rest rough until V2.
What's the difference between an MVP and a prototype?
A prototype tests a design or a technical question (does this UI work? does this integration even connect?). An MVP tests a market question (will real buyers pay for this?). Prototypes don't need to be shipped. MVPs need real users and ideally real money exchange. The Lean Startup confused these for a decade by calling everything an MVP. They're different tools for different jobs.
How do I know when MVP scope creep is happening?
Three warning signs. (1) Your Must list grows during the build. The list should only shrink, never grow. New ideas during build go to V2, not V1. (2) Sprint reviews include 'we also did X' that wasn't planned. (3) Your launch date keeps moving back by 1-2 weeks every 2 weeks. The fix is brutal: lock V1 scope on day 1 of the build, write it on a wall, and refuse to add anything until V1 ships. The next great idea goes in the V2 file.
Related on ShipFit

Framework
MoSCoW

MoSCoW prioritization scopes V1 by sorting features into Must, Should, Could, and Won't. The honest version cuts ruthlessly and ships in weeks, not months.

Framework
The Mom Test

The Mom Test is Rob Fitzpatrick's framework for customer interviews that generate real signal. Not praise. Three rules, applied step-by-step, with examples.

Spoke
Competitive Analysis

Most early-stage competitive analysis is a 2x2 with your product in the top-right quadrant. The real version is harder, more boring, and tells you whether you can actually win.

Spoke
Pricing Validation

Most founders pick a price by looking at competitors and shaving 20%. That's not pricing strategy, it's matching. Real pricing validation produces a price you can defend against your own ego and your buyer's pushback.

Calculator
CAC / LTV ratio calculator

Does each customer make you money? Or cost you money?

Q&A
How do you validate a business idea?

Run nine framework-backed decisions in order before writing code: define the buyer, prove the pain is painful, name the winning angle, scope V1 to the smallest test of the hypothesis, get behavioral evidence (paid pre-orders, signed letters of intent, or credit cards on file from a Fake Door Test), then ship. Most failed startups skipped at least three of those nine. Plan to spend two to four weeks on this. It saves six to nine months of building the wrong thing.

For founders
Indie hackers

For indie hackers who've wasted months on dead ideas. ShipFit forces 9 decisions before you write a line of code. Proven frameworks, exports to Cursor.

Comparison
Buildpad

If you want a conversation partner, Buildpad. If you want to stop researching and ship, ShipFit. Both solve different problems for different founders. Don’t pick based on hype.

Glossary
MVP (Minimum Viable Product)

The smallest version of a product that lets you test a falsifiable hypothesis about a buyer's behavior. Coined by Frank Robinson in 2001; popularized by Eric Ries in 'The Lean Startup' (2011). Not a stripped-down launch product. A learning tool.
