What Is an MVP — Starting with Common Misconceptions
As the term "MVP (Minimum Viable Product)" has spread, its meaning has become distorted. The most common misconception is that it refers to a cheaper, feature-reduced version of a product. Teams often use it to mean "ship with just the bare minimum" or "a pre-release before the real launch," but this interpretation misses the original intent entirely.
Eric Ries, who popularized the concept, defined an MVP as "the version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort." The key is that "learning" — not "minimum" — is the central purpose. The design criterion is not the number of features but whether the product can validate a specific hypothesis.
Understood precisely, this definition means an MVP is fundamentally about deciding what not to build. You set a single market hypothesis, then implement only what is strictly necessary to test it. Any feature unrelated to that hypothesis is noise that slows down the learning process.
Consider a team trying to validate the hypothesis: "Small business accountants will pay to reduce the time they spend on monthly closing." A rough prototype with only automated report generation is sufficient. Chat features, multilingual support, mobile apps, and invoice management integrations are not needed to test this particular hypothesis.
In a contract development context, clients often interpret MVP as a technique for building things cheaply and quickly. Contractors who understand the correct definition and take an active role in helping clients articulate their hypotheses significantly improve the overall probability of project success.
The Feature-Stuffing Trap — Why Teams Pile on Functionality
In new product development, scope tends to expand without limit. As requirements are defined, suggestions accumulate: "shouldn't we add that feature," "a competitor has this," "users are surely expecting that too." Scope creeps forward continuously. Why is it so hard to stop?
The first driver is anxiety about uncertainty. Teams want to avoid the post-launch scenario where "if only we'd built that feature, users would have stayed." Adding features feels like taking out insurance, and that can seem rational. In reality, however, only a small fraction of added features ever see meaningful use.
The second driver is the difficulty of managing stakeholder expectations. Executives, sales teams, developers, and prospective customers each champion features they consider essential. Trying to satisfy everyone individually produces a product that is aimed at everyone and resonates with no one.
The third driver is development inertia. Once a feature is defined in a specification document, it stays on the build list unless someone actively removes it. The attitude of "we're building it anyway, might as well add this" accumulates, and the original purpose disappears from view.
The cost of a feature-stuffed approach is not limited to development expenses. Every additional week until launch is a week without market feedback. Risks compound: competitors may move first, teams may discover too late that built features misalign with market needs, and team motivation can erode across a prolonged build cycle.
More critically, feature-stuffing amplifies the cost of being wrong. If the hypothesis "this service will be accepted by users" turns out to be incorrect, the more features that were built, the greater the loss. Slowing down learning means delaying the discovery of mistakes.
Forming Hypotheses — Decide What You Want to Learn First
Designing an MVP correctly requires clarity about what you are trying to validate before anything else. Most projects start from product feature specifications, but the correct order is to work backwards from hypotheses.
Business hypotheses are easiest to structure in three layers.
Problem hypothesis: "A specific user segment has a specific problem." For example: "Freelance designers spend more than three hours per week managing feedback exchanges with clients, and find that process painful." This is a hypothesis about the existence and severity of the problem, not about the solution.
Solution hypothesis: "This specific feature will solve that problem." This is formulated only after the problem hypothesis has been validated. In the example above: "If feedback could be anchored directly to images as comments, back-and-forth exchanges would be cut in half."
Business hypothesis: "Users will pay for that solution." Even if the problem and solution are both validated, there is no business without monetization. The form is: "Freelancers handling more than five feedback projects per month will continue paying 3,000 yen per month."
Among these three layers, the most uncertain one should be validated first. If the problem hypothesis collapses, all remaining hypotheses become meaningless. Many projects rush straight into implementing solution hypotheses; this is the structural cause of the familiar failure mode, "we built great features, but nobody uses them."
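The ordering rule above can be sketched as data. The following is a minimal, hypothetical example; the layer names follow the text, but the statements and uncertainty scores are illustrative values a team would supply from its own judgment.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    layer: str        # "problem", "solution", or "business"
    statement: str
    uncertainty: int  # 1 (low) .. 5 (high) -- the team's rough estimate

# Illustrative hypotheses based on the freelance-designer example.
hypotheses = [
    Hypothesis("problem",
               "Designers spend 3+ hours/week managing client feedback", 5),
    Hypothesis("solution",
               "Anchoring comments to images halves the back-and-forth", 4),
    Hypothesis("business",
               "Freelancers with 5+ projects/month will pay 3,000 yen/month", 3),
]

# Validate the most uncertain hypothesis first.
validation_order = sorted(hypotheses, key=lambda h: h.uncertainty, reverse=True)
for h in validation_order:
    print(h.layer, h.uncertainty)
```

Note that in this example the uncertainty ordering happens to match the dependency ordering (problem before solution before business); when it does not, the dependency wins, since a collapsed problem hypothesis invalidates the layers built on it.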
Once hypotheses are written, ask: "What is the minimum required to test this?" Before considering whether a full product is needed, explore whether the hypothesis can be validated with interviews, prototypes, landing pages, or paper mockups. In practice, many hypotheses can be validated without writing a single line of software.
Practical Scope Decision-Making
Once hypotheses are defined, the next task is setting scope. Decisions about what to include and what to exclude need to be made based on criteria, not on emotion or organizational politics.
The most practical decision criterion is: "Does this feature directly affect the validation of the core hypothesis?" If yes, it stays in scope. If no, it moves to a later phase. Applying this question to every candidate feature provides a mechanical way to narrow scope.
Next, ask: "Is there an alternative?" A feature that requires a management interface might be replaceable in the early stages with a spreadsheet or a manual operation. Something being "not automated" is not a defect in an MVP — it is a deliberate choice to reduce development cost and accelerate learning.
Scope decisions require stakeholder agreement. The most effective tool here is a visual priority map. List all candidate features and place them on a two-axis map: "impact on core hypothesis validation" against "development cost." Start with high-impact, low-cost items and explicitly label low-impact items as "Phase 2 and beyond."
Creating this map allows the question "why isn't this feature included?" to be answered with grounded reasoning rather than subjective judgment. Saying "it doesn't affect core hypothesis validation, so it is deferred to Phase 2" rather than "we don't need it" is what earns stakeholder alignment.
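The two-axis map can be reduced to a mechanical sort. The sketch below is illustrative: the feature names borrow from the accounting example earlier, and the impact/cost scores and the threshold are hypothetical numbers a team would assign during a scoping workshop.

```python
# Candidate features scored on two axes:
#   impact = effect on core-hypothesis validation (1..5)
#   cost   = development cost (1..5)
# All names and scores are illustrative, not from a real project.
features = {
    "automated report generation": {"impact": 5, "cost": 3},
    "chat":                        {"impact": 1, "cost": 4},
    "multilingual support":        {"impact": 1, "cost": 5},
    "mobile app":                  {"impact": 2, "cost": 5},
}

IMPACT_THRESHOLD = 3  # below this, the feature does not move the core hypothesis

# Phase 1: high-impact features, cheapest learning first.
phase_1 = sorted(
    (name for name, s in features.items() if s["impact"] >= IMPACT_THRESHOLD),
    key=lambda name: features[name]["cost"],
)
# Everything else is explicitly labeled, not silently dropped.
phase_2_plus = [name for name, s in features.items()
                if s["impact"] < IMPACT_THRESHOLD]

print("Phase 1:", phase_1)
print("Phase 2 and beyond:", phase_2_plus)
```

The point of the mechanical form is the same as the visual map: every deferral comes with a recorded reason (low impact score) rather than a subjective "we don't need it."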
In the process of agreeing on scope between client and contractor, this framework is equally effective. Repeatedly asking "is this feature necessary for Phase 1 hypothesis validation?" throughout requirements definition reduces late-stage change requests and improves project predictability.
From MVP to the Next Phase — Reading Validation Results
After launching the MVP, the most important work is measurement design. Before launch, you need to define: "What outcome would tell us the hypothesis is correct?" Without this definition, confirmation bias kicks in and teams tend to see favorable data as proof that things are working.
Success criteria should be set as concrete numbers. Not "users engaged with it" but "40% or more of users logged in three or more times within 30 days." Agreeing on this criterion with stakeholders before launch ensures that evaluation remains objective.
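A criterion like "40% or more of users logged in three or more times within 30 days" can be evaluated directly from event logs. The sketch below uses invented signup and login records; the threshold values mirror the example criterion in the text.

```python
from datetime import datetime, timedelta

SUCCESS_THRESHOLD = 0.40   # agreed with stakeholders before launch
REQUIRED_LOGINS = 3
WINDOW = timedelta(days=30)

# Illustrative data: signup date per user, and (user, timestamp) login events.
signups = {
    "u1": datetime(2024, 1, 1),
    "u2": datetime(2024, 1, 3),
    "u3": datetime(2024, 1, 5),
}
logins = [
    ("u1", datetime(2024, 1, 2)), ("u1", datetime(2024, 1, 10)),
    ("u1", datetime(2024, 1, 20)),
    ("u2", datetime(2024, 1, 4)),
    ("u3", datetime(2024, 1, 6)), ("u3", datetime(2024, 1, 7)),
    ("u3", datetime(2024, 1, 8)), ("u3", datetime(2024, 2, 1)),
]

def hit_criterion(user):
    """True if the user logged in REQUIRED_LOGINS+ times within WINDOW of signup."""
    start = signups[user]
    count = sum(1 for u, t in logins if u == user and start <= t <= start + WINDOW)
    return count >= REQUIRED_LOGINS

activated = sum(hit_criterion(u) for u in signups)
rate = activated / len(signups)
print(f"activation rate: {rate:.0%}")
print("criterion met" if rate >= SUCCESS_THRESHOLD else "criterion not met")
```

Writing the criterion as executable code before launch has a side benefit: it forces the team to pin down ambiguities (30 days from signup or from launch? calendar logins or distinct days?) while the discussion is still cheap.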
Post-validation decisions fall into three broad patterns.
Persevere: The hypothesis is largely correct and metrics exceed targets. Continue in the same direction and advance to the next phase.
Pivot: Part of the hypothesis was wrong and the direction needs correction. Identify specifically which layer needs changing — the target customer, the problem definition, the value proposition — and enter the next validation cycle with a refined hypothesis.
Kill: The foundational hypothesis has collapsed and continued investment is untenable. Killing a product is not a failure; it is a learning outcome. Discovering the mistake early frees resources to concentrate on better opportunities.
In practice, the kill decision is the hardest. The psychological effect of sunk costs — having already invested in development — delays the decision to stop. However, establishing success criteria before the MVP launches makes it significantly easier to treat this decision logically rather than emotionally.
An MVP is not a destination; it is the starting point of a learning cycle. The essential value of MVP thinking is not to arrive at a finished product in a single attempt, but to run the cycle of "form hypothesis → validate with minimum effort → learn → form the next hypothesis" as quickly as possible. Building everything in is an attempt to complete this cycle in a single round, and that is precisely where the trap lies.
References
- Eric Ries, The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, Crown Business, 2011.
- Steve Blank, "Why the Lean Start-Up Changes Everything," Harvard Business Review, May 2013. https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
- Japan Small and Medium Enterprise Corporation, "Venture and Startup Support." https://www.smrj.go.jp/venture/index.html