Assumption Mapping: How to Identify & Test What You Don’t Know

Mar 19, 2026

Every product decision your team makes rests on something unproven. The feature you prioritized last sprint, the audience you chose to build for, the pricing model you settled on during that Monday meeting. All of it sits on top of beliefs that nobody has verified. Some of those beliefs will turn out to be correct. A concerning number of them will not, and you will find out too late, after the code has shipped and the budget has been spent.

Clayton Christensen at Harvard Business School put the new product failure rate at 95%. Academic researchers Castellion and Markham examined that claim more carefully in 2013 and found that empirical studies with business practitioners across industries placed failure rates closer to 40%. The exact number depends on the sector and how you define failure. But 40% of newly launched products failing to meet performance or market expectations is still a staggering amount of wasted effort. CB Insights, analyzing startup post-mortems, found that 42% of startups failed because there was no market need for what they built. Not because they ran out of money. Because nobody wanted the thing.

The pattern underneath these numbers is consistent. Teams build on assumptions they never surfaced, never questioned, and never tested. Assumption mapping exists to fix that specific problem.

What Assumption Mapping Actually Is

David J. Bland, founder of Precoil and co-author of Testing Business Ideas with Alexander Osterwalder, developed the methodology while working alongside Jeff Gothelf and Josh Seiden, co-authors of Lean UX. The exercise is a structured way for teams to make their hidden beliefs visible, then sort those beliefs by how much evidence supports them and how much rides on them being correct.

The question at the center of the exercise is simple: What are all the things that need to be true for this idea to work?

Answers fall into 4 categories of risk. Desirability asks if the market wants the idea at all. Feasibility asks if the team can build and deliver it at scale. Viability asks if the economics hold up. Adaptability asks if the idea can survive as conditions change around it.

Google adopted Bland’s methodology directly into its Design Sprint process, where behind every new product hides a set of leap-of-faith assumptions, as Google calls them. Whether those assumptions hold can make or break an initiative. The method has since spread to federal government security teams and organizations of all sizes. It works because it forces a team to confront what it does not know before committing resources.

Running the Exercise Step by Step

Desirability Assumptions

According to Google’s Design Sprint Kit documentation, your team writes desirability assumptions on sticky notes by answering guided questions. Who are the target customers? What problem do they want to solve? How do they solve it now? Why can’t they solve it now? What outcome do they want? Why would they stop using their current solution?

Viability Assumptions

Viability follows the same structure but asks a different set of questions. What are the main acquisition channels? How will customers repeatedly use the solution? Why will customers refer new users?

Feasibility Assumptions

Feasibility covers execution risk. What are the biggest technical or engineering challenges? What are the legal or regulatory risks? Where does funding come from? Why is the team uniquely positioned to win? Adaptability assumptions, the beliefs about whether the idea can survive as conditions change, can be captured in the same sticky-note format.

Prioritizing the Map

Once every assumption is written down, the team draws 2 axes on a wall or whiteboard. The horizontal axis runs from known to unknown. The vertical axis runs from unimportant to important. Every sticky note goes on the map.

Bland emphasizes that the shared conversation during mapping is more valuable than the map itself. The top-right quadrant, where assumptions are both important and unknown, becomes the focus for near-term experimentation. From those assumptions, the team designs evaluative experiments.
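
To make the sorting mechanics concrete, here is a minimal Python sketch that treats the map as a data structure. The Assumption class, the 1-to-5 scores, and the quadrant thresholds are all illustrative assumptions invented for this sketch, not part of Bland's method; in the workshop itself, placement is a team judgment made on a physical wall, and the conversation matters more than the artifact.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    category: str    # desirability, feasibility, viability, or adaptability
    importance: int  # 1 = unimportant ... 5 = critical (vertical axis)
    evidence: int    # 1 = unknown ... 5 = well supported (horizontal axis)

def riskiest_first(assumptions: list[Assumption]) -> list[Assumption]:
    """Return the top-right quadrant (important and unknown), riskiest first."""
    top_right = [a for a in assumptions if a.importance >= 4 and a.evidence <= 2]
    return sorted(top_right, key=lambda a: (-a.importance, a.evidence))

board = [
    Assumption("Target users will pay a monthly fee", "viability", 5, 1),
    Assumption("We can integrate with legacy systems", "feasibility", 4, 4),
    Assumption("Users want evening reminders", "desirability", 2, 2),
]

for a in riskiest_first(board):
    print(f"Test next: {a.text} ({a.category})")
```

Only the first assumption survives the filter here, which is the point of the exercise: most sticky notes never become experiments.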

One of Bland’s strongest recommendations addresses a common failure mode. Teams that start with “Build” fall into a trap where they are building to build instead of building to learn. He recommends asking a single question before anything else: “What do you want to learn?”

From Assumptions to Experiments

Once you have identified the riskiest assumptions, the next step is choosing the right test for each one. The lean testing toolkit provides several well-documented options, each suited to different types of risk.

Smoke Tests

A smoke test describes the product’s value proposition on a landing page and asks visitors to sign up before the product exists. Buffer used this approach, pitching their product on a simple page to measure interest before writing a single line of code.
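
To show how little engineering a smoke test actually demands, here is a hedged sketch of one in Python using Flask. The copy, route names, and flat-file storage are invented for illustration; this is not Buffer's implementation, just the general shape: a value proposition, a form, and a count of who signs up.

```python
from flask import Flask, request, render_template_string

app = Flask(__name__)

# The entire "product": a headline pitching the value proposition
# and a form that captures interest before anything is built.
PAGE = """
<h1>Schedule a week of posts in one click</h1>
<p>Join the early access list.</p>
<form method="post" action="/signup">
  <input name="email" type="email" required>
  <button>Sign me up</button>
</form>
"""

@app.get("/")
def landing():
    return render_template_string(PAGE)

@app.post("/signup")
def signup():
    # Signups per visitor is the demand signal; nothing exists behind the form.
    with open("signups.txt", "a") as f:
        f.write(request.form["email"] + "\n")
    return "Thanks! You're on the list."
```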

Concierge Tests

A concierge test replaces an automated technical solution with humans who interact directly with the customer. The purpose is to validate demand before building anything. Airbnb started this way. With a design conference coming to their city, the founders opened their loft as cheap accommodation, took photos, put them on a basic website, and soon had 3 paying guests.

Wizard of Oz Tests

A Wizard of Oz test hides the human effort from the user. The customer perceives what looks like a working automated system, but behind the scenes, people are doing the work manually. This lets teams validate the product concept without building the underlying technology.

A/B Tests, Fake-Door Tests, and Prototype Studies

A/B tests, fake-door tests, and prototype-based usability studies round out the toolkit. Teams can also use AI prototyping to validate ideas with real or simulated users.

The principle that connects all of these methods: the best experiment extracts the maximum learning for the least effort.

Connecting to Continuous Discovery

Assumption mapping connects directly to the continuous discovery framework that Teresa Torres, founder of Product Talk, has articulated. Torres developed the Opportunity Solution Tree, a visual tool that connects business goals to customer needs, opportunities, solutions, and experiments.

The working unit in continuous discovery is the product trio: a product manager, a designer, and an engineer, all empowered to solve a problem together. This trio collaborates from the start to assess desirability, viability, feasibility, and usability risks for any idea. Assumption tests determine whether solutions meet the conditions for success, and teams test the riskiest assumptions before investing in delivery.

The tree structure enforces discipline. The outcome at the top sets the scope for everything below it, helping teams understand which opportunities are relevant and keeping them focused. One practitioner described the effect on stakeholder relationships: “At all times the tree is irreplaceable as a way to visually communicate all the discovery and thinking that goes into development and decision making. Stakeholders that used to throw curveballs into our sprints can now truly grasp the level of thinking and testing that has gone against our opportunities.”

The weekly cadence of talking to at least 1 customer also matters. Teams that adopted this habit reported it pushed them to find more effective ways to reach users and helped them develop a real understanding of user problems over time.

Why Speed Determines Who Wins in 2026

Product School’s analysis of current trends captures the competitive dynamic plainly: “In 2026, your learning speed is your moat.” When AI allows anyone to clone product workflows and messaging in weeks, the advantage belongs to the organization that notices what is changing, updates its beliefs, and ships a different answer the fastest.

Miro’s CEO reinforced this at ProductCon: “The number 1 competitive moat that every company has is the speed of learning. How fast you recognize a signal, separate that from the noise, and act on that signal.”

Teams at Slack have adopted this approach by running small cross-functional squads that use AI to prototype constantly, learn quickly, and discard dead ends without hesitation. Roadmaps are treated as hypotheses. Experiments are cheap and constant. It is normal for a team to kill a project after 2 weeks of strong negative signal instead of dragging it through 3 quarters.

AI-accelerated prototyping has compressed validation cycles, with features going from concept to production in hours. But speed creates its own risk. Product School’s research found that 56% of executives believe AI is embedded across the product lifecycle, compared with only 18% of managers. Overestimating maturity leads to scaling too early and underinvesting in governance. Speed without disciplined assumption testing accelerates failure rather than preventing it.

Only 31% of product leaders feel confident they are building the right product for their market. That gap is what assumption mapping was built to close.

The Sprint-Research Timing Problem

Product teams run on 2-week sprint cycles. Traditional research does not fit inside them. Each round of interviews, surveys, and usability tests adds weeks or months while teams wait for participant recruitment, scheduling, and analysis.

The costs add up fast. Moderated usability research runs $10,000 to $20,000 per phase. External recruiting firms charge $100 to $300 per qualified participant plus project management fees. A moderated study with 20 participants costs between $12,000 and $15,000 for recruitment and honorariums alone. Specialized qualitative work with niche audiences can exceed $40,000 for 10 to 15 interviews.

This creates a predictable tension. Teams know they should validate assumptions before building, but validation takes longer than the sprint allows. So they skip it, build on untested assumptions, and discover mistakes after deployment, when the cost of correction has multiplied. Research suggests teams spend up to 60% of their research time on data processing rather than strategic interpretation.

Closing the Gap with Evelance

This timing mismatch is the specific problem Evelance was built to address. Evelance provides instant access to over 1 million predictive audience models. Teams can target precise segments without outreach campaigns, scheduling conflicts, or participant incentives. Each model includes demographic data, professional background, technology comfort levels, and behavioral patterns.

Targeting goes well beyond generic demographics. Where traditional tools offer age ranges and income brackets, Evelance enables targeting like “working mothers aged 28 to 42 who shop online for family essentials and prefer evening medication reminders.” Each model includes behavioral attribution covering personal context, environmental factors, and decision-making patterns, all calibrated against observed user patterns rather than demographic assumptions.

For assumption mapping, this means desirability assumptions can be tested within the same sprint cycle they are identified. Tests complete in minutes. No outreach, no time zone coordination, no participant management. Research stays on the same timeline as a 2-week sprint.

The approach is hybrid by design. Evelance augments existing research workflows. Teams run initial validation through predictive models, then focus live interviews on the specific issues that surface. This preserves the depth of human sessions while compressing validation cycles. Teams report finding 40% more insights when live sessions explore pre-validated designs rather than discovering fundamental problems from scratch.

Building the Habit

The highest-performing product teams treat assumption mapping as a weekly practice embedded into sprint planning, not a quarterly workshop. Every new feature idea and every proposed solution carries embedded assumptions.

When a solution is proposed, the product trio asks Bland’s question: What needs to be true for this to work? They sort answers into desirability, viability, feasibility, and adaptability. They plot them on the importance-versus-evidence matrix. The top-right quadrant items, high importance and low evidence, get tested before any code is written.

Then each assumption gets matched to the cheapest, fastest experiment that generates learning. Smoke tests for demand. Wizard of Oz prototypes for solution-value questions. A/B tests for optimization. Predictive validation through tools like Evelance for rapid desirability and usability screening within the sprint window.
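
A minimal sketch of that matching step, under the assumption that each risky assumption has already been tagged with a risk type. The pairings repeat the ones this article names, while the dictionary keys and the fallback to a concierge test are illustrative simplifications rather than part of any formal method.

```python
# Pairings drawn from the methods described above; the keys and the
# concierge fallback are illustrative, not a standard taxonomy.
CHEAPEST_TEST = {
    "demand": "smoke test (landing page signup)",
    "solution_value": "Wizard of Oz prototype",
    "optimization": "A/B test",
    "desirability_screen": "predictive validation within the sprint",
}

def cheapest_experiment(risk_type: str) -> str:
    # When no cheaper automated option fits, deliver the value manually.
    return CHEAPEST_TEST.get(risk_type, "concierge test")

print(cheapest_experiment("demand"))        # smoke test (landing page signup)
print(cheapest_experiment("pricing_model")) # concierge test
```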

Bland said it well: assumption mapping is a small but important tool in the bigger toolset of creating a strategy, and it is essential to testing business ideas in a way that replaces guesswork, intuition, and best practices with knowledge.

The alternative is building on untested assumptions and hoping for the best. That is exactly how products join the failure statistics. The ability to systematically identify what you do not know and test it before you build is what separates teams that consistently ship things people want from teams that spend months learning they built the wrong thing.