Most product teams treat feature requests like a voting booth. The features with the most votes get built first, and everything else waits. This sounds reasonable until you ship a highly requested feature and nobody cares, or you remove a small one nobody mentioned and your support inbox fills up overnight. The problem is that user satisfaction does not move in a straight line. Some features make people happy when they work well. Others only get noticed when they break. A few surprise people in ways they cannot articulate until they see them. Professor Noriaki Kano figured this out in the 1980s, and the classification system he built around that insight remains one of the most useful tools a product team can pick up. His model gives you a structured way to sort features by how users emotionally respond to them, which turns out to be far more actionable than sorting by how loudly people ask for them.
Where The Kano Model Came From
Noriaki Kano was studying a specific question: why did some product improvements build customer loyalty while others had almost no effect? He observed that customers responded to features with different emotional intensities depending on the type of feature, and he suspected there were 5 distinct response types. He ran a study with 900 participants to test this, and the results confirmed his hypothesis. The framework he published from that research has been applied in product development, service design, and software planning for over 4 decades.
The 5 Feature Categories
Each category in the Kano model maps to a different emotional reaction when a feature is present or absent.
Must-Be Requirements
These are the features customers assume will be there. When they work, nobody comments on them. When they fail or are missing, dissatisfaction is severe. Kano called them “Must-be’s” because they represent the baseline cost of competing in a given market. Think of login functionality in a SaaS product or seatbelts in a car. You do not get credit for including them, but you get punished for leaving them out.
Performance Attributes
Satisfaction increases when these are done well and decreases when they are done poorly. These are the features customers talk about and the ones companies tend to compete on directly. Page load speed in a web application fits here. Faster is better, slower is worse, and users can articulate this preference easily.
Attractive Attributes
These create satisfaction when present but do not cause dissatisfaction when absent. Users typically do not ask for them because they have not thought of them yet. When they appear, they produce a positive reaction that can build loyalty. These are often unspoken needs, which makes them hard to discover through traditional feedback channels.
Indifferent Attributes
Users do not care about these in either direction. Their presence or absence has no measurable effect on satisfaction. Building indifferent features is a pure waste of engineering time.
Reverse Features
These actively cause dissatisfaction for some users. A feature one segment finds useful might annoy or frustrate another segment. Identifying reverse features early prevents you from shipping something that damages the product for part of your user base.
How the Survey Works
The data collection method follows a specific format. For each feature being evaluated, users answer 2 questions. The first is functional, asking how they would feel if the feature were included. The second is dysfunctional, asking how they would feel if the feature were absent. Each question offers 5 response options: “I like it,” “I expect it,” “I’m neutral,” “I can tolerate it,” and “I dislike it.”
The combination of answers to both questions determines which of the 5 categories a feature falls into. When a response pair does not make logical sense, it gets flagged as questionable and is typically excluded from analysis.
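As a rough illustration of that lookup, here is a minimal sketch using the commonly cited Kano evaluation table. The answer labels and exact cell assignments vary slightly between practitioners, so treat the table below as one reasonable variant rather than the canonical mapping.

```python
# Sketch of a standard Kano evaluation table (practitioner variants exist).
# Rows: answer to the functional question; columns: answer to the dysfunctional question.
# A = Attractive, O = Performance (one-dimensional), M = Must-be,
# I = Indifferent, R = Reverse, Q = Questionable.

EVALUATION_TABLE = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "O"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category code."""
    return EVALUATION_TABLE[functional][dysfunctional]

# Example: a user likes having the feature and dislikes its absence -> Performance.
print(classify("like", "dislike"))  # "O"
```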
Survey Sizing
Practitioners generally recommend evaluating 15 to 20 features per survey. This captures enough variety without exhausting respondents. For sample sizes, MeasuringU recommends between 50 and 300 participants for a margin of error between 5% and 9%. If you are working with a fairly uniform user segment, 20 to 30 respondents can identify roughly 90% of all possible product requirements according to practitioner guidance.
Better-Worse Coefficients
Categorical classification tells you what type a feature is. The satisfaction coefficients, developed by Berger et al. in 1993, tell you how strongly it pulls in each direction.
The Better coefficient measures the probability that satisfaction increases when a feature is present. The formula takes the sum of Performance and Attractive percentages, divided by the total of all 4 main categories (Must-Be, Performance, Attractive, and Indifferent).
The Worse coefficient measures the probability that dissatisfaction increases when a feature is absent. It takes the sum of Must-Be and Performance percentages, divided by the same total, then multiplied by negative 1.
A feature with a high Better score and low Worse score is a delighter. A feature with a low Better score and high Worse score is a must-be. When both scores are high, you are looking at a performance attribute. These numbers give you a way to compare features on a continuous scale rather than relying on categorical labels alone.
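A minimal sketch of that arithmetic, assuming you have already tallied the respondent counts per category for a feature (the example numbers below are hypothetical):

```python
# Sketch of the Berger et al. satisfaction coefficients, computed from per-feature
# counts of respondents in the four main categories (A, O, M, I).
# Reverse and Questionable responses are excluded from the total, per the article's formula.

def better_worse(attractive: int, performance: int, must_be: int, indifferent: int):
    """Return the (Better, Worse) coefficients for one feature."""
    total = attractive + performance + must_be + indifferent
    better = (attractive + performance) / total
    worse = -1 * (performance + must_be) / total
    return better, worse

# Hypothetical feature: 40 Attractive, 25 Performance, 10 Must-be, 25 Indifferent responses.
print(better_worse(40, 25, 10, 25))  # (0.65, -0.35) -> leans toward delighter
```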
Features Drift Over Time
An attractive feature does not stay attractive forever. Daniel Zacarias, a product industry practitioner, calls this “the natural decay of delight.” Over time, features that once surprised users become expected. This happens as competitors adopt similar capabilities or as users grow accustomed to having them.
Two-factor authentication started as something that impressed users. It then moved into performance territory, where doing it well mattered. In many industries now, it is a basic requirement that users assume will be present. Dark mode followed a similar path. Free WiFi in hotel rooms was once a pleasant extra. After the COVID-19 pandemic accelerated remote work, it became a baseline expectation.
This drift means a single Kano survey gives you a snapshot, not a permanent map. Teams should be reassessing their classifications periodically, comparing results across segments like new users and power users, and combining survey data with behavioral metrics to check if stated delight actually translates into retention and adoption.
The Same Feature, Different Users
A feature can land in one category for one segment and a completely different category for another. This is one of the most underused aspects of Kano analysis.
In B2B products, running the analysis by role often reveals that security features are must-be requirements for administrators while analytics dashboards are attractive features. The economic buyer might classify reporting capabilities as performance attributes while the end user finds them indifferent.
A documented case study from a large publisher illustrates this well. During a recipe site redesign, healthy ingredient alternatives were attractive to younger female users but registered as indifferent among older users. Without segmenting the data, the team would have averaged those responses into a misleading composite result.
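Mechanically, the segmentation step is just a grouped tally of classifications. A hypothetical sketch, assuming the classify() lookup from the evaluation-table example above is in scope and that each response records the respondent's segment:

```python
# Hypothetical sketch: tally Kano categories per user segment to surface splits
# like "must-be for admins, attractive for end users". Segment names and
# responses below are invented for illustration.
from collections import Counter

responses = [
    # (segment, functional answer, dysfunctional answer) for one feature
    ("admin",    "expect",  "dislike"),
    ("admin",    "expect",  "dislike"),
    ("end_user", "like",    "neutral"),
    ("end_user", "neutral", "neutral"),
]

by_segment = {}
for segment, functional, dysfunctional in responses:
    by_segment.setdefault(segment, Counter())[classify(functional, dysfunctional)] += 1

print(by_segment)  # e.g. {'admin': Counter({'M': 2}), 'end_user': Counter({'A': 1, 'I': 1})}
```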
Case Study: Recipe Site Replatforming
A large publisher applied the Kano model to guide the replatforming of a high-traffic recipe site with millions of users. Previous redesigns in related parts of the business had failed because the teams removed features based on assumptions rather than evidence.
The team evaluated existing and proposed features, including recipe saving, hands-free step-through videos, healthy ingredient alternatives, user comments, achievement badges, and pantry-based filtering. Standard Kano paired questions were distributed through internal and social channels, with demographic data collected for segmentation.
The results directly shaped the rebuild scope. Must-be features like recipe saving and bookmarking were protected to prevent backlash from loyal users. Indifferent and reverse features were excluded, keeping the rebuild fast and focused. A small set of validated delighters was added. Demographic insights informed how certain features were positioned for different user segments. The outcome was a more positive audience reaction than prior redesign attempts in other business units at the same company.
Kano in Agile Sprint Planning
The model fits into sprint-level prioritization by giving teams a way to categorize backlog items by their satisfaction impact. Must-be items go into near-term sprints because shipping without them creates immediate dissatisfaction. Performance features get prioritized next because they are where competitive differentiation happens. Delighters are layered in when the basics are covered.
The Kano model does not replace other prioritization frameworks. Scoring methods like ICE cover feasibility and urgency. The Kano model adds a satisfaction dimension that those frameworks miss. Using them together produces more grounded roadmap decisions.
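One way a team might wire the two together, shown here as an illustrative sketch rather than a prescribed method (the item names, category labels, and ICE scores are all invented):

```python
# Hypothetical sketch: order backlog items by Kano category first, then break
# ties with an existing ICE score. The priority weights are illustrative.

KANO_PRIORITY = {"must_be": 0, "performance": 1, "attractive": 2, "indifferent": 3}

backlog = [
    {"item": "SSO login",        "kano": "must_be",     "ice": 6.0},
    {"item": "Faster search",    "kano": "performance", "ice": 7.5},
    {"item": "Confetti on save", "kano": "attractive",  "ice": 8.0},
    {"item": "Custom cursors",   "kano": "indifferent", "ice": 9.0},
]

# Must-be items come first regardless of ICE; within a category, higher ICE wins.
ordered = sorted(backlog, key=lambda x: (KANO_PRIORITY[x["kano"]], -x["ice"]))
for entry in ordered:
    print(entry["item"])
```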
Methodological Advances Worth Knowing
A 2025 study published on ScienceDirect proposed a fuzzy Kano approach for mobile application features. Traditional Kano analysis can lose information by considering only the most frequent classification for each feature. The fuzzy approach accounts for the subjectivity in human judgments and resolves situations where a feature could reasonably fall into 2 different categories. The study found meaningful differences between fuzzy and traditional results, with variations across demographic groups.
Separately, a 2025 paper on ScienceDirect introduced a Dual Response method to address a persistent problem: too many features getting classified as indifferent. The method identifies features with latent potential by analyzing response patterns more closely, applying thresholds (greater than 6% response-adjusted importance and greater than 50% “rather wanted” shares) to surface attributes that traditional analysis would have dismissed.
A 2025 systematic review in the Journal of Hospitality and Tourism Research confirmed that while the original Kano framework remains the most widely applied version, newer methods have been developed that address its limitations. The review noted that most practitioners still use the original method even when better options are available.
Machine Learning and Automated Classification
Research from 2024 and 2025 has demonstrated that machine learning models can automate parts of the Kano classification process. A 2025 study published in Information Fusion combined BERTopic with an Attention-BiLSTM model to extract product attributes from user-generated content and predict demand trends. A 2024 study published by MDPI trained 7 deep learning models on over 3,000 annotated online comments, finding that a Recurrent Convolutional Neural Network performed best at classifying user needs into the 4 main Kano categories.
A 2025 paper presented at the 19th International Conference on Business Excellence described how real-time sentiment analysis and predictive modeling can reduce dependence on static surveys by enabling continuous, data-driven reassessment of feature classifications.
Where Evelance Fits Into Kano Validation
The operational bottleneck in running Kano analysis is the research cycle itself. Recruiting participants, scheduling sessions, distributing surveys, and analyzing results typically takes 3 to 6 weeks per round. For teams working in 2-week sprints, that timeline creates a gap between when a classification question arises and when an answer arrives.
Evelance compresses that validation cycle by using predictive audience models to simulate how specific user segments would respond to feature concepts. Teams upload a design file or live URL, select a target audience from over 2 million predictive models covering consumer and professional profiles, and receive results in minutes. Those results include 12 psychology scores measuring user response patterns, prioritized recommendations, and individual persona-level feedback.
For Kano analysis specifically, this means teams can run preliminary classification tests against targeted segments, check if a feature reads as a must-be to enterprise users but a delighter to small business users, and iterate on their survey design before committing to full-scale distribution. The predictive models do not replace live research, but they remove the time barrier that has historically prevented teams from running Kano as frequently as their market conditions demand.
When to Stop Classifying and Start Building
The Kano model gives you a sorting mechanism, not a permanent answer. Feature classifications change as markets mature, competitors ship new capabilities, and users adjust their expectations. The teams that get the most from this framework are the ones who treat it as a recurring input to their planning process rather than a one-time exercise. Run the analysis, build from the results, and run it again when conditions change. The model is most useful when it stays current.
