10 Tips for Customer Interviews for 2026

Jan 10, 2026

A good customer interview feels like a conversation where the other person does most of the talking. You sit across from someone, ask a question, and then watch them work through their own thoughts out loud. The best sessions leave you with answers you did not expect, problems you had not considered, and a sense that you finally understand why someone behaves the way they do. The worst sessions feel like a checklist, a series of questions that produce polite but empty responses.

Most teams know they should talk to customers. Fewer teams know how to do it well. The difference between a useful interview and a wasted hour often comes down to preparation, pacing, and knowing when to stay quiet.

TL;DR

  • Keep interview sessions between 30 and 60 minutes to balance depth with participant stamina
  • Match sample size to research type: 10 to 12 for foundational, 6 to 8 for formative, 15 or more for summative
  • Write fewer than 10 questions and keep your discussion guide under 5 pages
  • Make every question open-ended to invite explanation rather than yes-no responses
  • Aim to speak 15% of the time and listen 85% of the time during each session
  • Limit yourself to 3 moderated sessions per day to maintain quality
  • Schedule sessions no more than 3 weeks in advance to reduce no-shows
  • Use hybrid approaches with Evelance to fit research into 2-week sprint cycles
  • Take notes that capture specific behaviors rather than stated opinions
  • Debrief immediately after each session while your memory is fresh

1. Keep Sessions Between 30 and 60 Minutes

Interview length matters more than most researchers admit. Sessions that run too short leave you with surface-level responses. Sessions that run too long exhaust participants and dilute the quality of later answers.

For most product research, aim for 30 to 45 minutes per session. This window gives you enough time to ask open-ended questions and follow up on interesting threads without wearing down your participant. Discovery interviews, where you are exploring a new problem space, tend to work best in the 40 to 60 minute range. Continuous research interviews, which happen more frequently with the same participant pool, should stay closer to 20 to 30 minutes.

The 60-minute mark functions as a hard ceiling for most contexts. Past that point, participant fatigue sets in. Responses become shorter, less thoughtful, and more likely to default to what the participant thinks you want to hear. If you consistently need more than an hour, consider splitting your research into multiple sessions or narrowing your question set.

2. Choose Your Sample Size Based on Research Type

Sample size recommendations vary widely depending on who you ask. Some researchers cite Jakob Nielsen’s finding that 5 users can reveal roughly 85% of usability issues. Others point to analysis of over 2,000 PhD theses, which found interview sample sizes ranging from 1 to 95, with a median of 31. The Nielsen Norman Group notes there is no consensus on the right number.

Rather than searching for a universal answer, match your sample size to your research goals. Foundational research, where you are mapping out a new problem space or understanding a new user segment, typically requires 10 to 12 participants. Formative research, where you are iterating on a concept or design, works well with 6 to 8 participants. Summative research, where you are validating a finished product or feature, often needs 15 or more participants to produce reliable findings.

Guest, Bunce, and Johnson found that most qualitative themes emerge within 6 to 12 interviews. If you are conducting exploratory research and still hearing new information after 12 sessions, continue recruiting. If responses start repeating themselves after 6 sessions, you likely have enough data to move forward.
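If you want a concrete signal for saturation rather than a gut feeling, one lightweight option is to log the themes you tag after each session and count how many new ones each interview adds. Below is a minimal sketch of that bookkeeping in Python; the theme labels and session data are hypothetical placeholders, not output from any particular analysis tool.

```python
# Count how many new themes each successive interview surfaces.
# Theme labels and session data are hypothetical placeholders.

def new_themes_per_session(sessions: list[set[str]]) -> list[int]:
    """For each session, count themes not seen in any earlier session."""
    seen: set[str] = set()
    counts = []
    for themes in sessions:
        counts.append(len(themes - seen))  # themes this session introduced
        seen |= themes
    return counts

sessions = [
    {"pricing_confusion", "trust_in_reviews", "export_friction"},
    {"pricing_confusion", "onboarding_gap"},
    {"trust_in_reviews", "export_friction"},  # nothing new: a sign you are saturating
]
print(new_themes_per_session(sessions))  # [3, 1, 0]
```

When the counts flatten to zero across consecutive sessions, that is the repetition signal described above.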

3. Write Fewer Than 10 Questions

A common mistake in interview preparation is writing too many questions. Long question lists create rigid sessions where you rush through topics rather than exploring them. They also signal to participants that you are more interested in completing your agenda than hearing their actual thoughts.

Keep your question list under 10 items. This constraint forces you to prioritize what matters most and leaves room for follow-up questions that emerge during the conversation. A discussion guide for a standard 45 to 60 minute interview should fit on 4 to 5 pages, including prompts and notes for yourself.

Start with broad, open-ended questions before moving to specific ones. The Nielsen Norman Group recommends this sequencing because it allows participants to share their perspective before you introduce your assumptions. Beginning with specifics can anchor the conversation around your framing rather than theirs.

4. Make Every Question Open-Ended

The phrasing of your questions determines the quality of your answers. Closed questions produce closed responses. If you ask “Do you like this feature?” you will get a yes or a no. If you ask “Tell me about a time you used this feature,” you will get a story with context, emotion, and detail.

Avoid leading questions that suggest a preferred answer. “Don’t you think this design is easier to use?” pushes the participant toward agreement. “How would you describe your reaction to this design?” leaves space for honest feedback.

Yes-no questions have their place in surveys, but interviews are about depth. Every question should invite the participant to explain, describe, or walk you through something. The goal is to understand their reasoning, not to count their opinions.

5. Talk 15% of the Time, Listen 85%

The best customer interviews follow a consistent pattern: the researcher speaks for about 15% of the session and listens for the remaining 85%. Some practitioners frame this as 90% listening and 10% talking. The exact ratio matters less than the principle. Your participant should do most of the talking.

This ratio is harder to maintain than it sounds. Silence feels uncomfortable, and the instinct to fill it with another question or a validating comment is strong. Resist that instinct. When you stop talking, participants often continue elaborating. Their most useful insights frequently arrive in these unprompted extensions.

Speak calmly and slowly. Participants tend to match the pace and tone of the researcher. If you rush through questions, they will rush through answers. If you remain patient and unhurried, they will take more time to think before responding.
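If you record your sessions, you can sanity-check this ratio afterward from a speaker-labeled transcript. The sketch below uses word counts as a rough proxy for talk time; the `Researcher:` / `Participant:` line format is an assumption about how your transcripts are exported, not a standard.

```python
# Rough talk-time check from a speaker-labeled transcript.
# Assumes lines look like "Researcher: ..." or "Participant: ...";
# word counts stand in for actual speaking time.

def talk_share(transcript: str, researcher_label: str = "Researcher") -> float:
    """Return the researcher's share of total words, from 0.0 to 1.0."""
    researcher_words = total_words = 0
    for line in transcript.strip().splitlines():
        speaker, _, speech = line.partition(":")
        words = len(speech.split())
        total_words += words
        if speaker.strip() == researcher_label:
            researcher_words += words
    return researcher_words / total_words if total_words else 0.0

transcript = """
Researcher: Tell me about the last time you used this feature.
Participant: Last week I was trying to export a report and could not find the button.
Participant: I ended up taking screenshots instead, which took twice as long.
"""
print(f"Researcher share: {talk_share(transcript):.0%}")  # aim for roughly 15%
```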

6. Limit Yourself to 3 Sessions Per Day

Moderated research is mentally demanding. Each session requires full attention, active listening, and real-time decision-making about which threads to pursue. Running too many sessions in a single day compromises your performance and reduces the quality of your data.

Cap your schedule at 3 sessions per day. Running four or more leads to fatigue, missed follow-up opportunities, and a higher likelihood of rework later when you realize your notes are incomplete or your recordings captured low-energy conversations.

Build buffer time between sessions. You need space to write down observations while they are fresh, reset mentally, and prepare for the next participant. Back-to-back scheduling eliminates that space and leaves you scrambling.

7. Schedule Sessions No More Than 3 Weeks Out

Recruitment timing affects show rates. When you schedule sessions too far in advance, participants forget about them, their circumstances change, or they lose interest. The longer the gap between recruitment and the actual session, the higher your no-show risk.

Keep scheduling windows within 3 weeks of the session date. This timeframe balances the practical need for advance planning with the reality that participant commitment decays over time. Send reminder messages at least twice: once a few days before and once the morning of the session.
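If your team automates those reminders, the two send times fall straight out of the session time. A minimal sketch, assuming all you need are the timestamps (delivery through your email or calendar tooling is out of scope):

```python
# Derive two reminder send times from a session start time:
# one a few days before, one the morning of the session.
from datetime import datetime, timedelta

def reminder_times(session_start: datetime, days_before: int = 3,
                   morning_hour: int = 8) -> tuple[datetime, datetime]:
    early = session_start - timedelta(days=days_before)
    morning_of = session_start.replace(hour=morning_hour, minute=0,
                                       second=0, microsecond=0)
    return early, morning_of

session = datetime(2026, 1, 22, 14, 0)  # hypothetical 2 p.m. session
for t in reminder_times(session):
    print(t.strftime("%a %b %d, %H:%M"))
# Mon Jan 19, 14:00
# Thu Jan 22, 08:00
```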

For ongoing research programs, consider building a participant panel you can activate on shorter notice. Regular participants who know the process tend to show up more reliably than first-time recruits.

8. Use a Hybrid Approach to Fit Sprint Timelines

Product teams typically work in 2-week sprints. Traditional research projects take 3 to 4 weeks from recruitment through reporting. This timing mismatch creates a familiar problem: design decisions either wait for insights that arrive too late, or teams proceed without validation and risk building features users reject.

A hybrid methodology addresses this gap by combining predictive validation with live interviews. Teams run initial validation through predictive models, then focus live interviews on the specific issues that surface. This approach preserves the depth of direct conversation while compressing validation cycles to fit sprint schedules.

Hybrid workflows allow teams to screen concepts before booking participants, test iterations between scheduled sessions, and validate fixes while waiting for recruitment. Research teams report finding 40% more insights when live sessions explore pre-validated designs rather than discovering fundamental problems for the first time.

9. Take Notes That Capture Behavior, Not Opinion

During interviews, distinguish between what participants say they do and what they actually do. People are unreliable narrators of their own behavior. They forget details, overestimate their consistency, and sometimes describe aspirational versions of themselves rather than accurate ones.

When taking notes, prioritize behavioral observations over stated opinions. “I always check reviews before buying” is less useful than “Last week I bought running shoes without reading any reviews because the price was low enough.” The second statement describes a specific action in a specific context. The first describes an identity claim that may or may not hold up under examination.

Ask participants to walk you through recent examples rather than hypothetical scenarios. “Tell me about the last time you used this product” produces more reliable data than “How would you use this product?” Real events have real details. Hypothetical events have guesses.

10. Debrief Immediately After Each Session

The hour after an interview is when your memory is sharpest and your impressions are most accurate. Use this window to write down observations, flag surprising moments, and note questions that arose during the conversation.

Waiting until the end of the day or the end of the week to debrief introduces distortion. You will remember the most recent sessions more clearly than earlier ones. You will unconsciously blend details across participants. You will lose the emotional texture of specific moments that seemed important at the time.

If you are running sessions with a colleague, schedule a brief debrief conversation immediately after each participant leaves. Compare notes, discuss interpretations, and identify areas of agreement and disagreement. These conversations often surface insights that neither researcher noticed individually.

Build debrief time into your session schedule. If your interview slots are 45 minutes, block 60 minutes on your calendar. The extra 15 minutes for notes and reflection will pay off when you begin analysis.

Bonus: How Evelance Supports Modern Interview Research

Evelance augments existing research workflows rather than replacing them. The platform handles volume testing and rapid validation while research teams run deep-dive sessions on pre-validated designs. This division of labor lets researchers spend their limited interview time on questions that require human judgment and conversation.

The practical benefits compound across a research program. Teams can validate multiple design directions quickly, identify the most promising concepts, and then allocate interview slots to exploring those concepts in depth. Participants spend their time reacting to designs that have already passed initial screening rather than providing feedback on obviously flawed concepts.

For teams running continuous research, Evelance helps maintain momentum between scheduled sessions. Insights from predictive validation inform the next round of interview questions, creating a feedback loop that accelerates learning without increasing session volume.