In psychology, availability bias is the tendency to assume something is common just because it’s memorable.
You hear about a plane crash and suddenly flying feels risky - even though it’s statistically the safest way to travel.
Nothing changed about reality - your brain just latched onto something vivid.
This shows up all the time in customer interviews.
You speak to a user who tells a compelling story about a painful experience with your product. It’s detailed. It’s emotional. It sticks in your mind. And suddenly, you’re prioritizing it like it’s a systemic issue.
But is it?
Where availability bias skews product priorities
This is where things get expensive. You might:
Ship a major feature overhaul to fix a problem that only 4 users have
Ignore a small UI tweak that’s confusing hundreds of people a little bit every day
You feel like you’re solving something important, but you’re actually just solving something memorable.
Loud ≠ common. And intensity of pain isn’t the same as breadth of impact.
What matters is how many users it affects, and which users.
How to check yourself
Before jumping to solutions, ask:
Is this representative? Who am I actually speaking to?
Is it confirmed elsewhere? Can I find this issue in analytics or session data?
Am I leading them? Are my questions loaded with assumptions or fishing for a specific answer?
Because if your inputs are biased, your roadmap will be too.
Who should you be listening to?
Customer interviews aren’t useless - they just need context. Here’s how to avoid the bias and make them useful:
1. Start by defining your cohort
Not all users are equal. A power user with high LTV or expansion potential should absolutely get more weight than a free-tier lurker. Before you even run an interview, know:
What plan they’re on
How active they are
Their role and team size
Their potential to expand or churn
You’re not just validating problems - you’re validating them for the right user type.
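If your user data lives somewhere queryable, you can encode those criteria before booking a single call. Here’s a minimal sketch, assuming a hypothetical user export to CSV - the column names (plan, events_30d, team_size, churn_risk) are placeholders, not fields from any specific tool:

```python
import pandas as pd

# Hypothetical export of your user base; column names are placeholders.
users = pd.read_csv("users.csv")  # plan, events_30d, role, team_size, churn_risk

# Encode the cohort criteria up front so interview invites aren't ad hoc.
interview_pool = users[
    (users["plan"].isin(["pro", "enterprise"]))   # what plan they're on
    & (users["events_30d"] >= 20)                 # how active they are
    & (users["team_size"] >= 5)                   # role / team-size proxy
]

# Weight toward expansion or churn risk so the sample isn't just whoever replied first.
interview_pool = interview_pool.sort_values("churn_risk", ascending=False)
print(interview_pool[["plan", "role", "team_size", "churn_risk"]].head(10))
```

The point isn’t the specific thresholds - it’s that the cohort definition is written down before the interviews start, so a vivid story can’t quietly redefine who counts.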
2. Don’t mistake a good story for good data
Ask yourself:
Is this the first time I’ve heard this issue?
Are they describing something measurable (a drop-off, bounce, hesitation)?
Are they generalizing (“everyone on my team struggles”) or just speaking personally?
Record your interviews, transcribe them, tag specific quotes - and always cross-check them with behavior.
3. Use session data to verify it
Let’s say a user complains about a confusing part of onboarding.
With Autoplay, you can:
Pull up that user’s session
Tag the friction point
Check how many other users hit the same blocker
Slice it by cohort: is this just new users? Just self-serve accounts? Just EU customers?
Now you’re not just taking someone’s word for it - you’re seeing how widespread the problem really is.
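The exact clicks depend on your tooling, but the underlying check is simple. Here’s a rough sketch, assuming you can export tagged session events to a CSV with hypothetical user_id, tag, and cohort columns - this is illustrative, not a specific Autoplay API:

```python
import pandas as pd

# Hypothetical export of tagged session events; columns are illustrative only.
events = pd.read_csv("tagged_sessions.csv")  # user_id, tag, cohort

# How many distinct users hit the same tagged friction point?
blocked = events[events["tag"] == "onboarding_confusion"]
total_affected = blocked["user_id"].nunique()

# Slice by cohort: is this just new users, self-serve accounts, EU customers?
by_cohort = blocked.groupby("cohort")["user_id"].nunique().sort_values(ascending=False)

print(f"Users hitting this blocker: {total_affected}")
print(by_cohort)
```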
Qual + quant: how to make decisions that scale
To know if feedback is valid (and worth prioritizing), run it through both lenses:
Qualitative
User interviews
In-app feedback forms
Support tickets and chat transcripts
Quantitative
Session replays
Drop-off analysis
Click maps and scroll depth
Funnel conversion
Tag frequency in Autoplay
Example:
3 power users complain about the export feature - all high LTV, all churn risk.
Only 5% of all users use that feature, but 80% of enterprise accounts do. That’s a fix worth prioritizing.
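That comparison is easy to put into numbers. A quick sketch of the segment math behind the example above, using made-up usage data (user_id, segment, used_export as 0/1) rather than anything pulled from a real account:

```python
import pandas as pd

# Made-up feature-usage export; column names are illustrative only.
usage = pd.read_csv("feature_usage.csv")  # user_id, segment, used_export (1 if used)

# Overall adoption vs adoption within the segment that carries the revenue.
overall_rate = usage["used_export"].mean()
by_segment = usage.groupby("segment")["used_export"].mean()

print(f"All users using export: {overall_rate:.0%}")                     # e.g. ~5%
print(f"Enterprise accounts:    {by_segment.get('enterprise', 0):.0%}")  # e.g. ~80%
```

Two numbers, two very different decisions: 5% says ignore it, 80% of enterprise says fix it.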
Don’t let one user set the roadmap
Interviews are a signal, not a decision.
Use them to generate hypotheses, not conclusions. Always verify with behavior, and always ask: how many users is this really affecting?
And which ones?
Availability bias makes you think you’re being user-centric, when really, you’re just being reactive.
Final thought
Not every loud complaint is a sign of a big problem.
And not every small annoyance is insignificant.
Listen to your users. But don’t let one vivid story speak for all of them.