
How to Validate a New Feature Before Building It

Stop building features nobody uses. Validate new feature ideas with card sorts, surveys, and tree tests to prove demand and find the right UX before writing code.

ValidateThat Team

To validate a new feature before building it, you need to separate real user demand from internal assumptions by running lightweight research studies before any code gets written. Most product teams skip this step — they rely on gut instinct, stakeholder opinions, or a single customer request — and end up shipping features that fewer than 20% of users ever touch. Card sorts, surveys, and tree tests let you test feature desirability, priority, and discoverability in days, not weeks, giving you hard data to build the right thing or kill the wrong one early.

Key Takeaways

  • Time required: 3-5 days from setup to decision
  • Difficulty: Beginner to intermediate
  • What you need: A feature hypothesis, access to users or prospects, and a research tool
  • Key tip: Test desirability and findability separately — a wanted feature that users can't find is as useless as one nobody wants

What You'll Need

  • ValidateThat account (free at validatethat.io)
  • A written feature hypothesis (what it does, who it's for, what problem it solves)
  • Access to 15-30 current users or target users
  • Your product's current navigation structure or sitemap
  • A list of 15-25 feature concepts (including the one you're testing)

Step 1: Write a Feature Hypothesis You Can Actually Test

Before running any study, write your feature hypothesis in this format: "[User segment] needs [feature] because [problem], and they'll find it by [expected navigation path]."

This forces you to be specific about four things: who wants it, what it does, why they need it, and where it lives. Vague hypotheses like "users want a dashboard" can't be validated because there are no clear success or failure criteria.

Break your hypothesis into testable components: desirability (do users want this?), priority (how important is it versus other things?), and discoverability (can they find it in our product?).

Pro tip: If your hypothesis came from a single customer request, note that. Single-customer features are the most common source of wasted engineering time. Validation tells you whether it's a pattern or an outlier.

Step 2: Survey for Desirability and Problem Confirmation

Create a 5-question survey targeting the user segment from your hypothesis. Don't describe your feature — describe the problem it solves and ask users how they currently handle it.

Key questions to include: "How often do you encounter [problem]?" (frequency), "How do you currently solve [problem]?" (workarounds), "How satisfied are you with your current approach?" (severity on a 1-5 scale), and "If you could change one thing about [product area], what would it be?" (open-ended).

If fewer than 50% of respondents experience the problem at least weekly, or average satisfaction with current workarounds is above 3.5/5, the feature probably isn't worth building.
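
If your survey tool lets you export raw responses, this go/no-go check is a few lines of arithmetic. Here's a minimal sketch in Python, assuming a simple per-respondent export with a frequency answer and a 1-5 satisfaction score; the field names and answer labels are illustrative, not a ValidateThat export format:

```python
# Hypothetical survey export: one dict per respondent.
responses = [
    {"frequency": "daily", "satisfaction": 2},
    {"frequency": "weekly", "satisfaction": 3},
    {"frequency": "rarely", "satisfaction": 4},
]

AT_LEAST_WEEKLY = {"daily", "several times a week", "weekly"}

weekly_share = sum(r["frequency"] in AT_LEAST_WEEKLY for r in responses) / len(responses)
avg_satisfaction = sum(r["satisfaction"] for r in responses) / len(responses)

# Thresholds from this step: under 50% weekly frequency, or workaround
# satisfaction above 3.5/5, suggests the feature isn't worth building.
if weekly_share < 0.5 or avg_satisfaction > 3.5:
    print("Weak demand signal; consider killing or reworking the idea")
else:
    print(f"{weekly_share:.0%} hit the problem weekly; workaround satisfaction {avg_satisfaction:.1f}/5")
```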

Pro tip: Include a question asking users to rank 4-5 potential improvements (including your feature idea, described as a benefit). This gives you relative priority data alongside absolute desirability.

Step 3: Run a Card Sort to Test Feature Grouping and Priority

Create an open card sort with 15-25 cards representing features — include your proposed feature alongside existing features, competitor features, and a few deliberate decoys. Ask participants to group them by importance or by how they'd organize a product in this space.

This reveals two things: where users naturally group your feature (which tells you where it belongs in your product), and how they prioritize it relative to everything else. If participants consistently put your feature in a "nice to have" or "don't need" group, that's a clear signal.

Run the card sort on ValidateThat and aim for 20+ participants. The similarity matrix will show you which features users mentally associate with each other — giving you insight into not just whether to build the feature, but what to bundle it with.
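
If you want to sanity-check the similarity matrix yourself, the underlying calculation is just co-occurrence counting: for every pair of cards, the share of participants who placed them in the same group. A rough sketch in Python, using an assumed data structure (one list of groups per participant) rather than any specific export format:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical open-card-sort results: each participant's groups as sets of card names.
sorts = [
    [{"Export to CSV", "Scheduled reports"}, {"Dark mode", "Custom themes"}],
    [{"Export to CSV", "Scheduled reports", "Custom themes"}, {"Dark mode"}],
]

pair_counts = defaultdict(int)
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Similarity = share of participants who grouped the two cards together.
for (a, b), count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: {count / len(sorts):.0%} grouped together")
```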

Pro tip: Pay attention to the labels participants create for their groups. If your feature ends up in a group labelled "advanced" or "power user," that tells you about the audience size. If it's in "essential" or "daily use," you've got broad demand.

Step 4: Tree Test for Discoverability

Take your product's current navigation structure and add the new feature where you plan to put it. Create a tree test with 4-5 tasks, including 2-3 that require finding the new feature.

For example: "You want to [accomplish the goal your feature enables]. Where would you go?" If participants can't find it (below 70% success rate) or take indirect paths to get there, your feature placement needs work — even if the feature itself is validated.

This is the step most teams skip, and it's why validated features still fail. A feature can be highly desired but completely invisible if it's buried in the wrong menu or labelled with jargon users don't recognize.

Pro tip: Run two versions of your tree test — one with the feature in your proposed location and one with it in the location suggested by your card sort results. Compare success rates to pick the winner.
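
Comparing the two placements comes down to per-task success rates measured against the 70% bar. A minimal sketch, with made-up outcomes standing in for real tree-test results (True means the participant reached the correct node):

```python
# Hypothetical tree-test outcomes for the "find the new feature" tasks, per variant.
results = {
    "proposed_location": [True, True, False, True, False, True, True, False],
    "card_sort_location": [True, True, True, True, False, True, True, True],
}

for variant, outcomes in results.items():
    success = sum(outcomes) / len(outcomes)
    verdict = "passes" if success >= 0.70 else "needs rework"
    print(f"{variant}: {success:.0%} task success ({verdict} against the 70% bar)")
```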

Step 5: Synthesize Into a Build / Defer / Kill Decision

Compile your results across all three studies and score the feature:

  • Desirability (from survey): What percentage experience the problem? How severe is it?
  • Priority (from card sort): Where did users rank it relative to other features?
  • Discoverability (from tree test): Can users find it? What's the task success rate?

If all three are strong (>50% problem frequency, top-third priority, >70% findability), build it. If desirability is strong but priority or discoverability is weak, defer and redesign the approach. If desirability is weak, kill it regardless of everything else.
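
The decision rule above is mechanical enough to write down explicitly, which keeps the call consistent across features. An illustrative sketch using the thresholds from this guide; the field names and the way priority is encoded are assumptions, stand-ins for your own study results:

```python
# Scorecard for one feature, compiled from the three studies.
scorecard = {
    "problem_frequency": 0.62,   # share of respondents hitting the problem at least weekly
    "priority_top_third": True,  # ranked in the top third of the card sort
    "findability": 0.74,         # tree-test success rate on the feature tasks
}

desirable = scorecard["problem_frequency"] > 0.5
prioritized = scorecard["priority_top_third"]
findable = scorecard["findability"] > 0.7

if not desirable:
    decision = "kill"       # weak desirability overrides everything else
elif prioritized and findable:
    decision = "build"
else:
    decision = "defer"      # wanted, but priority or placement needs rework

print(decision)
```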

Pro tip: Present this scorecard to stakeholders instead of arguing about opinions. Data from 20-30 real users is more persuasive than any internal debate. It also protects you if a stakeholder pushes for a feature that users clearly don't want.

Step 6: Validate the MVP Scope With a Closed Card Sort

Once you've decided to build, run a closed card sort to define your minimum viable version. List all the sub-features and capabilities you could include, and create 3 predefined categories: "Must have for launch," "Nice to have," and "Can wait."

This prevents scope creep by letting users define the MVP instead of your team. If 80% of participants put a sub-feature in "can wait," cut it from v1 without guilt.
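
The cut rule is a simple tally: for each sub-feature, the share of participants who placed it in "Can wait." A short sketch, assuming a hypothetical export of participant placements keyed by sub-feature:

```python
from collections import Counter

# Hypothetical closed-sort placements: sub-feature -> one category choice per participant.
placements = {
    "Bulk import": ["Must have for launch"] * 18 + ["Nice to have"] * 2,
    "Custom branding": ["Can wait"] * 17 + ["Nice to have"] * 3,
}

for feature, votes in placements.items():
    share_can_wait = Counter(votes)["Can wait"] / len(votes)
    if share_can_wait >= 0.8:
        print(f"Cut from v1: {feature} ({share_can_wait:.0%} said it can wait)")
    else:
        print(f"Keep for review: {feature}")
```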

Pro tip: Include 2-3 sub-features you think are essential but that might be engineering-heavy. If users consistently put them in "nice to have," you've just saved your team weeks of development on v1.

Pro Tips

  • Run desirability and discoverability tests in parallel — they measure different things and don't interfere with each other, so you can collect data simultaneously
  • Test with non-users too — if you're building a feature to attract new users, your current users aren't the right validation audience. Recruit from your target market instead
  • Document the "no" decisions — a validated kill is just as valuable as a validated build. Keep a log of features you decided not to build and why, so the conversation doesn't restart every quarter
  • Revalidate after major product changes — a feature that wasn't a priority six months ago might be critical now if you've shipped other things that changed the landscape

Common Mistakes to Avoid

  • Asking users "would you use this feature?" — hypothetical questions produce hypothetical answers. Test the problem, not the solution concept
  • Only surveying power users — they'll validate everything because they want more capabilities. Include casual users and new users to get a realistic demand picture
  • Skipping the tree test — a feature that scores high on desirability but can't be found in your product is functionally nonexistent. Always test discoverability
  • Treating 5 responses as validation — one enthusiastic user in a survey of 5 looks like 20% demand. The same user in a survey of 30 is 3%. Get at least 15-20 responses before making decisions

Frequently Asked Questions

How long does feature validation take?

A complete validation cycle — survey, card sort, and tree test — takes 3-5 days from setup to decision. You can set up all three studies in an hour, distribute links, and have enough responses within 2-3 days. Analysis takes another hour or two. It's fast enough to fit into a single sprint.

Should I validate every feature?

No. Validate features that are expensive to build, risky (uncertain demand), or contentious (team disagrees on value). Bug fixes, performance improvements, and features with clear contractual requirements don't need validation. Use your judgment — the goal is to de-risk, not to bureaucratize.

What if stakeholders disagree with the validation results?

Share the raw data. Most stakeholder resistance comes from not seeing the evidence firsthand. Show them the survey responses, the card sort similarity matrix, and the tree test success rates. If they still want to override the data, document their decision so you can reference it later.

Can I validate features for a product that hasn't launched yet?

Absolutely. Use your proposed product structure for tree tests and feature concepts for card sorts. The methods work the same way — you're testing mental models and priorities, not existing product usage. This is actually the highest-value time to validate, since pre-launch changes are cheap.

Ready to Try It Yourself?

Start your card sorting study for free. Follow this guide step-by-step.