
How to Validate a Product Redesign With Real Users

De-risk your product redesign by validating it with real users first. Use card sorts, tree tests, and surveys to prove your new design works before shipping.

ValidateThat Team

To validate a product redesign with real users, run comparative research studies — card sorts, tree tests, and surveys — that measure whether your new design actually performs better than what you have today. Most redesigns are justified by aesthetic preferences or stakeholder opinions rather than evidence. The result is predictable: you ship a beautiful new interface that confuses existing users and doesn't convert new ones any better. Validation gives you the data to prove the redesign improves usability before you commit engineering resources to building it.

Key Takeaways

  • Time required: 1-2 weeks for a full comparative validation
  • Difficulty: Intermediate
  • What you need: Your current product structure, the proposed new structure, and access to both existing and new users
  • Key tip: Always benchmark the current design first — you can't prove improvement without a baseline

What You'll Need

  • ValidateThat account (free at validatethat.io)
  • Your current product's navigation structure documented as a tree hierarchy
  • Your proposed redesign's navigation structure documented the same way
  • Access to 15-25 existing users and 15-25 prospective (new) users
  • Your current usability metrics (task completion rates, support tickets, NPS)
  • Screenshots or descriptions of the current vs proposed designs

Step 1: Benchmark Your Current Design

Before testing anything new, establish a baseline. Create a tree test using your current product's navigation structure with 6-8 tasks that represent core user workflows. Run it with 15-20 participants (mix of existing users and new users).

Record the task success rate, directness (did users go straight to the right place or backtrack?), and time to complete for each task. These numbers become your baseline — every proposed change needs to beat them or it's not an improvement, it's just different.
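If you capture each tree-test attempt as structured data, the baseline numbers fall out of a few lines of tabulation. Here's a minimal Python sketch, assuming each result is recorded with illustrative `task`, `success`, `direct`, and `seconds` fields (these names are assumptions for the example, not a ValidateThat export format):

```python
from statistics import mean

# Illustrative tree-test results: one dict per participant attempt.
# Field names and values are hypothetical, not a real export format.
results = [
    {"task": "Find billing settings", "success": True,  "direct": True,  "seconds": 14},
    {"task": "Find billing settings", "success": True,  "direct": False, "seconds": 31},
    {"task": "Find billing settings", "success": False, "direct": False, "seconds": 52},
    {"task": "Invite a teammate",     "success": True,  "direct": True,  "seconds": 9},
    {"task": "Invite a teammate",     "success": True,  "direct": True,  "seconds": 12},
    {"task": "Invite a teammate",     "success": False, "direct": False, "seconds": 44},
]

def baseline_metrics(results):
    """Per-task success rate, directness rate, and mean completion time."""
    by_task = {}
    for r in results:
        by_task.setdefault(r["task"], []).append(r)
    metrics = {}
    for task, rows in by_task.items():
        metrics[task] = {
            "success_rate": sum(r["success"] for r in rows) / len(rows),
            "directness":   sum(r["direct"] for r in rows) / len(rows),
            "mean_seconds": mean(r["seconds"] for r in rows),
        }
    return metrics

for task, m in baseline_metrics(results).items():
    print(f"{task}: {m['success_rate']:.0%} success, "
          f"{m['directness']:.0%} direct, {m['mean_seconds']:.0f}s avg")
```

Whatever tool you use, the point is to end up with one row of numbers per task that you can later diff against the redesign.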

Also send a short survey to existing users: "What's hardest to find in [product]?", "What would you change about the navigation?", and "Which features do you use most often?" This surfaces the actual pain points your redesign should solve.

Pro tip: Separate your baseline results by user type. Existing users may navigate the current design well through muscle memory, while new users struggle. Your redesign needs to improve things for new users without breaking experienced users' workflows.

Step 2: Run an Open Card Sort to Discover Natural Mental Models

Before you design the new structure, discover how users actually think about your product's features. Create an open card sort with 20-30 cards representing all major features, pages, and actions in your product.

Run this with a mix of existing users and people who match your target audience but haven't used the product. Let them group and label the cards freely. The resulting similarity matrix shows you which features users naturally associate with each other — and crucially, where your current structure disagrees with user expectations.

Look for high-agreement clusters (features that 70%+ of participants group together) and common labels. These become the foundation of your new information architecture, grounded in user mental models rather than internal team structure or legacy decisions.
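One way to compute that similarity matrix yourself: for every pair of cards, count the fraction of participants who placed them in the same group, then keep the pairs above your agreement threshold. A short sketch with hypothetical card names:

```python
from itertools import combinations

# Illustrative open card sort data: each participant's groupings,
# as lists of card sets. Card names are hypothetical.
sorts = [
    [{"Invoices", "Payment methods", "Billing history"}, {"Profile", "Notifications"}],
    [{"Invoices", "Billing history"}, {"Payment methods", "Profile", "Notifications"}],
    [{"Invoices", "Payment methods", "Billing history"}, {"Profile", "Notifications"}],
]

def similarity(sorts):
    """Fraction of participants who placed each card pair in the same group."""
    counts = {}
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

sim = similarity(sorts)
# Pairs grouped together by 70%+ of participants seed top-level categories.
high_agreement = {pair for pair, score in sim.items() if score >= 0.7}
print(high_agreement)
```

Pairs that clear the 70% bar become the seeds of your top-level categories; pairs in the murky middle are the ones worth a follow-up question or a closed card sort.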

Pro tip: Run separate card sorts for existing users and new users. If they produce very different groupings, you'll need to design a structure that works for both — or create progressive disclosure that evolves as users become more experienced.

Step 3: Draft Your Redesigned Structure From Card Sort Data

Use the card sort similarity matrix and common labels to draft your new navigation hierarchy. Start with the high-agreement clusters as your top-level categories, then nest features within them based on participant groupings.

Compare this data-driven structure against what your design team proposed. Where they align, you have strong confidence. Where they differ, trust the user data — it represents how real people think, not how your team wishes they'd think.

Keep the structure to 3-4 levels deep maximum. Every additional level of nesting reduces discoverability. If your card sort produced deeply nested groupings, flatten them by combining related groups or using progressive disclosure patterns.

Pro tip: Name your navigation items using the exact labels participants created in the card sort. User-generated labels outperform internal jargon or clever marketing terms every time.

Step 4: Tree Test the New Structure Against the Baseline

Create a tree test using your proposed new structure with the same 6-8 tasks you used in Step 1. Run it with a fresh set of 15-20 participants (don't reuse baseline participants — they've already seen the tasks).

Compare results directly against your baseline: task success rate, directness, and time. For your redesign to be validated, it should show meaningful improvement on at least the tasks that performed worst in the baseline. A redesign that improves some tasks but makes others worse is a lateral move, not an upgrade.

If specific tasks regress, investigate where participants went wrong in the new structure and adjust before committing to the redesign.
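The comparison itself can be mechanical. A sketch with illustrative success rates, flagging each task as improved, regressed, or flat; the 5% noise margin here is an assumption for the example, not a standard:

```python
# Hypothetical per-task success rates from the baseline (Step 1) and the
# proposed structure (this step). All numbers are illustrative.
baseline = {"Find billing settings": 0.55, "Invite a teammate": 0.85, "Export a report": 0.40}
proposed = {"Find billing settings": 0.80, "Invite a teammate": 0.82, "Export a report": 0.75}

def compare(baseline, proposed, regression_margin=0.05):
    """Flag each task as improved, regressed, or roughly flat."""
    verdicts = {}
    for task in baseline:
        delta = proposed[task] - baseline[task]
        if delta > regression_margin:
            verdicts[task] = "improved"
        elif delta < -regression_margin:
            verdicts[task] = "regressed"
        else:
            verdicts[task] = "flat"
    return verdicts

verdicts = compare(baseline, proposed)
# Validated only if nothing meaningfully regressed; you'd also check that
# the worst baseline tasks are among the improvements.
validated = "regressed" not in verdicts.values()
print(verdicts, "validated:", validated)
```

A verdict table like this makes the "lateral move vs upgrade" call explicit instead of a matter of impression.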

Pro tip: Run the new tree test with both existing users and new users separately. The redesign must work better for new users (that's usually the whole point) without significantly degrading the experience for existing users who know the old structure.

Step 5: Preference Test Key Design Changes With Surveys

For visual and interaction design changes (not just structure), create a survey showing side-by-side comparisons of the current and proposed designs. For each comparison, ask: "Which version is clearer?", "Which would you prefer to use daily?", and "Why?"

Focus on the 3-4 most significant visual changes — not every pixel-level difference. Include at least one comparison where you expect the new design to win convincingly. If it doesn't win even on your strongest change, the redesign direction may need rethinking.

The open-ended "why" responses are where the real insight lives. Users might prefer the new design's look but find the old design more functional, or vice versa. These nuances help you cherry-pick the best elements from both versions.

Pro tip: Include a "no preference" option. If users consistently can't tell the difference or don't care about a specific change, that change isn't worth the engineering effort. Focus your redesign budget on the changes users actually notice and value.
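Tallying the votes, including the "no preference" share, takes only a few lines. A sketch with made-up responses and an assumed 40% "no preference" cutoff for deciding whether a change is worth building:

```python
from collections import Counter

# Hypothetical preference-survey responses for one side-by-side comparison.
votes = ["new", "new", "old", "no preference", "new", "no preference",
         "new", "old", "no preference", "no preference"]

tally = Counter(votes)
total = len(votes)
shares = {option: n / total for option, n in tally.items()}

# If "no preference" dominates, the change may not be worth the engineering
# effort regardless of which version narrowly wins. The 40% cutoff is an
# assumption for this example.
decisive = shares.get("no preference", 0) < 0.4
print(shares, "worth shipping this change:", decisive)
```

Run the same tally per comparison, then read the open-ended "why" answers for the comparisons that came out decisive.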

Step 6: Run a Migration Risk Assessment

Before shipping, assess the risk of confusing existing users. Create a survey for current active users that shows the proposed new navigation and asks: "Can you find where [feature they use regularly] would live in this new layout?" for 3-4 of their most-used features.

If more than 30% of existing users can't locate their regular features in the new structure, you need a migration plan: persistent "feature moved" indicators, a guided tour of changes, or a gradual rollout with opt-out.
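Scoring the assessment is straightforward once responses are structured. A sketch with hypothetical data, treating a user as at risk if they missed at least one of their regular features; that interpretation, like the 30% threshold above, is an assumption you should adapt to your product:

```python
# Hypothetical migration-check responses: for each existing user, whether
# they located each of their regular features in the proposed layout.
responses = [
    {"user": "u1", "found": [True, True, False]},
    {"user": "u2", "found": [True, True, True]},
    {"user": "u3", "found": [False, False, True]},
    {"user": "u4", "found": [True, True, True]},
    {"user": "u5", "found": [True, False, True]},
]

def migration_risk(responses, threshold=0.30):
    """Share of existing users who missed at least one regular feature."""
    lost = sum(1 for r in responses if not all(r["found"]))
    rate = lost / len(responses)
    return rate, rate > threshold

rate, needs_plan = migration_risk(responses)
print(f"{rate:.0%} of existing users lost at least one feature; "
      f"migration plan needed: {needs_plan}")
```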

This step prevents the most common redesign failure — shipping a better product that your existing users revolt against because everything moved.

Pro tip: Segment this assessment by user tenure and engagement level. Power users who log in daily will be most disrupted by changes. If they can't find their features, plan a dedicated communication and transition strategy for this segment.

Step 7: Build Your Validation Report and Get Buy-In

Compile your findings into a single document with clear before/after metrics:

  • Tree test task success: current vs proposed (per task)
  • Card sort alignment: how closely does the new structure match user mental models?
  • Preference test: which design version users chose and why
  • Migration risk: what percentage of existing users can navigate the new structure

This report serves two purposes: it gives stakeholders confidence to approve the redesign investment, and it gives your team specific metrics to maintain during implementation. If the redesigned product launches and tree test scores drop below your validated benchmarks, something went wrong in translation from design to code.
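If you keep the metrics in one structure, the before/after table for the report can be generated rather than hand-built, which also makes it easy to regenerate after each iteration. A sketch with illustrative numbers:

```python
# Hypothetical before/after metrics assembled into a markdown summary
# table for the validation report. All numbers are illustrative.
metrics = [
    ("Tree test success (avg)",                 "58%", "79%"),
    ("Directness (avg)",                        "41%", "66%"),
    ("Preference: chose this design",           "36%", "64%"),
    ("Existing users who found their features", "-",   "76%"),
]

def report_table(metrics):
    """Render (metric, current, proposed) rows as a markdown table."""
    lines = ["| Metric | Current | Proposed |", "| --- | --- | --- |"]
    for name, before, after in metrics:
        lines.append(f"| {name} | {before} | {after} |")
    return "\n".join(lines)

print(report_table(metrics))
```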

Pro tip: Include a "changes we decided not to make" section. This shows stakeholders that validation isn't just about confirming the redesign — it's about identifying which parts of the current design actually work well and should be preserved.

Pro Tips

Test structure before visual design — information architecture problems cause more user confusion than visual design problems. Validate the tree before you worry about the theme.

Use fresh participants for each test — reusing participants from your baseline creates learning effects that inflate your redesign scores. Always recruit separate cohorts.

Validate incrementally if possible — instead of testing a complete redesign, validate individual structural changes separately. This tells you exactly which change drives which improvement.

Keep baseline data forever — every future redesign and iteration should be compared against this benchmark. It becomes your product's usability history.

Common Mistakes to Avoid

Skipping the baseline — without benchmarking the current design, you can't prove the redesign is better. "Users like the new design" means nothing without "...more than the old one".

Testing only with your design team — designers are the worst test participants for their own redesign. They know the intent behind every decision. Test with people who are encountering the design cold.

Redesigning everything at once — big-bang redesigns are impossible to validate because you can't attribute improvements or regressions to specific changes. Phase your redesign and validate each phase.

Ignoring existing user migration — a redesign that's better for new users but confuses existing users isn't a net win. Always validate both audiences separately.

Frequently Asked Questions

How many users do I need to validate a product redesign?

Plan for 15-20 participants per tree test (baseline and proposed), 15-25 per card sort, and 20-30 survey respondents. Because the tree tests need fresh cohorts, that adds up to roughly 65-95 users across all studies. Since tree tests and card sorts are async, you don't need to schedule any calls — participants complete them on their own time.

Should I validate a redesign with existing users or new users?

Both, separately. Existing users tell you about migration risk and workflow disruption. New users tell you about first-impression clarity and discoverability. A successful redesign improves scores for new users without significantly degrading scores for existing users.

How do I validate a redesign when stakeholders have already decided to ship it?

Frame your research as "optimizing the redesign" rather than "deciding whether to redesign." Run tree tests to identify which parts of the new structure need adjustment and surveys to prioritize which visual changes matter most. Even if the redesign decision is made, validation ensures it ships in the best possible form.

What if the validation shows the current design is better?

This is a valuable outcome, not a failure. Present the data and recommend targeted improvements to the current design instead of a full redesign. You've just saved your team months of work and your users the frustration of relearning a product that was already working. Document the findings so the "let's redesign" conversation doesn't restart in six months.

Ready to Try It Yourself?

Start your card sorting study for free. Follow this guide step-by-step.