Sample Size

Sample size is the number of participants in a card sorting study, and it directly determines whether your results reflect real user patterns or random noise. The right number depends on the type of sort you're running, the diversity of your audience, and how much ambiguity you can tolerate in your analysis.

Key Takeaways

  • Open card sorts need 15-30 participants; closed sorts work with 10-15
  • Diminishing returns hit hard after 30 participants — the similarity matrix barely changes
  • 15 well-recruited participants beat 50 random ones every time
  • Academic ideals (20-30) are a starting point, not a rule

Why the Number Varies by Sort Type

Open card sorts ask participants to create their own categories, which introduces more variability. Every person brings a slightly different mental model, so you need more data points before stable clusters emerge. Research by Tullis and Wood (2004) found that 20-30 participants captured about 90% of the category structure in open sorts. At 15, you'll see the dominant groupings clearly. By 30, the remaining gaps are usually edge cases.
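
To make "stable clusters" concrete: most open-sort analysis starts from a card-by-card similarity matrix, where each pair of cards scores higher the more participants placed them in the same group. Here's a minimal sketch in Python, with invented card names and a hypothetical sorts structure standing in for real study exports:

  from itertools import combinations
  from collections import defaultdict

  # Hypothetical open-sort results: one dict per participant, mapping
  # their self-invented category label to the cards they put in it.
  sorts = [
      {"Reports": ["export CSV", "share dashboard"], "Setup": ["invite user"]},
      {"Sharing": ["share dashboard", "invite user"], "Data": ["export CSV"]},
      {"Output": ["export CSV", "share dashboard"], "Admin": ["invite user"]},
  ]

  def similarity_matrix(sorts):
      """Fraction of participants who grouped each pair of cards together."""
      together = defaultdict(int)
      for sort in sorts:
          for group in sort.values():
              for pair in combinations(sorted(group), 2):
                  together[pair] += 1
      return {pair: count / len(sorts) for pair, count in together.items()}

  print(similarity_matrix(sorts))
  # ('export CSV', 'share dashboard') scores 2/3 because two of the three
  # participants grouped them. With only a handful of participants, a
  # single outlier like participant 2 moves these values a lot — which is
  # exactly why open sorts need the larger sample.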

Closed card sorts constrain the problem. Categories are predefined, so participants only decide which card goes where. That fixed structure reduces noise, and 10-15 participants typically produce reliable agreement rates. Hybrid sorts sit in between — plan for 15-20.
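
For a closed sort, the headline number per card is usually an agreement rate: the share of participants who placed the card in its most popular predefined category. A rough sketch along the same lines, with invented placements:

  from collections import Counter

  # Hypothetical closed-sort placements: card -> the predefined category
  # each of four participants assigned it to.
  placements = {
      "export CSV":  ["Reports", "Reports", "Reports", "Settings"],
      "invite user": ["Settings", "Settings", "Reports", "Settings"],
  }

  for card, categories in placements.items():
      top_category, votes = Counter(categories).most_common(1)[0]
      print(f"{card}: {top_category} ({votes / len(categories):.0%} agreement)")
  # export CSV: Reports (75% agreement)
  # invite user: Settings (75% agreement)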

The Diminishing Returns Problem

Here's what actually happens as you add participants to an open card sort:

  • At 10 participants: You can see rough clusters forming, but a single outlier can shift groupings significantly
  • At 15 participants: The top 3-5 clusters are clearly visible and stable
  • At 20 participants: Secondary clusters solidify, and the similarity matrix looks nearly complete
  • At 30 participants: The matrix barely changes from participant 25 to 30

Going from 15 to 30 participants might refine a few borderline card placements, but it won't redraw your category structure. Going from 30 to 60 is almost certainly wasted budget. That money is better spent on a follow-up tree test to validate the structure you found.
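
If you'd rather see the diminishing-returns curve on your own data than take these thresholds on faith, the check Tullis and Wood used works: build the similarity matrix from the first n participants and correlate it with the matrix from the full sample, then watch where the curve flattens. A sketch reusing the similarity_matrix function above (statistics.correlation requires Python 3.10+):

  from statistics import correlation

  def stability_curve(sorts, step=5):
      """Correlation between the n-participant matrix and the full matrix."""
      full = similarity_matrix(sorts)      # from the earlier sketch
      pairs = sorted(full)                 # fixed pair order for comparison
      for n in range(step, len(sorts) + 1, step):
          partial = similarity_matrix(sorts[:n])
          x = [partial.get(pair, 0.0) for pair in pairs]
          y = [full[pair] for pair in pairs]
          print(f"n={n:3d}  correlation with full matrix: {correlation(x, y):.3f}")

The printed correlations climb quickly and then sit near 1.0 well before the full sample is used — the flattening behind the "after 30" threshold.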

Quality Over Quantity

A common mistake: recruiting a large sample from a convenience panel without screening for relevance. If you're redesigning a B2B analytics platform, 15 product managers who use competing tools will generate cleaner data than 50 general-population respondents who've never touched a dashboard.

Participant quality factors that matter more than raw count:

  • Domain familiarity: Do they understand the content on the cards?
  • Motivation: Are they engaged or speed-clicking for incentive pay?
  • Audience match: Do they represent your actual users, not just available bodies?

For an internal redesign of a SaaS help center, we once ran an open sort with 18 support agents and 14 customers. The support agents produced tighter clusters because they dealt with the content daily. The customer group had more variability but surfaced naming preferences the agents missed. Both samples were small. Both were useful. Neither would have improved much at double the size.

Practical Guidelines

Sort Type   Minimum   Recommended   Diminishing Returns
Open        15        20-30         After 30
Closed      10        15-20         After 20
Hybrid      15        15-25         After 25

If you're running a remote, unmoderated sort and expect some percentage of low-quality responses (participants who finish in under 2 minutes or put everything in one category), over-recruit by 20-30%. Aim for 20 to net 15 clean responses.
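
That over-recruitment figure is a one-line calculation if you want it in a planning script; the 25% loss rate here is an assumption chosen to match the 20-to-net-15 example:

  from math import ceil

  def recruits_needed(target_clean, expected_loss_rate):
      """Recruits required so roughly target_clean usable sorts survive screening."""
      return ceil(target_clean / (1 - expected_loss_rate))

  print(recruits_needed(15, 0.25))  # 20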

Frequently Asked Questions

How many participants do you need for a card sort? Open card sorts need 15-30 participants for stable category clusters. Closed card sorts can produce reliable results with 10-15 participants because the categories are predefined, reducing variability. Hybrid sorts fall somewhere in between, typically requiring 15-20 participants. Beyond 30 participants, most studies see diminishing returns where the similarity matrix barely changes.

Is a larger sample size always better for card sorting? No. Diminishing returns hit hard after about 30 participants. The top clusters in your similarity matrix are typically visible by 15 participants and stabilize by 30. Recruiting beyond that adds cost and time without meaningfully changing your results. Fifteen well-recruited participants who match your target audience will produce better data than 50 random ones.

What is the minimum sample size for a card sort study? The practical minimum is 10 participants for a closed card sort and 15 for an open card sort. Below these thresholds, individual outliers can distort your similarity matrix and make clusters unreliable. Academic literature generally cites 20-30 as ideal, but for internal redesigns with well-recruited participants, 15 is often sufficient.
