What is Card Sorting? Complete Beginner's Guide (2026)
Card sorting is a UX research method where you ask people to organize content into groups that make sense to them. That's it. You give someone a pile of items — features, pages, topics — and they sort them into categories based on how they naturally think about those things.
Why bother? Because the way your team organizes information is almost never the way your users think about it. You group things by how they were built or which team owns them. Your users group things by what they're trying to do. Card sorting closes that gap. It's one of the fastest, cheapest ways to build navigation and content structures that actually work for the people using them.
A typical study takes 2-3 days to run with 20-30 participants, costs somewhere between nothing and $30 with online tools, and gives you a clear picture of how your users' mental models work. You can run one this week.
Card Sorting in 60 Seconds
What it is: Users organize cards (representing content, features, or pages) into groups that make sense to them.
Why it matters: Shows you how users think, not how you think.
When to use it: Designing navigation, organizing content, or structuring information.
How long it takes: 2-3 days to run a study, 1 day to analyze.
Cost: Free to $30 (using online tools + small incentives).
The Card Sorting Metaphor
Card sorting works like organizing a bookstore based on customer behavior rather than internal logic.
Your way (insider perspective):
- Fiction by publisher
- Non-fiction by publication date
- Technical books by ISBN
Customer's way (user perspective):
- Fiction by genre (mystery, romance, sci-fi)
- Non-fiction by topic (cooking, business, history)
- Quick finds (bestsellers, new arrivals, staff picks)
Card sorting reveals the customer's way.
How Card Sorting Works
The Old Way (Physical Cards)
Before digital tools, card sorting meant writing each item on an index card, sitting in a room with one participant at a time, and tallying results by hand. It worked, but the logistics kept sample sizes tiny — usually 5-10 people — and analysis took weeks.
The process was straightforward:
- Write each item on an index card
- Give participants a stack
- Ask them to group cards into piles
- Have them name each pile
- Record everything manually
- Hunt for patterns across participants
The biggest problem wasn't the method. It was the overhead. Setting up was slow, you needed everyone in the same room, and analyzing the data meant spreadsheets and manual counting. Most teams just didn't bother.
The Modern Way (Digital/Online)
Online tools changed everything. Now you create digital cards, send a link, and get back analyzed results from dozens of participants — often within days.
Today's approach:
- Create digital "cards" in an online tool
- Send a link to 20-40 participants
- They drag and drop to organize
- Software analyzes results automatically
- You get patterns, agreement scores, and visualizations
Setup takes about 5 minutes. Participants can be anywhere in the world. And the analysis that used to take days happens instantly.
Types of Card Sorting
There are three flavors of card sorting, and which one you pick depends on where you are in the design process.
Open Card Sort
You give participants cards with no predefined categories. They create their own groups and name them whatever makes sense to them.
This is what you want when you're starting from scratch — when you don't yet know what the right categories are and you want unbiased input.
Example:
You give participants:
├─ 30 feature cards (no categories)
They create:
├─ "My Stuff" (5 cards)
├─ "Shopping" (8 cards)
├─ "Help & Support" (4 cards)
└─ "Settings" (3 cards)
What you get: Natural groupings plus category names in your users' own language.
Closed Card Sort
You provide the category names up front. Participants sort cards into your predefined buckets.
Use this when you already have a navigation structure and want to test whether it makes sense to users. It's also useful for comparing two competing structures head-to-head.
Example:
You give participants:
├─ Account
├─ Shop
├─ Support
└─ Company
They sort 30 cards into these 4 categories
What you get: How well your categories match user expectations, plus agreement scores showing where people struggled.
Hybrid Card Sort
You suggest categories but let participants modify them or create new ones. This gives you the best of both worlds — validation of your existing ideas plus room for surprises.
Example:
You suggest:
├─ Account
├─ Shop
└─ Support
Users can:
├─ Use your categories
├─ Rename them
└─ Create new ones like "My Orders"
What you get: Confirmation of what's working, plus new ideas you hadn't considered.
When to Use Card Sorting
Card sorting shines whenever you need to organize a lot of things and you're not sure how users would expect to find them.
Perfect Use Cases
Website navigation redesign. If users can't find what they need, or your menu has grown into a mess of top-level items, card sorting shows you how people actually expect things to be grouped. An e-commerce site with 50 product categories, for example, might discover through card sorting that users want to browse by occasion (work, casual, athletic) — not just product type.
Mobile app structure. When screen space is limited, you can't afford to get your groupings wrong. A banking app with 30 features might learn that users think in terms of "Move Money" and "See My Money" instead of "Transactions" and "Accounts."
Content-heavy sites. Help centers, documentation portals, knowledge bases — anywhere you have hundreds of articles that need a sensible taxonomy. Card sorting often reveals that users want task-based categories ("Getting Started," "Troubleshooting") rather than feature-based ones.
E-commerce product categorization. Designing browse and filter systems that match how people actually shop. A furniture store might discover customers think by room AND by style, leading to dual-axis navigation.
When NOT to Use Card Sorting
Testing visual design. Card sorting is about organization, not how things look. Use A/B testing or preference studies for that.
Testing workflows. Card sorting shows grouping, not step-by-step flows. Use task analysis or journey mapping instead.
Getting feedback on concepts. Card sorting assumes items are already defined. Use concept testing or interviews for early-stage ideas.
Testing actual findability. Card sorting tells you what belongs together, but not whether people can find it in a live navigation. That's what tree testing is for — and it's the natural follow-up to a card sort.
Fewer than 15 items. With too few items, the patterns aren't meaningful. Just talk to users directly.
The Science Behind Card Sorting
Card sorting works because it taps into something people do naturally: categorize. You don't need to teach someone how to group related things together. They've been doing it since childhood — fruits go here, vegetables go there, toys in this box, books on that shelf.
Mental Models Drive Everything
Research in cognitive psychology — notably Eleanor Rosch's work on categorization — shows that people organize the world into mental models: internal maps of how things relate to each other. When your navigation matches those mental models, people find what they need quickly. When it doesn't, they get lost.
Card sorting is one of the most direct ways to access those mental models. Instead of asking people what they think (which is unreliable), you watch them organize — and the patterns tell you how they actually think.
Groups Beat Experts
Here's the counterintuitive part: 20-30 regular users will produce a better information architecture than your best designer working alone. Individual experts bring too many assumptions. A group of users, on the other hand, surfaces shared patterns that no single person would arrive at.
Tullis & Wood's research found that around 15 participants reveal the vast majority of grouping patterns, and going beyond 30 shows rapidly diminishing returns. The sweet spot for most studies is 20-30 participants.
Step-by-Step: Your First Card Sort
Here's the full process. You can go from nothing to a launched study in about 30 minutes, and have results within a week.
Step 1: Define Your Goal (5 minutes)
Before you create a single card, get clear on what you're trying to learn. Ask yourself three questions:
- What am I trying to organize?
- What decision will this inform?
- Who are my users?
Example goal: "Understand how users would group our 30 product features for the app navigation redesign."
Step 2: Create Your Cards (15 minutes)
Pick 20-40 items that represent the content or features you need to organize. Use the actual labels users will see in the final product — not internal names or jargon.
Example cards (SaaS product):
- Dashboard
- Analytics Reports
- Team Chat
- File Sharing
- Task Board
- Calendar View
- Time Tracking
- Notifications
- User Permissions
- Integrations
... (20 more)
Keep names short (one to five words), specific, and distinct from each other. "Resources" is too vague. "Video Tutorials" is just right.
Step 3: Choose Your Study Type (2 minutes)
If this is your first time studying this content, go with an open card sort. It gives you unbiased results without your assumptions baked in.
Already have a structure you want to test? Use a closed sort. Want a mix? Hybrid.
Step 4: Write Instructions (5 minutes)
Keep them short and neutral. Don't accidentally nudge people toward any particular grouping.
Template:
Welcome! Thank you for helping us improve [Product].
Please organize these features into groups that make sense to you.
Create category names that describe each group.
This should take 10-12 minutes. There are no right or wrong answers!
Thank you!
Step 5: Set Up Your Study (5 minutes)
Pick a tool that fits your budget:
- CardSort — free, easy setup, great for beginners
- Optimal Workshop — comprehensive, enterprise features, pricier
- UserZoom — enterprise-focused
Step 6: Recruit Participants (1-2 days)
This is the step most people rush through, and it's the one that matters most. You need actual target users — not your coworkers, not your friends.
Aim for 20-30 participants for an open sort, 30-40 for a closed sort. Recruit from your customer email list if you can (most representative). Research panels like UserTesting and Respondent work well too. Small gift cards ($5-10) help a lot with response rates.
Step 7: Launch and Monitor (same day)
Don't send to everyone at once. Start with 5 participants. Check their responses for confusion — if card names aren't clear or instructions are ambiguous, you'll see it right away. Fix any issues, then send to the rest.
Check in daily. Send a reminder after 3 days to anyone who hasn't responded.
Step 8: Analyze Results (a few hours)
Your tool will do the heavy lifting here. Focus on:
- Cards grouped together 70%+ of the time — these are strong relationships you can build on
- Most common category names — this is your users' vocabulary
- Surprising groupings — these challenge your assumptions (which is the whole point)
- Cards with under 40% agreement — these might need clearer labels or don't naturally fit anywhere
Most tools will also generate a similarity matrix (a heatmap of relationships) and dendrograms (tree diagrams showing hierarchical groupings).
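Under the hood, a similarity matrix is usually just pairwise co-occurrence: for each pair of cards, the percentage of participants who placed them in the same group. Here's a minimal sketch of that calculation in Python — the card names and sort data are hypothetical, and real tools may compute the score slightly differently:

```python
from itertools import combinations

# Each participant's sort: {category name: [cards in that group]}
# (hypothetical data for illustration)
sorts = [
    {"My Work": ["Dashboard", "Task Board"], "Team": ["Team Chat", "File Sharing"]},
    {"Work": ["Dashboard", "Task Board", "File Sharing"], "People": ["Team Chat"]},
    {"Home": ["Dashboard"], "Collab": ["Team Chat", "File Sharing", "Task Board"]},
]

def similarity_matrix(sorts):
    """For each card pair, the % of participants who grouped them together."""
    cards = sorted({card for s in sorts for group in s.values() for card in group})
    matrix = {}
    for a, b in combinations(cards, 2):
        together = sum(
            any(a in group and b in group for group in s.values()) for s in sorts
        )
        matrix[(a, b)] = round(100 * together / len(sorts))
    return matrix

matrix = similarity_matrix(sorts)
# Dashboard and Task Board were grouped together by 2 of 3 participants
print(matrix[("Dashboard", "Task Board")])  # → 67
```

Dendrograms come from feeding exactly these pairwise scores into a hierarchical clustering routine (scipy's `linkage` and `dendrogram` functions are one common way to do it by hand).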
Step 9: Turn Results into Design
Take your high-agreement groupings and use them as the foundation for your navigation.
Example:
Card sort results:
├─ "My Work" (Dashboard, Tasks, Calendar) - 85% agreement
├─ "Team Stuff" (Chat, Files, @Mentions) - 78% agreement
├─ "Reports" (Analytics, Time Tracking, Export) - 72% agreement
└─ "Settings" (Permissions, Integrations, Account) - 81% agreement
Becomes app navigation:
├─ Work
├─ Team
├─ Insights
└─ Settings
Notice how the final labels don't always match the user labels exactly — "Team Stuff" becomes "Team" and "Reports" becomes "Insights." You're using the groupings from your data but refining the labels for clarity and brevity.
Real Example: From Card Sort to Design
Here's how a typical card sort plays out in practice.
Before the Card Sort
A product team had been organizing their app navigation around internal terminology:
├─ Features
├─ Data Management
├─ User Administration
├─ Configuration
└─ Tools & Utilities
The problem was obvious from support tickets — users kept struggling to find basic things. The labels reflected how the engineering team thought about the system, not how customers thought about their work.
What the Card Sort Revealed
They ran an open sort with 30 participants sorting 35 features. Within three days, clear patterns emerged. Users overwhelmingly grouped things by what they were trying to accomplish, not by system function:
├─ "My Projects" / "Work Space"
│ └─ Project list, Tasks, Files
├─ "Team" / "People"
│ └─ Members, Chat, Activity
├─ "Reports" / "Analytics"
│ └─ Dashboards, Data Export, Charts
└─ "Settings" / "Account"
└─ Profile, Permissions, Billing
What They Built
The final navigation drew directly from the card sort patterns:
├─ Projects (most common user label)
├─ Team
├─ Insights (more approachable than "Reports")
└─ Settings
After the redesign, the team saw significantly less navigation confusion in user testing, noticeably faster task completion, and much higher feature adoption — especially for features that had been buried under vague labels like "Tools & Utilities."
Common Beginner Mistakes
Nearly every first card sort has at least one of these problems. They're easy to avoid once you know what to watch for.
Mistake #1: Testing with Your Coworkers
What goes wrong: You run the sort with 10 colleagues from different departments.
What to do instead: Recruit 25 actual users from your customer base.
Your team already knows how the product is built. Their mental models reflect internal knowledge, not user thinking. The whole point of card sorting is to escape your own assumptions.
Mistake #2: Too Many Cards
What goes wrong: You include 80 cards, creating a 45-minute study.
What to do instead: Keep it to 30-40 cards with a 10-15 minute completion time.
After about 20 minutes, participants get fatigued and start making careless groupings. You'll end up with unreliable data that looks like it should be useful but isn't.
Mistake #3: Vague or Technical Card Names
Bad: "Resources," "Platform," "More Options," "Data Management"
Good: "Video Tutorials," "Dashboard," "Account Settings," "Export Data"
If users don't understand what a card represents, they'll sort it randomly. Your results will look noisy, and it won't be a participant problem — it'll be a card problem.
Mistake #4: Expecting Perfect Agreement
Wrong reaction: "Only 65% agreed on this grouping — the study failed!"
Right reaction: "65% is a strong primary pattern. The remaining 35% tells us there's some flexibility in how people think about these items."
You'll rarely see 100% agreement on anything. That's normal. Look for strong majority patterns, not unanimity.
Mistake #5: Skipping Validation
Wrong approach: "Card sort says this is the structure — let's build it!"
Right approach: "Card sort suggests this grouping. Let's run a tree test to make sure people can actually find things in this structure."
Card sorting tells you what belongs together. Tree testing tells you whether people can navigate the result. They're a natural pair — run them in sequence.
Card Sorting Tools
CardSort (Recommended for Beginners)
The simplest way to get started. Free plan includes 3 studies with up to 50 responses each, no credit card required. Setup takes about 5 minutes, participants don't need to create an account, and the analysis is automatic.
Best for first-time card sorters, small teams, freelancers, and anyone who doesn't want to spend money figuring out if card sorting is useful for them (spoiler: it is).
Optimal Workshop
The industry standard for large-scale research. Comprehensive analytics, advanced statistical features, multiple research methods in one platform. The downside is cost ($149-$449/month) and a steeper learning curve.
Best for UX research teams at larger organizations or agencies running studies regularly.
DIY (Google Forms + Spreadsheets)
Technically free, but you lose all the benefits that make card sorting fast — automated analysis, participant-friendly interface, similarity matrices. You'll spend hours on manual data entry and analysis that a proper tool handles in seconds.
Only worth considering if your budget is literally zero and your time has no value. (It does.)
Frequently Asked Questions
How is card sorting different from surveys for navigation design?
Surveys ask people what they think they want. Card sorting watches what they actually do. That's a critical difference. When someone fills out a survey about navigation, they're guessing at their own behavior. When they sort cards, they're demonstrating it. The results from card sorting tend to predict real navigation behavior much more reliably than survey responses about navigation preferences.
What sample size delivers reliable card sorting results?
Tullis & Wood's research shows that 15 participants surface the vast majority of grouping patterns. For most practical purposes, 20-30 participants give you enough confidence to make design decisions. Closed sorts need a bit more — aim for 30-40, since the more constrained choices require a larger sample to see clear patterns. Under 15, your results may shift substantially as you add more people.
Can card sorting replace usability testing for navigation design?
No, and it shouldn't. Card sorting tells you how people group things. It doesn't tell you whether they can find things in a live navigation structure. Tree testing fills that gap — it measures whether users can locate specific items in a proposed hierarchy. Run a card sort first to figure out the groupings, then a tree test to validate that the structure actually works.
How long should participants take to complete a card sort study?
Aim for 8-15 minutes with 30-40 cards. If people are finishing in under 5 minutes, you probably have too few cards or they're rushing. If they're taking more than 20 minutes, fatigue starts degrading the quality of their responses. Keep it in that sweet spot and you'll get reliable, thoughtful groupings.
What agreement percentage indicates successful card sorting results?
Anything above 70% is a strong signal — that's a shared mental model you can confidently build around. A result in the 60-69% range is still a useful primary pattern worth considering. Below 40% usually means the card label is unclear or the concept genuinely doesn't fit neatly into any single category. Those cards might need renaming, splitting, or dropping.
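For a closed sort, one common way to define per-card agreement is the share of participants who placed the card in its most popular category. A small Python sketch, using hypothetical placement data:

```python
from collections import Counter

def agreement(categories):
    """Percent of participants who chose this card's most popular category."""
    top_count = Counter(categories).most_common(1)[0][1]
    return round(100 * top_count / len(categories))

# One entry per participant: which category they put the card into
# (hypothetical data for illustration)
placements = {
    "Export Data": ["Reports", "Reports", "Settings", "Reports", "Reports"],
    "Notifications": ["Settings", "Account", "Settings", "Account", "Reports"],
}

for card, cats in placements.items():
    print(f"{card}: {agreement(cats)}% agreement")
# "Export Data" lands at 80% (strong signal); "Notifications" at 40%
# (a candidate for renaming or splitting)
```

Open sorts need a standardization pass first — participants invent their own category names, so "My Stuff", "Personal", and "Mine" have to be merged before agreement numbers mean anything.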
Next Steps
Learn More
Want to dive deeper?
- Card Sorting Examples - 15 real-world case studies
- Card Sorting for IA - Complete IA guide
- Remote Card Sorting - Best practices for online studies
- Card Sort Templates - Ready-to-use templates
Run Your First Study
Ready to try it?
- Create free account (2 minutes)
- Add your cards (5 minutes)
- Send to participants (1 minute)
- Get results (2-3 days)
No credit card required. No risk. Just a clear picture of how your users actually think.
Related Resources
Getting Started
- How to Run Your First Card Sort Study
- Get Started with Card Sorting for the First Time
- Free Card Sorting Tool: Complete Guide
- How to Create a Card Sorting Template
Methodology & Best Practices
- Choose Between Open and Closed Card Sorting
- How Many Participants for Card Sorting?
- Remote Card Sorting Best Practices
- How to Recruit Participants
- Card Sorting with Your Team
- Using Prolific for UX Research
- Conduct Card Sorting on a Low Budget
Analysis & Results
- How to Analyze Card Sorting Results
- Interpret Card Sorting Results and Find Patterns
- How to Interpret Dendrograms
- Card Sorting Examples: 15 Real-World Case Studies
Information Architecture
- Card Sorting for Information Architecture
- How to Organize Website Navigation
- How to Create an Effective Product Taxonomy
- How to Design a Mega Menu
- Improve Website Findability with Card Sorting
Industry Guides
- Card Sorting for SaaS Products
- Card Sorting for Healthcare Websites
- Card Sorting for Government Websites
- Card Sorting for Fintech Apps
- Card Sorting for E-Commerce Catalogs
- Card Sorting for Content Strategists
- Card Sorting for Product Managers
- Card Sorting for Developers
For Students & Educators
- Card Sorting for Student UX Assignments
- Present Card Sorting Results as a Student
- Teach Card Sorting in a UX Course
- Run a Card Sorting Workshop for a Design Class
- Write a UX Research Plan for a Capstone Project
- Best Free UX Research Tools for Students
Comparisons
- Best Card Sorting Software 2026
- Card Sorting vs Tree Testing
- Open vs Closed Card Sorting
- Moderated vs Unmoderated Card Sorting
- Free vs Paid Card Sorting Tools
- Card Sorting Sample Size Guide
- CardSort vs Optimal Workshop
- CardSort vs Maze