
Card Sorting for Information Architecture: Complete Guide (2026)

Learn how to use card sorting to build better information architecture. Step-by-step guide with templates, best practices, and real IA examples.

CardSort Team · Updated


Card sorting gives you raw data about how people group things together. But raw data alone doesn't tell you what to do next. This guide walks you through the full process of turning card sort results into a real information architecture — from picking the right cards to test, through analyzing your data, to building and validating a structure that actually works for your users.

What is Information Architecture?

Information architecture (IA) is how you organize and label the stuff in your digital product — pages, features, navigation, content. It's the underlying structure that determines whether someone can find what they're looking for or gives up and leaves.

When your IA mirrors how your team thinks about the product internally, you end up with navigation that makes perfect sense to you and nobody else. That's the core problem. Your users don't care about your org chart or your engineering team's module structure. They care about getting things done.

Good IA lets people find information quickly, understand where they are, and predict where things live without having to think too hard. Card sorting is one of the best tools for getting there because it shows you how your actual users — not your designers, not your stakeholders — naturally organize information.

Why Card Sorting for IA?

Most IA gets built on gut feeling. A designer sketches out a site map based on their own mental model, stakeholders rearrange it to match the org chart, and you end up with navigation that nobody tested with real people.

Card sorting cuts through that. When you have 20-30 users sort your content into groups, clear patterns emerge. You get a similarity matrix showing exactly which items people put together, and agreement scores that tell you how strong those groupings are. Items that 70%+ of participants group together? That's a strong signal you should pay attention to.

The whole thing takes days, not weeks. And unlike a lengthy usability study, the output is straightforward: you can see which groupings are solid, which ones are shaky, and where people's mental models diverge from what you expected.

The IA + Card Sorting Process

Phase 1: Prepare (1-2 days)

Step 1: Audit Your Content

Start by cataloging everything that needs a home — pages, features, support articles, account functions, all of it. A mid-size e-commerce site might have hundreds of pages across products, categories, support docs, and account features. You need to know the full scope before you can pick representative items to test.

Step 2: Select Representative Cards

You're not going to test every single page. Aim for 30-50 cards that represent the range of your content. Pick items from high-traffic pages, core features, known problem areas (check your analytics), and a spread of content types.

If you've got 400+ pages, boil that down to around 35 cards covering product samples from each category, key features like cart and order tracking, your most-visited support content, and core account functions.

Step 3: Write Clear Card Names

This is where a lot of studies go sideways. Your card labels need to be specific and jargon-free.

"Resources" could mean anything. "Video Tutorials" is clear. "Solutions" is vague corporate-speak. "Pricing Plans" tells the participant exactly what it is. Keep labels to 2-5 words, make them consistent in style, and avoid overlap between cards.

If a participant has to guess what a card means, they'll sort it randomly — and that random noise pollutes your data.

Phase 2: Run Card Sort (3-5 days)

Step 1: Choose Study Type

You have three options:

Open card sort — participants create their own groups and name them. Use this when you're redesigning from scratch or want to discover how users naturally think about your content without any constraints.

Closed card sort — participants sort cards into categories you've already defined. Use this when you have a proposed structure and want to see if it holds up.

Hybrid card sort — participants sort into your categories but can create new ones when nothing fits. This gives you the best of both worlds.

Step 2: Recruit Participants

For open sorts, 20-30 participants will surface reliable patterns. Closed sorts benefit from slightly more — 30-40 — since you're looking for statistical confidence in predefined categories.

The critical thing: recruit actual users, not your coworkers. Your colleagues already know where things live in your product. They have the curse of knowledge. You need people who represent your real user segments and experience levels.

Pull participants from your customer email list, a research panel, social media, or platforms like Respondent and Prolific.

Step 3: Set Up Study

Write instructions that are short and clear. Tell participants what they're doing, that there are no wrong answers, and roughly how long it will take (usually 10-12 minutes). Don't over-explain — it primes them to sort in particular ways.

Step 4: Launch & Monitor

Send out your study links and keep an eye on completion rates. If you notice a card that everyone seems confused by — look for unusually scattered placements — that's a sign the label needs work. Send a reminder after a few days, and close the study once you've hit your target number.

Phase 3: Analyze Results (1 day)

Step 1: Review Similarity Matrix

Your similarity matrix is the most valuable output. It shows how often each pair of cards was grouped together across all participants.

Look for dark clusters — pairs with 70%+ agreement. Those are your strong, natural groupings. Items under 40% probably don't belong together. Isolated cards that don't cluster with anything may need better labels or might belong in multiple places.

When you see two items grouped together by the vast majority of participants — like "My Orders" and "Order Tracking" — that's a clear signal they belong in the same category.
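If you want to compute this yourself rather than rely on a tool's output, the similarity matrix is straightforward to derive from raw sort data. Here's a minimal Python sketch; the data shape (a list of participants, each a list of groups of card labels) and the example sorts are illustrative assumptions, not a specific tool's export format.

```python
from itertools import combinations

def similarity_matrix(sorts):
    """Percent of participants who placed each card pair in the same group.

    `sorts` is a list of participant results; each result is a list of
    groups, and each group is a list of card labels.
    """
    counts = {}
    for groups in sorts:
        for group in groups:
            # Count each unordered pair of cards that shares a group
            for pair in combinations(sorted(group), 2):
                counts[pair] = counts.get(pair, 0) + 1
    n = len(sorts)
    return {pair: round(100 * c / n) for pair, c in counts.items()}

# Hypothetical results from three participants
sorts = [
    [["My Orders", "Order Tracking"], ["Returns"]],
    [["My Orders", "Order Tracking", "Returns"]],
    [["My Orders"], ["Order Tracking", "Returns"]],
]
matrix = similarity_matrix(sorts)
# ("My Orders", "Order Tracking") were grouped by 2 of 3 participants
```

With real data (20-30 participants, 30-50 cards), you'd scan the resulting matrix for pairs at 70%+ exactly as described above.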

Step 2: Identify Popular Groupings

Look at the category names your participants created. Even when people use slightly different words, you'll see convergence. If most participants created some version of "My Account," "Help & Support," or "Shopping," those labels are worth paying attention to.
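Spotting that convergence is easier if you normalize the labels before counting them. A small sketch, with hypothetical participant labels and an illustrative synonym map — real studies need a human pass over the merged labels:

```python
from collections import Counter

# Hypothetical category names typed by open-sort participants
raw_labels = [
    "My Account", "my account", "Account", "Help & Support",
    "Help and support", "Support", "Shopping", "Shop", "shopping",
]

def normalize(label):
    """Collapse trivial spelling differences before counting."""
    label = label.strip().lower().replace("&", "and")
    # Merge near-synonyms; this mapping is an assumption for the example
    synonyms = {"account": "my account", "shop": "shopping",
                "support": "help and support"}
    return synonyms.get(label, label)

convergence = Counter(normalize(label) for label in raw_labels)
print(convergence.most_common(3))
```

The counts that float to the top are your candidate category names.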

Where opinions split roughly 50/50, you might be dealing with ambiguous cards or genuinely different mental models across user segments. Dig into whether the split correlates with user type.

Step 3: Calculate Agreement Scores

A quick framework for interpreting your scores:

  • Above 70% — strong consensus. Build your IA around these groupings with confidence.
  • 50-70% — moderate agreement. Worth implementing, but consider whether different user segments see things differently. Sub-categories or cross-links might help.
  • Below 50% — no clear consensus. These cards might need rewording. Or the content itself might genuinely belong in multiple places, in which case search and tagging systems help more than hierarchical navigation.
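The thresholds above are easy to encode as a small helper for triaging pair scores in bulk (scores assumed to be percentages, as in the similarity matrix):

```python
def interpret(score):
    """Bucket an agreement score (0-100) into the bands described above."""
    if score > 70:
        return "strong consensus"
    if score >= 50:
        return "moderate agreement"
    return "no clear consensus"

# Triage a hypothetical set of pair scores
scores = {("My Orders", "Order Tracking"): 85,
          ("Returns", "Shipping Info"): 55,
          ("Gift Cards", "Blog"): 20}
triage = {pair: interpret(s) for pair, s in scores.items()}
```

Running every pair through this gives you a worklist: build around the strong band, investigate the moderate band, and rethink the cards in the bottom band.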

Step 4: Look for Surprises

The most useful findings are the ones that surprise you. When users group "Returns" with "Shopping Cart" instead of "Support," they're telling you something important: they think about returns as part of the buying process, not as a support issue.

When developers put "API Docs" with "Code Examples" instead of "Technical Documentation," they're showing you they care about practical implementation, not reference material.

These unexpected groupings are where the real value lives. Pay attention to them.

Phase 4: Build IA (2-3 days)

Step 1: Create Category Structure

Draft your IA based on what the data is telling you. Aim for 4-7 top-level categories — that's roughly the limit of what people can scan and hold in working memory. Each category should be clearly distinct from the others, with a logical hierarchy from broad to specific.

For example, an e-commerce site might land on something like: Shop (products and categories), My Account (orders, settings, wishlist), Help (support, returns, shipping info), and Company (about, careers, press).

Step 2: Organize Content

Place each item based on the strongest grouping patterns from your data. Build out sub-categories where they make sense — Men's/Women's/Kids' under Shop, Orders & Tracking/Settings under My Account. Let the card sort results guide the hierarchy rather than your internal assumptions.

Step 3: Handle Edge Cases

Some items won't fit neatly into one category. That's normal. Place them where the majority of users put them, then add secondary access through search, cross-links, or contextual navigation. Whatever you do, don't create a catch-all "Other" or "Miscellaneous" category. That's a cop-out that pushes the organizational problem onto your users.

Step 4: Create IA Diagram

Map out the full structure visually using whatever tool your team prefers — Lucidchart, Miro, Figma, even a whiteboard. Include all hierarchy levels, clear labels, item counts per section, and the primary navigation paths. This becomes your reference document for stakeholder conversations and dev handoff.

Phase 5: Validate (1-2 days)

Card sorting tells you how people group things. It doesn't tell you whether they can actually find things in the structure you built from those groupings. That's why validation is non-negotiable.

Tree testing is the go-to method. Give users realistic tasks like "Find your past orders" or "Check the return policy" and measure whether they can navigate to the right place in your proposed structure. You want to see 80%+ task completion on critical paths.

You can also run a closed card sort using your proposed categories to see if the final structure still aligns with how people think. And prototype testing with 5-8 users will catch issues with labeling, layout, and flow that tree testing might miss.

If something doesn't test well, revise and test again. It's far cheaper to iterate now than after launch.

Real-World IA Examples

University Website Transformation

A university had organized its website around departments — Admissions Office, Registrar, Student Services, 25+ academic departments, Administration. Made sense internally. But students and prospective applicants don't think in terms of university departments. They think in terms of what they're trying to do.

Card sorting revealed task-oriented mental models. Users wanted to see: Apply & Admissions (undergraduate, graduate, international), Academics (programs, course catalog, calendar), Student Life (housing, activities, health), and a Current Students portal (registration, financial aid, grades).

After restructuring, the university saw noticeably higher task completion rates and a significant drop in calls to the admissions office — people were finding answers on their own.

SaaS Product Reorganization

A SaaS product had organized its navigation around internal feature categories — Data Management, Analytics Engine, Collaboration Tools, Configuration, APIs. This mirrored how the engineering team thought about the product, not how customers used it.

Card sorting revealed that users thought in terms of workflows, not features. The restructured navigation became: Projects (tasks, files, discussions), Insights (dashboards, reports, export), Team (members, roles, activity), Settings (account, integrations, API).

The result was faster onboarding and better feature discovery, because users could now find things where they expected them to be.

Common IA + Card Sorting Mistakes

Testing with Internal Teams

Your product team knows too much. Their mental models are shaped by how the system was built, not how it should be used. Always test with actual target users. This feels obvious, but the temptation to "just run a quick study with the team" is strong — and the results will mislead you.

Using Too Many Cards

Once you go past about 40 cards, participants get tired. They start sorting carelessly, and your data quality tanks. Keep it to 35 cards or fewer. A focused study with representative items will give you much better results than a comprehensive one that exhausts people.

Writing Vague Card Names

"Platform." "Solutions." "Resources." These labels are so generic that participants have no idea what they represent. The sorts become random, and you learn nothing. Be specific. "Dashboard" instead of "Platform." "Pricing Plans" instead of "Solutions." "Video Tutorials" instead of "Resources."

Skipping Validation

Card sorting and tree testing are complementary. One shows you how people group things; the other shows you whether people can find things. Running card sorting without follow-up tree testing is like writing code without testing it. You might get lucky, but you probably won't.

Ignoring Other Data Sources

Card sorting is powerful, but it's one input among several. Cross-reference your results with site analytics, user interviews, support ticket themes, and business requirements. When the card sort results contradict your analytics data, that's worth investigating — not ignoring.

Expecting Perfect Consensus

People are different, and that's fine. If 70%+ of participants agree on a grouping, that's excellent consensus. In the 50-70% range, you've got a useful pattern with some variation — look at whether user segments explain the split. Under 50% doesn't mean your study failed. It means you've found an area where mental models genuinely diverge, and you may need cross-links, search, or multiple paths to the same content.

Advanced IA Techniques

Multi-dimensional IA. Your primary navigation should reflect the strongest mental model from your card sort. But not everyone thinks the same way, so add secondary access paths — search, filters, related links — for people who approach the content differently.

Personalized IA. If your card sort data shows clear differences between user segments (say, beginners vs. power users), consider adaptive navigation or alternative views for each group. Segment your card sort results by experience level and see if distinct patterns emerge.
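One lightweight way to check for segment divergence is to compare the same pair's agreement score across segments. A sketch with hypothetical numbers; the 30-point gap threshold is an illustrative assumption, not a standard:

```python
# Hypothetical per-segment agreement (%) for the same card pair
pair = ("API Docs", "Code Examples")
agreement = {"beginners": 38, "power_users": 81}

# A large gap suggests segment-specific mental models that may
# warrant adaptive navigation or alternative views.
gap = abs(agreement["beginners"] - agreement["power_users"])
diverges = gap >= 30  # threshold chosen for illustration only
```

If `diverges` comes up true for many pairs, that's your evidence for the segmented approach described above.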

Faceted IA. For content with lots of attributes — products, for instance — rigid hierarchies break down fast. Filtering and tagging let users combine dimensions like category, price, brand, and occasion in whatever way makes sense to them.

Task-based IA. Sometimes the best organizing principle isn't content type but user goals. Government websites are a classic example: "Renew Your License" is far more useful than making people navigate through "Department of Motor Vehicles" then "Forms" then "License Renewal."

Progressive disclosure. Show 4-7 main categories at the top level, then reveal sub-categories through mega menus or contextual navigation. This keeps the initial view clean while still giving access to deeper content. It respects the limits of attention without hiding things too deep.


Frequently Asked Questions

How many participants do I need for reliable card sorting results?

For open card sorts, 20-30 participants is the sweet spot. You'll see clear patterns emerge without diminishing returns. Closed sorts benefit from a few more — 30-40 — since you're validating specific categories. Going below these numbers tends to produce noisy results. Going significantly above them rarely changes the patterns.

What's the difference between card sorting and tree testing for information architecture?

Think of them as two halves of the same process. Card sorting helps you figure out how to organize your content — it's generative. Tree testing helps you check whether that organization actually works — it's evaluative. Card sorting says "users group these items together." Tree testing says "users can actually find this item in your proposed structure." You really want both.

How do I handle card sorting results that contradict business requirements?

This comes up all the time. The short answer: use your primary navigation for the user-centered categories that came out of your card sort, and add utility navigation (top bar, footer, sidebar) for business-driven items. Users navigate better when the main structure matches their mental model, and that improved experience tends to benefit your business metrics anyway.

Can I use card sorting results for mobile navigation design?

Yes, because the mental models card sorting reveals aren't device-specific. People group "My Orders" and "Order Tracking" together whether they're on a phone or a desktop. What changes is the presentation layer. Take your card sort groupings and adapt them for mobile constraints — hamburger menus, tab bars, progressive disclosure. The underlying structure stays the same.

How often should I repeat card sorting studies for my website?

There's no fixed schedule, but a few triggers should prompt a new study: major redesigns, significant additions to your content, or analytics showing that people can't find things. As a general rule, revisiting your IA every couple of years is smart, since user expectations shift and your content grows. High-traffic sites with rapidly changing content may want to check in more often.

Ready to Try It Yourself?

Start your card sorting study for free. Follow this guide step-by-step.