
How to Run Your First Card Sort Study

New to card sorting? This beginner guide walks you through every step, from creating cards to analyzing results, so you get usable data on your first try.

CardSort Team


Card sorting is one of the simplest ways to find out how people actually think about your content. You give participants a set of items, they organize them into groups that make sense to them, and you learn something real about how your navigation should work. It's the kind of study you can set up in an afternoon and get genuinely useful results from — even if you've never done UX research before.

Difficulty: Beginner | Time required: 2-3 hours (including participant recruitment)

What You'll Need

Before you start, get these things together:

  • A list of 30-40 content items pulled from your actual site or app. These should be real things users encounter, not internal labels.
  • Access to 15-30 participants who roughly match your target audience.
  • A card sorting tool (we're biased, but Free Card Sort works great for this).
  • Clear goals — know what questions you're trying to answer before you build anything.

Set aside 2-3 hours total. Most of that is setup and waiting for responses to come in.

Step 1: Define Your Study Goals

This sounds obvious, but it's where most first-timers go wrong. If you start a card sort without a clear question, you'll end up with data you don't know what to do with.

Get specific. Write down the actual questions you need answered:

  • Which parts of your site confuse people when they're trying to find something?
  • How do your users naturally group your content? (It's probably not the way your team thinks about it.)
  • What words do your users use for categories?

Here's what a good goal looks like: "Figure out how users would organize our 35 help center topics so we can redesign the knowledge base navigation."

Notice that's specific. It's not "understand our information architecture" — that's too vague to be useful.

Step 2: Prepare Your Content Cards

Your cards need to represent real things from your actual product. Don't make up hypothetical items. Go pull labels, page titles, and feature names from your live site.

Aim for 30-40 items. Fewer than 20 and there's not enough complexity for patterns to emerge. More than 50 and people start giving up halfway through — completion rates drop off a cliff.

A few rules for writing good card labels:

  • Use the same language your users see in the interface, not what your team calls things internally
  • Keep labels short — 2-5 words is the sweet spot
  • Cut any duplicates or near-duplicates (they'll just muddy your data)
  • Mix in different content types so you're covering real user tasks

For a help center, your cards might look like: "Reset password," "Contact support," "Update billing info," "Download invoice," "Change notification settings." Simple, clear, real.

Step 3: Choose Your Card Sort Type

You've got three options, and the right one depends on where you are in the design process.

Open card sort — participants create their own categories from scratch. This is what you want when you're building something new or you genuinely don't know how users think about your content. It's the most common choice for a first study, and it's usually the right one.

Closed card sort — you give participants a fixed set of categories and they sort cards into them. Use this when you already have a navigation structure and want to check whether it actually works for people.

Hybrid sort — a mix of both. Participants get some preset categories but can also create their own. Useful, but adds complexity to your analysis.

If this is your first card sort, go with open. You'll learn the most about how people naturally think about your content.

Step 4: Set Up Your Study

Getting the setup right matters more than you'd think. Small things — confusing instructions, weirdly formatted cards — cause people to abandon the study or rush through it.

Here's the order to do things:

  1. Add your card items. Double-check the formatting is consistent (all lowercase, or all title case — pick one and stick with it).
  2. Write clear instructions. Tell participants exactly what to do. Something like: "Organize these help topics into groups that make sense to you. Name each group whatever feels right — there are no wrong answers."
  3. Add a short intro explaining why you're running the study. People complete things more often when they understand the purpose.
  4. Set up any demographic questions you need for segmenting results later.
  5. Do the sort yourself. This is non-negotiable. You'll catch problems — confusing cards, missing items, weird wording — that you'd never notice otherwise.

Step 5: Recruit and Run the Study

You need 15-30 participants. Fewer than 15 and your patterns won't be reliable. More than 30 is great if you can get them, but the returns diminish pretty quickly after that point.

Where you recruit matters. Try to reach people through the same channels your actual users come from — not just your coworkers. Share the study link through email lists, social media, in-app prompts, or wherever your users actually are.

A few practical tips for the recruitment phase:

  • Give it 7-10 days. Rushing recruitment means a smaller, less representative sample.
  • Check your completion numbers after the first couple of days. If nobody's finishing, something might be wrong with your study setup.
  • Send a reminder to people who started but didn't finish — a gentle nudge at the 3-day mark works well.
  • Keep an eye on who's participating. If all your responses are from one demographic, your results will be skewed.

Step 6: Analyze Your Results

This is where it gets interesting. You've got the data — now you need to figure out what it's telling you.

Start with the big picture. Look at which items people consistently grouped together. When 60% or more of participants put the same items in the same group, that's a strong signal you should pay attention to. When it's 80% or higher, that's about as clear as it gets — those items belong together.

Then look at the category names people created. You'll see clusters of similar names, and those are great candidates for your actual navigation labels. The words your users choose are almost always better than what your team would pick.
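As a rough illustration, you can surface those name clusters by tallying normalized labels across participants. This is a minimal sketch using made-up category names, not output from any particular tool:

```python
from collections import Counter

# Hypothetical category names created by participants in an open sort.
names = [
    "Billing", "billing & payments", "Payments", "Billing",
    "Account Security", "security", "Security settings",
]

def top_labels(names, n=3):
    """Count category names case-insensitively to surface the most common labels."""
    return Counter(name.strip().lower() for name in names).most_common(n)

# top_labels(names)[0] -> ('billing', 2)
```

In practice you would also want to merge near-duplicates like "billing" and "billing & payments" by hand; automated counting just gives you a starting point.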

Don't skip the confusing items — the cards that ended up in different places for different people. Those are the ones that need the most design attention. Maybe they need to live in multiple places, or maybe the label is just unclear.

For example, if most participants grouped "Reset Password," "Change Password," and "Account Security" together, you've got a clear "Password & Security" category emerging. That's something you can actually build with.
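If your tool doesn't report agreement rates directly, you can compute them from raw export data. Here's a minimal sketch that counts, for every pair of cards, the share of participants who put them in the same group. The data shape (one card-to-group mapping per participant) and the card names are illustrative assumptions, not a specific tool's export format:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical export: one dict per participant, mapping card -> group name.
sorts = [
    {"Reset Password": "Security", "Change Password": "Security", "Download Invoice": "Billing"},
    {"Reset Password": "Account", "Change Password": "Account", "Download Invoice": "Billing"},
    {"Reset Password": "Security", "Change Password": "Billing", "Download Invoice": "Billing"},
]

def pairwise_agreement(sorts):
    """For each pair of cards, the fraction of participants who grouped them together.

    Assumes every participant sorted the same card set; pairs never grouped
    together simply don't appear in the result.
    """
    counts = defaultdict(int)
    for sort in sorts:
        for a, b in combinations(sorted(sort), 2):
            if sort[a] == sort[b]:
                counts[(a, b)] += 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

agreement = pairwise_agreement(sorts)
# ('Change Password', 'Reset Password') were grouped together by 2 of 3 participants
```

Pairs at or above the 60% threshold are your candidate categories; pairs that barely co-occur point at the ambiguous cards discussed above.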

Common Mistakes to Avoid

A few things trip up first-timers:

  • Too many cards. Going past 50 items is a recipe for abandoned studies and bad data. People get tired and start sorting randomly.
  • Internal jargon on cards. If your cards say "CRM integration settings" but your users think of it as "Connect to Salesforce," your results won't reflect reality.
  • Skipping the pilot test. Always do the sort yourself first. Always. Five minutes of testing saves days of wasted data.
  • Not enough participants. Ten people might feel like enough, but the patterns won't hold up. Push for at least 15.
  • Rushing the analysis. Don't just eyeball it. Look at actual agreement percentages before making decisions.
  • Ignoring minority groupings. When a smaller group of participants sorts things differently, that's not noise — it might represent a real user segment with different needs.

Tips for Success

Run a quick pilot with 3-5 people before you open it up to everyone. You'll almost always find something to fix.

Write your instructions in plain language. If a participant has to re-read your instructions to understand what you're asking, they're too complicated.

Pull your card content from real user behavior — analytics data, search logs, support tickets. Don't sit in a conference room guessing what users care about.

Give yourself enough time for recruitment. A week is the minimum; two weeks is better. And when you're done analyzing, document everything with screenshots and specific numbers. Stakeholders trust data they can see.

One more thing: pay attention to the unusual groupings, not just the obvious ones. Sometimes the most useful finding is that a subset of users thinks about your content in a completely different way than you expected.

Next Steps

Once you've got your results, the work shifts to implementation. Write up your findings with the key groupings and agreement levels. Share them with your team. Then build a prototype of the new structure and — ideally — run a follow-up closed card sort to validate it before anyone writes code.

Ready to try it? Create a free account and set up your first study in a few minutes.

Frequently Asked Questions

How many participants do I need for a reliable card sort study? Aim for 15-30 people. Below 15, you just don't see consistent patterns — the groupings jump around too much to trust. Above 30 is fine, but you'll hit diminishing returns pretty quickly. The extra recruitment effort usually isn't worth it unless you're segmenting by audience type.

What's the ideal number of cards to include in a card sort? Somewhere between 30 and 40 works best for most studies. With fewer than 20, there's not enough material for real patterns to show up. Past 50, people get overwhelmed and start dropping out or rushing through it. Keep it in that middle range and you'll get solid data.

How long should participants spend on a card sort study? Typically 15-30 minutes for a 30-40 card study. If your study is taking people longer than 45 minutes, you've probably got too many cards or your labels are confusing. Long studies lead to high drop-off and lower-quality sorting as people get fatigued.

When should I use an open card sort versus a closed card sort? Open sorts are best when you're starting fresh or you want to understand how people naturally organize your content — no guardrails, no bias. Closed sorts are for when you already have categories and you want to test whether they work. If this is your first study on a topic, open is almost always the right call.

How do I know if my card sort results are reliable enough to implement? The main thing to look at is agreement rates. If 60% or more of your participants put certain items together, that's a solid foundation for a design decision. When you see 80%+ agreement, you can be very confident those items belong in the same group. For items with low agreement, you'll want to dig deeper or run a follow-up study.

Ready to Try It Yourself?

Start your card sorting study for free. Follow this guide step-by-step.