How to Conduct a Remote Card Sort Study
Remote card sorting lets you test with participants anywhere. Follow this guide to set up, recruit participants for, and analyze a remote card sort study that delivers results you can act on.
Remote card sorting is exactly what it sounds like: instead of sitting in a room with participants while they sort index cards on a table, you send them a link and they do it from wherever they are. It's the same research method, just online. Participants categorize your content items into groups that make sense to them, and you get data about how real people think about your information.
The big advantage? You're not limited to whoever can show up at your office. You can recruit people from anywhere, collect results automatically, and spend your time on analysis instead of logistics.
Difficulty: Intermediate
Time Required: 2-3 hours (setup) + 1-2 weeks (running the study)
What You'll Need
- A remote card sorting tool (like CardSort, OptimalSort, or UserZoom)
- A list of content items or concepts to sort (usually 30-60 items)
- Access to your target user group
- Clear research objectives
- A way to compensate participants (optional but recommended)
- 1-2 weeks for data collection
Step 1: Define Your Research Objectives
Before you create a single card or recruit anyone, figure out what you're actually trying to learn. This sounds obvious, but skipping it is one of the most common reasons card sort studies produce data nobody can use.
Write down 2-3 specific questions you want answered. Good objectives look like this:
- "How do our customers group our product features?"
- "What category labels make sense to first-time users?"
- "Does our current navigation match how people actually think about our content?"
Be specific about your user group, the content area you're exploring, and what design decisions you'll make with the results. If you can't explain how the findings will change something, you're not ready to run the study yet.
Step 2: Prepare Your Cards
Your cards are the core of the study, so spend real time on them. Aim for 30-60 items that represent the content, features, or concepts you want users to organize.
A few ground rules:
- One concept per card. Don't cram multiple ideas together.
- Use your users' language, not internal jargon. If your team calls it "asset lifecycle management" but your users call it "tracking my stuff," go with what users say.
- Keep the specificity level consistent. Don't mix broad categories like "Produce" with specific items like "Milk" -- it confuses participants and muddies your data.
Going past 60 cards tends to wear people out. They start rushing, sorting carelessly, or just abandoning the study. On the flip side, fewer than 30 cards usually doesn't give you enough data to spot meaningful patterns. The sweet spot is somewhere in between.
Step 3: Choose and Set Up Your Remote Tool
Pick a platform based on what matters most to you. CardSort works well if you're watching your budget. OptimalSort has strong analytics. UserZoom is built for larger research operations. They all get the job done.
Once you've picked your tool:
- Create an account and start a new study
- Decide between an open sort (participants create their own categories) or a closed sort (you provide the categories)
- Enter your card items
- Write clear introduction text that explains what you're asking participants to do
- Add any follow-up questions you want to ask
- Test the whole thing yourself before you send it to anyone
That last point matters more than you'd think. Walk through the study as if you were a participant. Check for typos, confusing instructions, and anything that feels awkward. Five minutes of testing saves you from collecting bad data.
Step 4: Recruit Appropriate Participants
You want 15-30 participants who actually resemble your real users. Recruiting more than you'd need for an in-person study is smart here -- some people will start and never finish, and you have no way to nudge them in the moment like you would face-to-face.
Here's how to find the right people:
- Define your target user segments with specific criteria
- Build a short screener to filter out people who don't match
- Recruit through services like UserInterviews or Respondent, or tap your own user base and social channels
- Offer fair compensation -- $20-50 is typical depending on the study length
- Be honest about how long it takes (usually 15-20 minutes)
Going below 15 participants makes it hard to trust the patterns you see. And recruiting people who don't match your actual users leads to an information architecture built for the wrong audience.
Step 5: Launch and Monitor Your Study
Don't just send the link and forget about it. Keep an eye on things.
- Send each participant a unique link
- Check your completion rates daily
- After about 3 days, send a friendly reminder to anyone who hasn't finished
- Review the first few submissions to make sure the data looks right -- if people are confused by your instructions, you'll see it early
- Answer participant questions quickly (within 24 hours if possible)
If participation is lagging halfway through, a short reminder email with a small deadline extension usually helps. People get busy. A gentle nudge goes a long way.
Step 6: Analyze Your Results
This is where it gets interesting. Most card sorting tools will generate similarity matrices or dendrograms for you, which show how often participants grouped certain items together.
Focus on:
- Strong groupings -- items that most participants put together. These are your clearest signals about how users think.
- Scattered items -- cards that ended up in lots of different categories. These highlight genuine confusion or content that doesn't fit neatly anywhere.
- Category labels -- in open sorts, look at what participants named their groups. When many people use similar labels, that's a strong hint for your navigation language.
- Demographic differences -- if you collected demographic data, see whether different user segments sorted things differently.
For example, if the vast majority of participants put "Monthly Budget Template" and "Expense Tracker" in the same group, that's a clear signal those belong together. But if an item gets spread across four or five different categories, it needs further investigation -- maybe the label is unclear, or the concept genuinely spans multiple areas.
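If you want to sanity-check what your tool reports, the underlying similarity matrix is simple to compute yourself: for each pair of cards, count how often participants placed them in the same group. Here is a minimal Python sketch; the `sorts` data and card names are invented for illustration, and real studies would export this from the tool instead.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical sort data: participant -> {category label: [card labels]}
sorts = {
    "p1": {"Money": ["Monthly Budget Template", "Expense Tracker"],
           "Planning": ["Calendar"]},
    "p2": {"Finance": ["Monthly Budget Template", "Expense Tracker", "Calendar"]},
    "p3": {"Budgeting": ["Monthly Budget Template", "Expense Tracker"],
           "Schedule": ["Calendar"]},
}

def similarity_matrix(sorts):
    """Percent of participants who placed each pair of cards in the same group."""
    together = defaultdict(int)
    for groups in sorts.values():
        for cards in groups.values():
            # Sort each pair so (a, b) and (b, a) count as the same pair
            for a, b in combinations(sorted(cards), 2):
                together[(a, b)] += 1
    n = len(sorts)
    return {pair: round(100 * count / n) for pair, count in together.items()}

matrix = similarity_matrix(sorts)
print(matrix[("Expense Tracker", "Monthly Budget Template")])  # 100 -> strong grouping
print(matrix.get(("Calendar", "Expense Tracker"), 0))          # 33 -> weak grouping
```

Pairs near 100% are your strong groupings; pairs with low, evenly spread scores point at the scattered items described above. Dendrograms are just a hierarchical clustering of this same matrix.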
Step 7: Apply Findings to Your Design
The whole point of this exercise is to make your product easier to navigate. Here's how to put your findings to work:
- Build or restructure your information architecture around the groupings that had strong consensus
- Use the category labels your participants created -- their words are almost always clearer than whatever your team came up with internally
- For items with low agreement, dig deeper. Maybe the label needs work, or maybe it belongs in multiple places
- Document your findings and the decisions you made so stakeholders (and future you) understand the reasoning
- Plan follow-up research like tree testing to validate your new structure before building it
Card sorting tells you how people think about your content. It doesn't tell you whether they can actually find things. That's why tree testing or usability testing afterward is so valuable.
Tips and Best Practices
- Pilot test your study with 2-3 people before the full launch. You'll catch problems you never noticed.
- Keep card labels short -- five words or fewer. Long labels slow people down and make sorting feel like a chore.
- Include a brief practice round with 3-4 sample cards so participants understand the mechanics before they start.
- Give people 1-2 weeks to complete the study. That's enough time without letting it drag on.
- Look at both the numbers and the labels. Quantitative patterns and qualitative category names together give you the full picture.
Common Mistakes to Avoid
- Cramming in too many cards. Past 60, quality drops off fast as participants lose patience.
- Using internal jargon that makes sense to your team but not your users.
- Recruiting too few participants. Under 15 and you're basically guessing.
- Skipping the pilot test. Even a quick run-through catches embarrassing mistakes.
- Setting tight deadlines. Less than a week and you'll lose a lot of potential responses.
- Not following up with participants who started but didn't finish.
- Treating card sort data as gospel. It's directional guidance, not a blueprint. Always validate with additional research.
Next Steps
Once you have your card sort results, the natural next step is to validate them. Run a tree test using your new structure to see if people can actually find things. If that goes well, prototype the navigation and run usability tests to watch how people interact with it in context. Keep your findings documented -- they'll be useful reference points for future design decisions.
Create a free account on CardSort to set up your first remote card sorting study. And remember, card sorting works best as part of a broader research toolkit. Pair it with interviews, usability testing, and tree testing for a well-rounded understanding of how your users think.
Frequently Asked Questions
How many participants do I need for a remote card sort study? Somewhere between 15 and 30 is the typical recommendation. You want enough people to see real patterns emerge, and you need to account for the fact that not everyone who starts will finish. Remote studies tend to need a slightly larger pool than in-person ones since you can't keep people on track in real time.
What's the optimal number of cards for remote card sorting? Stick to the 30-60 range. Too few and you won't have enough data to draw conclusions. Too many and people get tired, rush through, or quit partway. If you're struggling to get under 60, consider splitting into two smaller studies focused on different content areas.
How long should participants be given to complete a remote card sort? One to two weeks works well for most studies. People have busy schedules, and giving them a reasonable window means more completed responses. Anything shorter than a week tends to hurt participation noticeably.
What completion rate should I expect for remote card sorting? If your study is well-designed and you're sending reminders, expect roughly 60-80% of invited participants to finish. If you're seeing much less than that, something is off -- maybe the instructions are confusing, the incentive isn't compelling enough, or there's a technical hiccup.
How do I know if my card sort results are reliable? Look for consistency. When a strong majority of participants group the same items together, you can be fairly confident that reflects a real mental model. When items end up scattered across many different groups, that's telling you something too -- those items are genuinely ambiguous and worth investigating further.
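One quick way to quantify that consistency, for a closed sort, is a per-card agreement score: the share of participants who placed the card in its most popular category. The sketch below uses invented placement data to show the idea.

```python
from collections import Counter

# Hypothetical closed-sort placements: card -> one chosen category per participant
placements = {
    "Expense Tracker": ["Finance", "Finance", "Finance", "Finance", "Tools"],
    "Calendar":        ["Planning", "Finance", "Planning", "Tools", "Finance"],
}

def agreement(choices):
    """Share of participants who chose the card's most popular category."""
    top_count = Counter(choices).most_common(1)[0][1]
    return top_count / len(choices)

for card, choices in placements.items():
    print(card, round(agreement(choices), 2))
# Expense Tracker 0.8  -> consistent placement
# Calendar 0.4         -> scattered, worth investigating
```

Cards scoring high are safe to place as-is; cards scoring low are the ambiguous ones worth a label rewrite or follow-up research.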