
Remote Card Sorting: Best Practices & Complete Guide (2026)

Learn how to run effective remote card sorting studies. Best practices for unmoderated card sorts, participant recruitment, and getting reliable results online.

CardSort Team


Remote card sorting lets participants organize your content into categories using an online tool, without a moderator in the room. It is much faster than running in-person sessions, it lets you reach a bigger and more diverse group of people, and it costs a fraction of what you would spend on moderated studies. For most UX teams working on information architecture today, remote is the default approach -- and for good reason.

This guide covers everything you need to run a solid remote card sort: picking a tool, writing cards, recruiting participants, monitoring your study, cleaning up data, and making sense of the results.

Why Remote Card Sorting?

Remote and in-person card sorts produce very similar results. The big difference is logistics. Remote studies are cheaper, faster to set up, and you can get far more participants without the scheduling headaches.

Advantages Over In-Person

  • Faster results -- You can collect 30 responses in a couple of days, rather than spending weeks scheduling one-on-one sessions.
  • More participants -- No scheduling conflicts means you can easily get 20-40 people, compared with the 5-10 typical for moderated studies.
  • Lower cost -- No travel, no facility rental. You are mostly paying for incentives, and those are often smaller for remote.
  • Geographic diversity -- You can reach users anywhere in the world, not just people near your office.
  • Natural environment -- Participants sort cards in their own space, not a lab. That is closer to how they would actually use your product.
  • Less bias -- No moderator in the room means no one accidentally nudging participants toward a "right" answer.

When Remote Doesn't Work

Remote is not always the right call. If your content is complex enough that participants need real-time explanation, if you are dealing with highly confidential material, or if your users are not comfortable with online tools, you will run into trouble. The same goes for studies where you need to ask follow-up questions in the moment.

A good middle ground is a hybrid approach: run the card sort remotely, then follow up with video interviews for a handful of participants.


Remote vs In-Person Card Sorting

Factor | Remote (Unmoderated) | In-Person (Moderated)
Setup time | 5-10 minutes | 2-4 hours per session
Participants | 20-40 easily | 5-10 typical
Cost | $50-$300 | $500-$2,000
Timeline | 2-4 days | 1-2 weeks
Moderator bias | None | Present
Follow-up questions | Not possible | Can ask during
Completion rate | 70-85% | ~100%
Data quality | High (with good design) | High (with good moderation)

Bottom line: Start with remote. Only go in-person if you specifically need real-time follow-up or think-aloud data.


Setting Up Remote Card Sorting

Setup is where most remote studies succeed or fail. Since there is no moderator to bail you out, everything -- your cards, your instructions, your tool -- needs to work on its own. Most study failures trace back to poor setup, not bad participants.

Step 1: Choose Your Tool

You want a tool that is mobile-friendly, does not require participants to create an account, randomizes card order automatically, saves progress, shows results in real time, and lets you export data.

Must-have features:

  • Mobile-friendly interface
  • No login required for participants
  • Automatic randomization
  • Progress saving
  • Real-time results
  • Export to CSV

Recommended: CardSort -- it checks all these boxes, is free for 3 studies, and takes about 5 minutes to set up.

Alternatives: Optimal Workshop (full-featured but expensive), UsabilityHub (more basic), UserZoom (enterprise-oriented).

Step 2: Create Clear Cards

This is the single most important thing you can do for data quality. Remote participants cannot ask "What does this card mean?" -- so if a card is vague, they will either guess or give up.

Good card names (clear, specific):

- Track My Order
- Return an Item
- Update Payment Method
- Contact Customer Support
- View Order History

Bad card names (vague, confusing):

- Tracking
- Returns
- Payment
- Support
- History

Quick test: Show your cards to a colleague who is not involved in the project. If they are confused by any card, your participants will be too.

Step 3: Write Foolproof Instructions

Since no one is there to answer questions, your instructions need to anticipate everything. Studies with clear, thorough instructions consistently get better completion rates.

Your instructions should include:

  1. Welcome and context (1-2 sentences about why you are doing this)
  2. The task (what exactly you want them to do)
  3. Time estimate (so they know what they are committing to)
  4. Reassurance (there are no right or wrong answers)
  5. A thank you

Here is a template that works well:

Welcome! Thank you for participating.

We're redesigning [Product Name] to make it easier to use.

YOUR TASK:
Please organize these [items] into groups that make sense to you.
Create category names that describe each group.

This will take about 10 minutes.

There are no right or wrong answers—we want to understand how
YOU think about these items.

Your input will help make [Product] better for everyone.

Thank you!

Test your instructions with 2-3 people before you go live. If anyone is confused, rewrite.

Step 4: Optimize for Mobile

A large chunk of your participants will do this on their phone. If the experience is clunky on mobile, they will bail.

  • Test the study on your own phone before launching
  • Keep card names short (2-5 words)
  • Cap it at 30-40 cards -- mobile fatigue is real
  • Pick a tool with a proper mobile UI
  • Avoid images unless they are essential (they slow things down)

Recruiting Remote Participants

Who you recruit matters more than how many you recruit. Twenty responses from your actual target users will tell you more than fifty from random people.

How Many Participants?

For an open card sort, aim for 20-30 participants. Patterns usually start to emerge around 15-20, and you get solid confidence by 25-30. Going above 30 gives you diminishing returns.

For a closed card sort, you want a bit more -- 30-40 participants -- since you are validating a hypothesis and need stronger statistical backing.

As a rule of thumb: more is better, but you will see most of the important patterns by 25.

Where to Find Participants

Option 1: Your Customer Base (Best option)

Your own users are the gold standard. They understand your product, they represent real usage, and they are usually willing to help for free or with a small incentive.

How to reach them:

  • Email your customer list
  • Put an in-app banner
  • Include an invite in post-purchase flows
  • Ask your customer success team to reach out

Option 2: Research Panels

  • UserTesting.com -- Fast turnaround (hours), good screeners. Expensive at $30-$100 per response.
  • Respondent.io -- Great for B2B. $50-$200 per participant.
  • Prolific.co -- Academic quality, more affordable ($5-$10 per response). Mostly consumer audiences though.

Option 3: Social Media

Free or cheap, and you can reach niche audiences. The downside is sample bias -- your followers may not represent your actual users.

Best platforms: LinkedIn for B2B, Reddit for niche communities, Twitter/X for the design/UX crowd, Facebook groups for consumer products.

Option 4: Friends & Family (Last resort)

Only do this if you are testing very general concepts, cannot access real users, and have zero budget. Be aware that results will be less reliable -- people who know you tend to try to "help," which introduces bias.

Incentives

Whether you need incentives depends on who you are recruiting and how long the study takes.

You probably need incentives if you are recruiting strangers, the study takes more than 15 minutes, you are targeting busy professionals, or you want a high completion rate.

You might not need them if you are reaching out to engaged customers, the study is under 10 minutes, and your users genuinely care about improving the product.

Study Length | Customer List | General Public
5-10 min | $0-$5 | $5-$10
10-15 min | $5-$10 | $10-$20
15-20 min | $10-$15 | $20-$30

Amazon gift cards are the easiest option. Product discounts work well for customers. Charity donations appeal to some audiences. Raffles work if your budget is tight.

Screening Participants

Screen for what actually matters -- do not overdo it. Asking too many screening questions tanks your response rate.

Useful things to screen for:

  • Experience level: New users vs. power users. You might want a mix, or you might want to analyze them separately.
  • Device: Desktop, mobile, or both -- especially if device type affects how people think about your content.
  • Demographics: Only if it is genuinely relevant to your product.

Running the Remote Study

Pre-Launch Checklist

Before you send the link to anyone:

  • Complete the study yourself on desktop
  • Complete it on your phone
  • Have 2-3 colleagues go through it
  • Confirm all cards are clear
  • Verify instructions make sense
  • Double-check the study link works
  • Prepare your recruitment message

Launching the Study

Do a soft launch first. Send the link to about 5 participants, check their responses, and make sure nothing is broken or confusing. Most problems show up in the first handful of responses. Fix anything that needs fixing, then open it up to everyone else.

If you skip the soft launch and send to all participants at once, keep a close eye on the first 10 responses. If something is off, pause and fix it before more people waste their time.

Monitoring Responses

Check in on your study every day. Look at:

  • How many people have finished
  • Your completion rate (started vs. finished)
  • Average completion time
  • Whether any cards seem to be causing confusion
  • Where people are dropping off
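If your tool exports raw responses, these daily metrics take only a few lines to compute yourself. A minimal sketch in Python -- the field names (`started`, `finished`, `minutes`) are assumptions, so adapt them to your tool's actual CSV columns:

```python
# Minimal sketch: daily monitoring metrics from a hypothetical response
# export. Field names (started/finished/minutes) are assumptions -- adapt
# them to whatever columns your card sorting tool actually exports.
responses = [
    {"started": True, "finished": True,  "minutes": 11.5},
    {"started": True, "finished": True,  "minutes": 9.0},
    {"started": True, "finished": False, "minutes": 2.0},
    {"started": True, "finished": True,  "minutes": 14.5},
]

started = sum(r["started"] for r in responses)
finished = [r for r in responses if r["finished"]]
completion_rate = len(finished) / started * 100
avg_minutes = sum(r["minutes"] for r in finished) / len(finished)

print(f"Completion rate: {completion_rate:.0f}%")  # 3 of 4 finished -> 75%
print(f"Average time: {avg_minutes:.1f} min")      # mean of finished sorts only
```

Note that average time is computed over finished responses only -- including abandoned sorts would drag the number down and hide the real signal.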

Red flags to watch for:

  • Completion rate under 60% -- Something is wrong. Could be the study is too long, instructions are unclear, or there is a technical issue.
  • Average time under 5 minutes (for 30-40 cards) -- People are rushing through it. You may need to filter out low-quality responses.
  • Average time over 20 minutes -- The study is too long. Too many cards, or cards that are hard to understand.
  • Everyone creating the exact same categories -- Your cards might be too obvious. Consider whether you need more nuanced content.
  • No patterns at all -- Cards are unclear, or participants do not understand the task.

Sending Reminders

A well-timed reminder can make a real difference in your completion numbers. Send one reminder 3-5 days after the initial invite, targeting people who clicked the link but did not finish.

Subject: Quick reminder: Help us improve [Product]

Hi [Name],

A few days ago, we invited you to participate in a 10-minute
study to help improve [Product].

We'd love your input! Your perspective will help make [Product]
better for everyone.

[Study Link]

This should take about 10 minutes. Thank you!

[Your Name]

Do not send reminders if you already have enough responses, the person opted out, or more than 7 days have passed.


Ensuring Data Quality

Not every response is worth keeping. Cleaning your data before analysis makes a noticeable difference in how clear your patterns are.

Spot Bad Responses

Watch for these signs:

  1. Completion time under 3 minutes (for 30-40 cards) -- They almost certainly rushed or clicked randomly.
  2. All cards dumped into 1-2 categories -- They were not really engaging with the task.
  3. Nonsensical category names -- Things like "asdf" or "Group 1" or "stuff."
  4. Duplicate participants -- Same IP address, same completion pattern.
  5. Alphabetical grouping -- A telltale sign of minimum effort.
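The first three signs are easy to check automatically before you review anything by hand. A minimal sketch -- the thresholds and the junk-name list are assumptions to tune for your own study:

```python
# Minimal sketch: flagging likely low-quality responses automatically.
# Thresholds and the junk-name list are assumptions -- tune for your study.
JUNK_NAMES = {"asdf", "stuff", "misc", "group 1", "group 2"}

def flag_response(resp, min_minutes=3, min_categories=3):
    """Return a list of quality flags; an empty list means it looks fine."""
    flags = []
    if resp["minutes"] < min_minutes:            # sign 1: rushed
        flags.append("too fast")
    if len(resp["categories"]) < min_categories:  # sign 2: cards dumped
        flags.append("too few categories")
    if any(name.strip().lower() in JUNK_NAMES for name in resp["categories"]):
        flags.append("junk category name")        # sign 3: nonsense names
    return flags

resp = {"minutes": 2.1, "categories": ["asdf", "everything else"]}
print(flag_response(resp))  # ['too fast', 'too few categories', 'junk category name']
```

Treat the flags as candidates for manual review, not automatic deletion -- a fast but thoughtful response should survive.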

Filter Out Bad Data

Most card sorting tools let you review individual responses, mark them as invalid, and exclude them from your analysis. Use this.

One important note: do not remove a response just because it is different from the majority. Unusual groupings can be genuinely insightful. Only remove responses that are clearly junk.

Improve Response Quality

You can head off quality problems before they happen:

  • Before the study: Write clear instructions, include a time estimate, reassure people there are no wrong answers.
  • During the study: Make sure there is a progress indicator, let people save and come back later, and ensure the mobile experience is good.
  • In recruitment: Target engaged participants, offer appropriate incentives, and screen for relevant experience.

Analyzing Remote Results

Step 1: Review Completion Metrics

Before you dig into the actual card groupings, check the basics. How many people started vs. finished? What was the average completion time? Was there a specific point where people dropped off?

Good benchmarks: 70-85% completion rate, 8-15 minutes average time (for 30-40 cards), and less than 15% drop-off.

Step 2: Examine Similarity Matrix

The similarity matrix shows you which cards participants kept grouping together. Look for:

  • Dark clusters (high agreement) -- These are strong, natural groupings. They translate well into navigation categories.
  • Light areas (low agreement) -- Weak relationships. These cards do not belong together in most people's minds.
  • Isolated cards -- Cards that do not fit neatly anywhere. You may need to rethink them.

Most card sorting platforms generate the similarity matrix automatically.
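Under the hood the computation is simple: for every pair of cards, count the share of participants who placed that pair in the same group. A minimal sketch, using made-up sorts over a few of the example cards from earlier:

```python
# Minimal sketch: building a card-to-card similarity matrix from raw sorts.
# Each participant's sort is a list of groups (sets of card names); the
# score for a pair is the share of participants who grouped them together.
from collections import Counter
from itertools import combinations

sorts = [  # made-up data for illustration
    [{"Track My Order", "View Order History"}, {"Return an Item"}],
    [{"Track My Order", "View Order History", "Return an Item"}],
    [{"Track My Order"}, {"View Order History", "Return an Item"}],
]

together = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            together[pair] += 1

similarity = {pair: count / len(sorts) for pair, count in together.items()}
for pair, score in sorted(similarity.items(), key=lambda kv: -kv[1]):
    print(pair, f"{score:.0%}")
```

Pairs near 100% are your dark clusters; pairs near 0% never appear together and are the light areas.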

Step 3: Identify Popular Groupings

Look at what categories people created. How many categories did participants make on average? What names came up most often? Were there any groupings that surprised you?

Typically you will see 4-7 main categories with decent agreement on the core groupings. Some variation is normal and healthy.

Step 4: Calculate Agreement

Agreement scores tell you how confident you can be in each grouping:

  • High agreement (over 70%) -- Strong consensus. You can build on this with confidence.
  • Moderate agreement (50-70%) -- A general pattern, but with variation. Worth looking at whether different user segments see things differently.
  • Low agreement (under 50%) -- No real consensus. Investigate further -- the card might be unclear, or the content might genuinely be hard to categorize.
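Bucketing pairwise scores by these thresholds is a one-function job. A minimal sketch -- the example scores are invented for illustration:

```python
# Minimal sketch: bucketing agreement scores using the thresholds above
# (over 70% high, 50-70% moderate, under 50% low). Scores are made up.
def agreement_level(score):
    if score > 0.70:
        return "high"
    if score >= 0.50:
        return "moderate"
    return "low"

scores = {
    ("Track My Order", "View Order History"): 0.84,
    ("Return an Item", "Contact Customer Support"): 0.55,
    ("Update Payment Method", "View Order History"): 0.31,
}
for pair, score in scores.items():
    print(pair, agreement_level(score))
```

Feed it the pairwise scores from your similarity matrix and sort by level to get an instant shortlist of groupings you can trust versus ones that need investigation.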

Step 5: Document Insights

Write up your findings while they are fresh. A good findings doc includes:

  1. Overview -- Participant count, study type, dates.
  2. Key findings -- Top 3-5 patterns, surprising insights, problem cards.
  3. Recommendations -- Proposed structure, what to validate next, follow-up research needed.
  4. Appendix -- Individual responses, full similarity matrix, participant comments.

Remote Card Sorting Mistakes

Mistake #1: Too Many Cards

Sixty-plus cards means a 25+ minute study. People get tired, start rushing, and your data suffers. Keep it to 30-50 cards. If you have more content than that, split it across multiple studies.

Mistake #2: Unclear Instructions

Without a moderator to clarify, confusing instructions lead to confused participants. Some will guess, others will quit. Test your instructions with 3 people before you launch.

Mistake #3: No Mobile Testing

A big portion of your responses will come from phones. If you have not tested the experience on mobile, you are probably losing participants and skewing your results toward desktop users.

Mistake #4: Wrong Participants

Recruiting friends and family feels easy, but it gives you results that do not reflect your actual users' mental models. Even a small group of real target users will give you better data than a large group of the wrong people.

Mistake #5: Not Monitoring in Real Time

If you wait until the study closes to look at the data, you will miss problems that could have been fixed early. Check your first 5-10 responses and address issues immediately.

Mistake #6: Ignoring Drop-Offs

A completion rate under 70% means something is wrong, and it also means your results are biased toward the most motivated participants. Investigate and fix the cause.

Mistake #7: Over-Interpreting Noise

One person who grouped things oddly is not a reason to rethink your entire navigation. Focus on patterns that show up across the majority of participants. Outliers are worth noting, but do not let them drive your decisions.


Advanced Remote Techniques

Segmented Analysis

If you suspect that different user groups think about your content differently, analyze their results separately. Compare new users vs. power users, different geographic regions, desktop vs. mobile, or any other segmentation that matters for your product.

Hybrid Remote-Moderated

Run the card sort remotely, then follow up with 5-8 participants over video. Ask them to walk you through their groupings and explain their thinking. You get the statistical power of a large remote sample plus the depth of qualitative interviews.

A/B Testing Card Sets

Not sure whether your card labels are clear enough? Run two studies at the same time with different card names and compare the results. This is a great way to figure out which labels create the clearest mental models.

Multi-Round Studies

For thorough IA work, run studies in sequence:

  1. Open sort to discover how users naturally categorize your content
  2. Closed sort to validate the categories you came up with
  3. Tree test to check whether people can actually find things in your proposed structure

You can get through all three rounds in 2-3 weeks.


Remote Card Sort Checklist

Setup Phase

  • Choose a reliable tool (test on mobile!)
  • Create 30-50 clear, specific cards
  • Write foolproof instructions
  • Test with 3 colleagues
  • Recruit 20-40 target participants
  • Set up incentives (if needed)

Launch Phase

  • Soft launch to 5 participants first
  • Monitor first responses
  • Fix any issues immediately
  • Send to all participants
  • Set a reminder for day 3-5

Monitoring Phase

  • Check responses daily
  • Track completion rate (aim for over 70%)
  • Watch for confusing cards
  • Remove obviously bad responses
  • Send reminders after 3-5 days

Analysis Phase

  • Review similarity matrix
  • Identify popular groupings
  • Note surprising findings
  • Calculate agreement scores
  • Document top 3-5 insights
  • Create recommendations

Frequently Asked Questions

How long should participants have to complete a remote card sort?

Give people 5-7 days. In practice, most responses roll in within the first 2-3 days, and things taper off after day 5. A single reminder on day 3 -- specifically to people who started but did not finish -- will bump your numbers up noticeably.

What should I do if my completion rate is low (under 60%)?

Something is broken and you need to figure out what. The usual suspects: the study takes too long, the instructions are confusing, there is a technical glitch, or you recruited the wrong people. Look at the first 5-10 responses for clues, then fix the problem before recruiting more participants.

Can I run card sorting studies internationally with remote participants?

Absolutely -- that is one of the big advantages of going remote. Just be thoughtful about time zones when you send invitations, and translate your cards and instructions if your participants speak different languages. Keep in mind that cultural differences can affect how people categorize things.

What if my remote card sort results show no clear patterns or agreement?

Low agreement usually means one of three things: your cards are unclear, the content is genuinely ambiguous (in which case search might be more important than navigation), or you do not have enough responses yet. Try getting to 35-40 participants before giving up. If patterns still are not emerging at that point, the content itself may need rethinking.

How do remote card sorting results compare to in-person studies for accuracy?

Very comparable. Remote data tends to be slightly noisier since there is no moderator keeping people on track, but you make up for that with larger sample sizes. Twenty to forty remote participants will usually give you more reliable patterns than five to ten people in a lab.



Ready to Try It Yourself?

Start your card sorting study for free. Follow this guide step-by-step.