Card Sorting With Your Team: A Quick Guide
How to run a fast, useful card sort with colleagues — when it works, when it doesn't, and how to get the most from an internal study.
You don't always need to recruit outside participants for a card sort. Sometimes your own team is the right group to sort with — especially if they're the ones who actually use the product, or if you just need a quick gut-check before running a bigger study.
Running a card sort internally is way faster than external recruitment, and it can give you genuinely useful results. But it only works well in certain situations. Here's when to do it, when to skip it, and how to get it right.
When Team Card Sorting Works
Internal studies have a reputation as a shortcut that leads nowhere, but there are real situations where sorting with colleagues gives you good data.
Your team is the actual user base. If you're redesigning an internal tool, an intranet, or a company wiki, your coworkers aren't stand-ins for users — they are the users. HR folks sorting company intranet content is legitimate user research, full stop.
You need a quick read before committing to a full study. Early in a project, you often just want to see how people naturally group your content. A team card sort can give you working hypotheses in a day or two, instead of waiting a week or more for external recruitment.
You want to get stakeholders on the same page. Card sorting works surprisingly well as an alignment exercise. When product, engineering, and business folks all sort the same set of cards, disagreements become visible fast — and that's a much better starting point for discussion than a slide deck.
You want to pilot-test your study. Before sending a card sort to real participants, run it past a few teammates. They'll catch confusing card labels, broken links, and unclear instructions before you waste your research budget on a flawed study.
When Team Card Sorting Doesn't Work
Consumer-facing products. If you're building something for the public, your team knows too much. They understand the internal jargon, they know how the product is architected, and they carry assumptions that real customers simply don't have. The sorting patterns you'll get from your team will look noticeably different from what actual users produce.
Specialized audiences. If your product serves doctors, mechanical engineers, or any other specialized group, your marketing or product team can't replicate how those professionals think about information. You need people from that world.
How to Run a Team Card Sort
Step 1: Share the Study Link
Drop your card sort link in whatever Slack channel makes sense — #product, #design, #allhands — with a short message explaining why it matters. Something like:
"Running a quick study to improve our navigation redesign — would love 5 minutes of your time. No login needed. [link]"
Be specific about why you're doing this. People are more likely to help when they understand the purpose.
Step 2: Run a Workshop Session
If you really want good participation and richer insights, run the sort as part of a live session:
- Pull it up on a shared screen or have everyone open it on their own device
- Give people about 10 minutes to sort independently — no talking
- Pull up the results together using the analytics dashboard
- Talk through the areas where people disagreed
That post-sorting conversation is often more valuable than the data itself. You'll hear why people grouped things the way they did, which is hard to get from numbers alone.
Step 3: Send a Reminder
Slack messages get buried fast. If you shared the link asynchronously, send one follow-up a couple days later. Keep it light: "A few more responses would really help — takes 5 minutes!" A single nudge can meaningfully bump your participation numbers.
Combining Team and External Research
The most useful approach is often running both, back to back.
Phase 1 (team, day 1): Sort with 5-10 colleagues to get a baseline. See how they group things and flag any cards that seem confusing or poorly worded.
Phase 2 (external, days 2-7): Recruit 15-20 representative users through a platform like Prolific or from your existing user base. Use the same card set (with any fixes from Phase 1).
When both groups sort things similarly, you can feel confident about those patterns. When they diverge, that's actually the interesting part — it tells you where your team's assumptions don't match reality, and those gaps are usually worth investigating further.
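If you want to quantify that divergence rather than eyeball it, a common approach is to compare card co-occurrence: for each pair of cards, what fraction of participants put them in the same group? Here's a minimal sketch in Python — the card names, group labels, and the `divergent_pairs` helper are all hypothetical, and the raw sorts are assumed to be exported as simple card-to-group mappings:

```python
from itertools import combinations

def cooccurrence(sorts, cards):
    """Fraction of participants who placed each pair of cards in the same group.

    `sorts` is a list of dicts mapping card label -> group label,
    one dict per participant.
    """
    pairs = {}
    for a, b in combinations(cards, 2):
        same = sum(1 for s in sorts if s[a] == s[b])
        pairs[(a, b)] = same / len(sorts)
    return pairs

def divergent_pairs(team_sorts, external_sorts, cards, threshold=0.5):
    """Card pairs where team and external agreement differ by at least `threshold`,
    largest gaps first. Positive values mean the team grouped the pair together
    more often than external users did."""
    team = cooccurrence(team_sorts, cards)
    ext = cooccurrence(external_sorts, cards)
    return sorted(
        ((pair, team[pair] - ext[pair]) for pair in team
         if abs(team[pair] - ext[pair]) >= threshold),
        key=lambda x: -abs(x[1]),
    )

# Hypothetical example data: two team sorts, two external sorts.
cards = ["Refunds", "Billing", "Password reset"]
team_sorts = [
    {"Refunds": "Money", "Billing": "Money", "Password reset": "Account"},
    {"Refunds": "Money", "Billing": "Money", "Password reset": "Account"},
]
external_sorts = [
    {"Refunds": "Help", "Billing": "Money", "Password reset": "Help"},
    {"Refunds": "Money", "Billing": "Money", "Password reset": "Account"},
]
print(divergent_pairs(team_sorts, external_sorts, cards))
```

The pairs this surfaces are exactly the "interesting part" above: places where your team's mental model and your users' mental model pull the same cards in different directions.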
Tips for Getting More Responses
- Keep it short. Aim for under 5 minutes. Stick to 20-30 cards at most. The longer your study takes, the more people will bail halfway through.
- Explain the "why." A message like "We're redesigning the help center based on support ticket data" gets way more responses than a vague "please take this survey."
- Don't make it mandatory. Forced participation leads to rushed, careless sorting. You want thoughtful responses, not checkbox completions.
- Go async when you can. Dropping a link in Slack respects people's schedules better than booking a meeting. Not everyone can carve out time at 2pm on a Tuesday.
Further Reading
- What is Card Sorting? Complete Guide
- Card Sorting (UX Glossary)
- Information Architecture (UX Glossary)
- How To Run Your First Card Sort Study
Frequently Asked Questions
How many team members should participate in a card sort? For internal tools, 5-10 people is usually enough. For a pilot test before external research, even 3-5 will do. Going bigger than that doesn't tend to improve the quality of your insights much — it just adds coordination overhead.
What's the difference between team card sorting and external user research? Team sorting taps into what your colleagues already know about the product and domain. External research tests your assumptions against people who don't have that insider context. For consumer products especially, you'll often see real users group things quite differently than your team did.
How long should a team card sort take to complete? Try to keep it under 5 minutes. A set of 20-30 cards hits a nice sweet spot — enough to learn something meaningful without losing people to fatigue or competing priorities.
When should you skip team card sorting entirely? When your team doesn't resemble your actual users (most consumer products), when stakeholders have already made up their minds and won't act on the data, or when you don't have time to follow up with external validation afterward.
How do you analyze conflicting results between team and external card sorts? Look at the specific cards that landed in different groups. Talk to a few external participants about their reasoning. The places where internal and external sorts disagree are usually where your team's domain knowledge is creating blind spots — and those are often the most important things to fix.