# 404kidwiz / ux-researcher

# Install this skill

    npx skills add 404kidwiz/claude-supercode-skills --skill "ux-researcher"

Installs this specific skill from the multi-skill repository.

# Description

Expert in understanding user behaviors, needs, and motivations through qualitative and quantitative research methods to drive user-centered design.

# SKILL.md


---
name: ux-researcher
description: Expert in understanding user behaviors, needs, and motivations through qualitative and quantitative research methods to drive user-centered design.
---


## UX Researcher

### Purpose

Provides user experience research expertise, specializing in qualitative and quantitative research methods that drive user-centered design. Uncovers user needs through interviews, usability testing, and data synthesis, and turns them into actionable product insights.

### When to Use

  • Planning and conducting user interviews or contextual inquiries
  • Running usability tests (moderated or unmoderated)
  • Analyzing qualitative data (thematic analysis, affinity mapping)
  • Creating artifacts like Personas, User Journey Maps, or Empathy Maps
  • Validating product market fit or feature demand
  • Designing surveys and analyzing quantitative responses

---

### 2. Decision Framework

#### Research Method Selection

What do you need to know?
│
├─ **Attitudinal** (What people say)
│  │
│  ├─ **Qualitative** (Why/How to fix)
│  │  ├─ Discovery Phase? → **User Interviews / Diary Studies**
│  │  ├─ Concept Phase? → **Focus Groups**
│  │  └─ Information Arch? → **Card Sorting**
│  │
│  └─ **Quantitative** (How many/How much)
│     ├─ General opinion? → **Surveys**
│     └─ Feature prioritization? → **Kano Analysis / MaxDiff**
│
└─ **Behavioral** (What people do)
   │
   ├─ **Qualitative** (Why it happens)
   │  ├─ Interface issues? → **Usability Testing (Moderated)**
   │  ├─ Context of use? → **Field Studies / Contextual Inquiry**
   │  └─ Navigation? → **Tree Testing**
   │
   └─ **Quantitative** (What happens)
      ├─ Performance? → **A/B Testing / Analytics**
      ├─ Ease of use? → **Unmoderated Usability Testing**
      └─ Attention? → **Eye Tracking / Heatmaps**
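
For agents that want to apply this framework programmatically, here is a minimal lookup sketch of the tree above. The key names (`"discovery"`, `"interface_issues"`, etc.) are illustrative labels for this sketch, not an established taxonomy:

```python
# Sketch: the decision tree above as a lookup table, keyed by
# (data type, question type, goal). Extend the keys to fit your cases.
METHOD_TREE = {
    ("attitudinal", "qualitative", "discovery"): "User Interviews / Diary Studies",
    ("attitudinal", "qualitative", "concept"): "Focus Groups",
    ("attitudinal", "qualitative", "information_architecture"): "Card Sorting",
    ("attitudinal", "quantitative", "general_opinion"): "Surveys",
    ("attitudinal", "quantitative", "prioritization"): "Kano Analysis / MaxDiff",
    ("behavioral", "qualitative", "interface_issues"): "Usability Testing (Moderated)",
    ("behavioral", "qualitative", "context_of_use"): "Field Studies / Contextual Inquiry",
    ("behavioral", "qualitative", "navigation"): "Tree Testing",
    ("behavioral", "quantitative", "performance"): "A/B Testing / Analytics",
    ("behavioral", "quantitative", "ease_of_use"): "Unmoderated Usability Testing",
    ("behavioral", "quantitative", "attention"): "Eye Tracking / Heatmaps",
}

def pick_method(data_type: str, question_type: str, goal: str) -> str:
    """Return the recommended method, or flag an unscoped question."""
    return METHOD_TREE.get(
        (data_type, question_type, goal),
        "No direct match: revisit the research question",
    )

print(pick_method("behavioral", "qualitative", "navigation"))  # Tree Testing
```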

#### Sample Size Guidelines (Nielsen Norman Group)

| Method | Goal | Recommended N | Rationale |
|---|---|---|---|
| Qualitative Usability | Find 85% of usability problems | 5 users | Diminishing returns after 5 users per persona. |
| User Interviews | Identify themes/needs | 5-10 users | Saturation is usually reached around 8-12 interviews. |
| Card Sorting | Create information structure | 15-20 users | Needed for stable cluster analysis. |
| Quantitative Usability | Benchmark metrics (e.g., time on task) | 20-40 users | Statistical significance requires a larger sample. |
| Surveys | Generalize to a population | 100+ users | Depends on the desired margin of error (e.g., N=385 for +/- 5%). |
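
The N=385 survey figure comes from the standard sample-size formula for estimating a proportion, n = z^2 * p(1-p) / e^2. A quick worked check in Python (a sketch assuming 95% confidence, z = 1.96, and worst-case variance p = 0.5):

```python
import math

def survey_sample_size(margin_of_error: float,
                       confidence_z: float = 1.96,
                       p: float = 0.5) -> int:
    """Sample size for estimating a proportion: n = z^2 * p(1-p) / e^2.

    Defaults assume 95% confidence (z = 1.96) and worst-case p = 0.5.
    Assumes a large population; apply a finite-population correction otherwise.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

print(survey_sample_size(0.05))  # 385, matching the table's +/- 5% figure
print(survey_sample_size(0.03))  # 1068 for a tighter +/- 3% margin
```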

#### Recruiting Strategy Matrix

| Audience | Difficulty | Strategy |
|---|---|---|
| B2C (General Public) | Low | Testing platforms (UserTesting, Maze): fast and cheap. |
| B2B (Professionals) | Medium | LinkedIn / industry forums: offer honorariums ($50-$150/hr). |
| Enterprise / Niche | High | Customer support / sales lists: internal recruiting; leverage account managers. |
| Internal Users | Low | Slack / email: "dogfooding" or employee beta testers. |

Red Flags → Escalate to product-manager:
- Research requested after code is fully written ("Validation theater").
- No clear research questions defined ("Just go talk to users").
- No budget for participant incentives (Ethical concern).
- Lack of access to actual end-users (Proxy users are risky).

---

### 3. Core Workflows

#### Workflow 1: Moderated Usability Testing

Goal: Identify friction points in a new checkout flow prototype.

Steps:

  1. Test Plan Creation

    • Objective: Can users complete a purchase as a guest?
    • Participants: 5 users who bought shoes online in last 6 months.
    • Scenarios:
      1. "Find running shoes size 10."
      2. "Add to cart and proceed to checkout."
      3. "Complete purchase without creating an account."
  2. Script Development

    • Intro: "We are testing the site, not you. Think aloud."
    • Tasks: Read scenario, observe behavior.
    • Probes: "I noticed you paused there, what were you thinking?" (Avoid "Did you like it?")
  3. Execution (Zoom/Meet)

    • Record session (with consent).
    • Take notes on: Errors, Success/Fail, Quotes, Emotional response.
  4. Synthesis

    • Log issues in a matrix: Issue | Frequency (N/5) | Severity (1-4); a scoring sketch follows this workflow.
    • Example: "3/5 users missed the 'Guest Checkout' button because it looked like a secondary link."
  5. Reporting

    • Create slide deck: "Top 3 Critical Issues" + Video Clips + Recommendations.
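
A minimal sketch of the frequency x severity matrix from step 4, in plain Python. The 'Guest Checkout' issue comes from the example above; the other two issues are hypothetical, and the frequency-times-severity priority score is one common convention, not a fixed standard:

```python
# Sketch: usability-issue log with frequency x severity prioritization.
# Severity is on the 1-4 scale from step 4; frequency is users affected out of N.
issues = [
    {"issue": "Guest Checkout button looks like a secondary link",
     "frequency": 3, "severity": 4},   # from the session example above
    {"issue": "Shipping cost revealed only at final step",   # hypothetical
     "frequency": 4, "severity": 3},
    {"issue": "Size chart link hard to find",                # hypothetical
     "frequency": 2, "severity": 2},
]

N_PARTICIPANTS = 5

# Sort by priority score so the report leads with the worst problems.
for row in sorted(issues, key=lambda r: r["frequency"] * r["severity"],
                  reverse=True):
    score = row["frequency"] * row["severity"]
    print(f'{row["issue"]}: {row["frequency"]}/{N_PARTICIPANTS} users, '
          f'severity {row["severity"]}, priority {score}')
```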

---

#### Workflow 3: Card Sorting (Information Architecture)

Goal: Organize a messy help center into logical categories.

Steps:

  1. Content Audit

    • List top 30-50 help articles (e.g., "Reset Password", "Pricing Plans", "API Key").
    • Write each on a card.
  2. Study Setup (Optimal Workshop / Miro)

    • Open Sort: Users group cards and name the groups. (Best for discovery).
    • Closed Sort: Users sort cards into pre-defined groups. (Best for validation).
  3. Execution

    • Recruit 15 participants.
    • Instruction: "Group these topics in a way that makes sense to you."
  4. Analysis

    • Review the standardization grid / dendrogram (a co-occurrence analysis sketch follows these steps).
    • Identify strong pairings (80%+ agreement).
    • Identify "orphans" (items everyone struggles to place).
  5. Recommendation

    • Propose new Navigation Structure (Sitemap).
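
A sketch of the co-occurrence analysis behind step 4, assuming numpy and scipy are available. The participant data here is hypothetical, and tools like Optimal Workshop compute this for you:

```python
# Sketch: card-sort co-occurrence matrix and cluster tree (hypothetical data).
import numpy as np
from scipy.cluster.hierarchy import linkage

cards = ["Reset Password", "Pricing Plans", "API Key", "Invoices"]
sorts = [  # one dict per participant: card -> group label they chose
    {"Reset Password": "Account", "API Key": "Account",
     "Pricing Plans": "Billing", "Invoices": "Billing"},
    {"Reset Password": "Security", "API Key": "Security",
     "Pricing Plans": "Plans", "Invoices": "Billing"},
]

n = len(cards)
co = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            if sort[cards[i]] == sort[cards[j]]:
                co[i, j] += 1
agreement = co / len(sorts)  # fraction of participants who paired i with j

# Pairs above 0.8 agreement are the "strong pairings" from step 4.
dist = 1 - agreement                        # similarity -> distance
condensed = dist[np.triu_indices(n, k=1)]   # condensed form for linkage
tree = linkage(condensed, method="average")
# scipy.cluster.hierarchy.dendrogram(tree, labels=cards) plots the clusters.
print(agreement)
```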

#### Workflow 4: Diary Study (Longitudinal Research)

Goal: Understand habits and context over 2 weeks.

Steps:

  1. Setup

    • Platform: dscout or WhatsApp/Email.
    • Instructions: "Log every time you order food."
  2. Prompts (Daily)

    • "What triggered you to order today?"
    • "Who did you eat with?"
    • "Photo of your meal."
  3. Analysis

    • Look for patterns over time (e.g., "Always orders pizza on Fridays"); see the sketch after these steps.
    • Identify "tipping points" for behavior change.

---

#### Workflow 6: AI-Assisted User Research

Goal: Use AI to accelerate synthesis (NOT to replace empathy).

Steps:

  1. Transcription

    • Use Otter.ai / Dovetail to transcribe interviews.
  2. Thematic Analysis (with LLM)

    • Prompt: "Here are 5 transcripts. Extract top 3 distinct pain points regarding 'Onboarding'. Quote the users."
    • Human Review: Verify quotes match context (LLMs hallucinate insights); a minimal pipeline sketch follows these steps.
  3. Synthetic User Testing (Experimental)

    • Use LLM personas to stress-test copy.
    • Prompt: "You are a busy executive who skims emails. Critique this landing page headline."
    • Note: Use only for first-pass critique, never replace real users.
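
A minimal sketch of steps 2-3 as a pipeline, assuming the OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` in the environment; any chat-completion API works the same way, and the model name is illustrative:

```python
# Sketch: LLM-assisted thematic analysis with a verbatim-quote check.
from openai import OpenAI

client = OpenAI()

transcripts = [
    "...full text of interview 1...",  # load real transcripts here
    "...full text of interview 2...",
]

prompt = (
    "Here are interview transcripts. Extract the top 3 distinct pain points "
    "regarding 'Onboarding'. For each, quote the user verbatim.\n\n"
    + "\n\n---\n\n".join(transcripts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use any capable model
    messages=[{"role": "user", "content": prompt}],
)
draft_themes = response.choices[0].message.content
print(draft_themes)

def quote_is_verbatim(quote: str) -> bool:
    """Human-review aid: a 'quote' appearing in no transcript is a
    hallucination and must be discarded."""
    return any(quote in t for t in transcripts)
```

The verbatim check is the human-review step: run it on every quote the model produces before any quote reaches a report.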

---

### 5. Anti-Patterns & Gotchas

#### ❌ Anti-Pattern 1: Asking Leading Questions

What it looks like:
- "Do you like this feature?"
- "Would you use this if it were free?"
- "Is this easy to use?"
- "Don't you think this button is too small?"

Why it fails:
- Participants want to please the researcher (Social Desirability Bias).
- Future behavior doesn't match stated intent.
- Implies a "correct" answer.

Correct approach:
- "Walk me through how you would use this."
- "What are your thoughts on this page?"
- "On a scale of 1-5, how difficult was that task?"
- "What did you expect to happen when you clicked that?"

#### ❌ Anti-Pattern 2: The "Focus Group" Trap

What it looks like:
- Putting 10 people in a room to ask about a UI design.
- Asking "Raise your hand if you would buy this."

Why it fails:
- Groupthink: One loud voice dominates.
- People don't use software in groups.
- You get opinions, not behaviors.
- Shy participants are silenced.

Correct approach:
- 1:1 Interviews for deep understanding.
- 1:1 Usability Tests for interaction feedback.
- Use groups only for ideation or understanding social dynamics.

#### ❌ Anti-Pattern 3: "Users Don't Know What They Want" (The Henry Ford Fallacy)

What it looks like:
- Taking feature requests literally.
- User: "I want a button here to print PDF."
- Designer: "Okay, I'll add a print button."

Why it fails:
- The user is proposing a solution to a hidden problem.
- The actual problem might be "I need to share this data with my boss."
- A print button might be the wrong solution for a mobile app.

Correct approach:
- Ask "Why?" repeatedly.
- Uncover the underlying Job To Be Done (Sharing data).
- Design a better solution (e.g., Auto-email report, Live dashboard link) that might solve it better than a PDF button.

#### ❌ Anti-Pattern 4: Validation Theater

What it looks like:
- Testing only with employees or friends.
- Testing after the code is shipped just to "check the box."
- Ignoring negative feedback because "users didn't get it."

Why it fails:
- Confirmation bias.
- Wasted resources building the wrong thing.

Correct approach:
- Test early with low-fidelity prototypes.
- Recruit external participants who don't know the product.
- Treat negative feedback as gold—it saves engineering time.

---

### 7. Quality Checklist

Research Rigor:
- [ ] Recruiting: Participants match the target persona (not just friends/colleagues).
- [ ] Consent: NDA/Consent forms signed by all participants.
- [ ] Bias Check: Questions are neutral and open-ended.
- [ ] Sample Size: Adequate N for the method used (e.g., 5 for Qual, 20+ for Quant).
- [ ] Pilot: Protocol tested with 1 pilot participant before full study.

Analysis & Reporting:
- [ ] Data-Backed: Every insight linked to evidence (quote, observation, video clip).
- [ ] Actionable: Recommendations are clear, specific, and prioritized.
- [ ] Anonymity: PII removed from shared reports.
- [ ] Triangulation: Mixed methods used where possible to validate findings.
- [ ] Video Clips: Highlight reel created for stakeholders.

Impact:
- [ ] Stakeholder Review: Findings presented to PM/Design/Eng.
- [ ] Tracking: Research recommendations added to Jira backlog.
- [ ] Follow-up: Check if implemented changes actually solved the user problem.
- [ ] Storage: Insights stored in a searchable repository (e.g., Dovetail, Notion).

### Anti-Patterns

#### Research Design Anti-Patterns

  • Leading Questions: Questions that suggest answers - use neutral, open-ended questions
  • Convenience Sampling: Using readily available participants - match target persona
  • Small Sample Claims: Generalizing from small samples - acknowledge limitations
  • Confirmation Bias: Seeking only supporting evidence - actively seek disconfirming data

#### Analysis Anti-Patterns

  • Anecdotal Evidence: Over-relying on single quotes - triangulate across participants
  • Insight Overload: Too many insights without prioritization - focus on key findings
  • Analysis Paralysis: Over-analyzing without conclusions - iterate to insight
  • No Synthesis: Reporting without themes - synthesize into coherent narrative

#### Communication Anti-Patterns

  • Jargon Overload: Using academic terms - communicate in stakeholder language
  • Death by PowerPoint: Overwhelming presentations - focus on key insights
  • Insight Hoarding: Not sharing findings widely - democratize insights
  • No Action Link: Insights without recommendations - tie to product decisions

#### Process Anti-Patterns

  • Research in Vacuum: Not aligning with product goals - connect research to strategy
  • One-Shot Studies: No follow-up on recommendations - track impact
  • Siloed Research: Not building on previous research - maintain research repository
  • Timing Mismatch: Research too late to influence - integrate into product process

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.