# Installation

Install this specific skill from the multi-skill repository:

```bash
npx skills add RefoundAI/lenny-skills --skill "usability-testing"
```
# Description
Help users conduct effective usability testing. Use when someone is planning user tests, designing prototype validation, preparing usability studies, or trying to understand why users struggle with their product.
# SKILL.md
```yaml
---
name: usability-testing
description: Help users conduct effective usability testing. Use when someone is planning user tests, designing prototype validation, preparing usability studies, or trying to understand why users struggle with their product.
---
```
## Usability Testing
Help the user conduct effective usability testing using frameworks and insights from 11 product leaders.
### How to Help
When the user asks for help with usability testing:
- **Clarify the goal** - Determine whether they're validating a concept, finding friction points, or optimizing conversion
- **Choose the right fidelity** - Help them select between Wizard of Oz tests, fake doors, prototypes, or production testing
- **Design the test** - Guide them on recruiting users, creating scenarios, and deciding what to observe
- **Plan for iteration** - Discuss how findings will flow back into the product development process
### Core Principles
**Fake it before you build it**

Itamar Gilad: "Initially you fake it - fake door test, smoke test, Wizard of Oz tests. We showed the tabbed inbox working to people, but it wasn't really Gmail, it was just a facade." Validate core value propositions before writing production code, using faked versions in which a human performs the "automated" task behind the scenes.
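A fake door can be as small as a button with nothing real behind it. Below is a minimal sketch in React/TypeScript, assuming a hypothetical `trackEvent` analytics helper and a made-up "team workspaces" feature: the click is logged as a demand signal, and the user sees a coming-soon message instead of a built feature.

```tsx
import { useState } from "react";

// Hypothetical analytics call; swap in whatever tracker you actually use.
function trackEvent(name: string, props?: Record<string, string>) {
  console.log("track:", name, props);
}

export function FakeDoorCTA() {
  const [clicked, setClicked] = useState(false);

  if (clicked) {
    // Nothing ships behind the door; the click itself is the signal.
    return <p>Team workspaces are coming soon. Thanks for your interest!</p>;
  }

  return (
    <button
      onClick={() => {
        trackEvent("fake_door_click", { feature: "team-workspaces" });
        setClicked(true);
      }}
    >
      Try team workspaces
    </button>
  );
}
```

The point is the measurement, not the UI: if almost nobody clicks, the concept failed before any production code was written.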
**Small samples reveal big friction**
Melanie Perkins: "It's amazing how you can find 10 random people on the internet and they can give such astute feedback that's so representative for such a large number of people." Run tests with as few as 10 random people to identify core product issues.
**Watch users, don't just ask them**
Uri Levine: "Simply watch users and see what they're doing. If they're not doing what you expect, then ask them why." Direct observation reveals behaviors and needs that surveys miss. Ask 'why' when users deviate from the expected path.
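To make direct observation useful later, it helps to capture notes in a consistent shape: the expected path, what actually happened, and the participant's own answer to "why". The structure below is an illustrative TypeScript sketch, not a standard format.

```ts
// Illustrative note shape for a single observed deviation.
interface ObservationNote {
  participant: string;      // anonymized ID, e.g. "P3"
  task: string;             // what you asked them to do
  expectedPath: string;     // the path you expected them to take
  observedBehavior: string; // what they actually did
  whyInTheirWords?: string; // their answer when asked why they deviated
}

const note: ObservationNote = {
  participant: "P3",
  task: "Invite a teammate",
  expectedPath: "Settings > Members > Invite",
  observedBehavior: "Searched the help docs for 'add user'",
  whyInTheirWords: "I didn't think Settings was about people",
};
```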
**Test multiple options, not one**
Kristen Berman: "We never do a UX study where we're just showing people one thing. We always present multiple options and relatively look for which one drives the intended behavior." Single-design testing is ineffective for predicting behavior.
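One lightweight way to honor this is deterministic variant assignment, so each participant sees exactly one of several options and results can be read relatively. A minimal TypeScript sketch, with made-up variant names:

```ts
// Candidate designs under comparison; names are illustrative.
const variants = ["checklist-onboarding", "video-onboarding", "blank-canvas"] as const;
type Variant = (typeof variants)[number];

// Stable hash of the session ID, so a returning participant
// always sees the same variant.
function assignVariant(sessionId: string): Variant {
  let hash = 0;
  for (const ch of sessionId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return variants[hash % variants.length];
}

console.log(assignVariant("session-42")); // one of the three variants
```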
**Overcome creator bias**
Guillermo Rauch: "You tend to overrate how well your products work. It's very important to give your product to another person and watch them interact with it." Directly observing users helps overcome the tendency to think your product is more intuitive than it is.
**Micro-level testing drives millions**
Judd Antin: "We changed seven characters and made Airbnb millions of dollars because we found out the button felt scary." Don't dismiss usability testing as junior work; finding scary or confusing CTAs can massively impact conversion.
**Progress through testing stages**
Itamar Gilad: "Mid-level tests are about building a rough version - early adopter programs, alphas, longitudinal user studies, and fish food (testing on your own team)." Use a progression from fish fooding to dogfooding to alphas to increase confidence iteratively.
**Make testing a team sport**
Noah Weiss: "We had PMs, engineers, designers, and the user researcher all in one Slack thread live, responding and reacting to the usability session." Increase engagement by having cross-functional teams live-react to sessions in shared chat threads.
### Questions to Help Users
- "What specific behavior are you trying to observe or validate?"
- "Do you need to validate the concept (use fake doors) or optimize the execution (use the real product)?"
- "How will you recruit users who have 'zero skin in the game' for honest feedback?"
- "Are you testing one option or multiple options to compare?"
- "What will you do with the findings - how will they flow back into development?"
- "Who else on the team should observe these sessions?"
### Common Mistakes to Flag
- **Testing only one design** - Present multiple options to measure relative performance
- **Building before validating** - Use Wizard of Oz or fake door tests before writing production code
- **Relying on internal intuition** - Employees are too familiar with the product to spot real user friction
- **Ignoring micro-level issues** - Small copy changes and button labels can have massive business impact
- **Testing in isolation** - Bring engineers and designers into sessions to build shared understanding
### Deep Dive
For all 14 insights from the 11 guests, see `references/guest-insights.md`.
### Related Skills
- Customer Research
- Writing PRDs
- Shipping Products
- Designing Growth Loops
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.