Install the specific skill from the multi-skill repository:

```bash
npx skills add liqiongyu/lenny_skills_plus --skill "competitive-analysis"
```
# Description
Produce a Competitive Analysis Pack (competitive alternatives map, competitor landscape, differentiation & positioning hypotheses, battlecards, monitoring plan). Use for competitor research, competitive landscape, win/loss analysis, and positioning vs alternatives.
# SKILL.md
name: "competitive-analysis"
description: "Produce a Competitive Analysis Pack (competitive alternatives map, competitor landscape, differentiation & positioning hypotheses, battlecards, monitoring plan). Use for competitor research, competitive landscape, win/loss analysis, and positioning vs alternatives."
## Competitive Analysis

### Scope

#### Covers
- Mapping competitive alternatives (status quo, workarounds, analog/non-consumption, direct + indirect competitors)
- Building a competitor landscape grounded in customer decision criteria
- Turning analysis into actionable artifacts: positioning hypotheses, win themes, battlecards, and a monitoring plan
#### When to use
- “Do a competitive analysis / competitor landscape for our product.”
- “Why are we losing deals to [Competitor X]?”
- “What are the real alternatives if we didn’t exist?”
- “Help us differentiate and position vs competitors.”
- “Create sales battlecards and win/loss takeaways.”
#### When NOT to use
- You need market sizing / TAM/SAM/SOM as the primary output (different workflow)
- You don’t know the target customer, core use case, or the decision this analysis should support
- You only need a quick list of competitors (no synthesis, no artifacts)
- You’re seeking confidential or non-public competitor information (do not attempt)
### Inputs

#### Minimum required
- Product + target customer segment + core use case (what job is being done)
- The decision to support (e.g., positioning, sales enablement, roadmap bets, pricing, market entry)
- 3–10 known competitors/alternatives (or “unknown—please map them”)
- Any available evidence (links, win/loss notes, call transcripts, customer quotes, pricing pages, reviews)
- Constraints: geography, ICP, price band, compliance/regulation (if relevant), time box
#### Missing-info strategy
- Ask up to 5 questions from references/INTAKE.md (illustrative examples below).
- If answers aren’t available, proceed with explicit assumptions and label unknowns. Provide 2–3 plausible alternative scopes (narrow vs broad).
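The canonical questions live in references/INTAKE.md; the list below only illustrates the kind of questions that file covers, not its actual contents.

```markdown
<!-- Illustrative intake questions; the canonical list is references/INTAKE.md -->
1. What decision should this analysis change, and who will act on it?
2. Who is the target customer (ICP), and what job are they hiring the product for?
3. Which competitors or alternatives come up most often in deals?
4. What evidence already exists (win/loss notes, transcripts, reviews, pricing pages)?
5. What constraints apply (geography, price band, compliance, time box)?
```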
### Outputs (deliverables)
Produce a Competitive Analysis Pack in Markdown (in-chat, or as files if requested); a minimal skeleton is sketched after this list:
1) Context snapshot (decision, ICP, use case, constraints, time box)
2) Competitive alternatives map (direct/indirect/status quo/workarounds/analog)
3) Competitor landscape table (top 5–10) with evidence links + confidence
4) Customer decision criteria + comparison matrix (customer POV)
5) Differentiation & positioning hypotheses (why win, why lose, proof points)
6) Win themes + loss risks (objections, landmines, traps)
7) Battlecards (3–5 priority competitors)
8) Monitoring plan (signals, cadence, owners, update triggers)
9) Risks / Open questions / Next steps (always included)
Templates: references/TEMPLATES.md
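For orientation, a minimal skeleton of the pack; section names mirror the deliverables list above, while the full templates remain in references/TEMPLATES.md:

```markdown
# Competitive Analysis Pack: <product>, <date>

## 1. Context snapshot
## 2. Competitive alternatives map
## 3. Competitor landscape (top 5–10, evidence + confidence)
## 4. Decision criteria + comparison matrix
## 5. Differentiation & positioning hypotheses
## 6. Win themes + loss risks
## 7. Battlecards (priority competitors)
## 8. Monitoring plan
## 9. Risks / Open questions / Next steps
```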
### Workflow (8 steps)
1) Intake + decision framing
- Inputs: User context; references/INTAKE.md.
- Actions: Confirm the decision, ICP, use case, geography, and time box. Define what “good” looks like (who will use this and for what).
- Outputs: Context snapshot.
- Checks: A stakeholder can answer: “What decision will this analysis change?”
2) Map competitive alternatives (not just logos)
- Inputs: Use case + customer job.
- Actions: List what customers do instead: status quo, internal build, manual workaround, analog tools, agencies/outsourcing, and direct/indirect competitors. Identify the “true competitor” for the deal.
- Outputs: Competitive alternatives map + short notes per alternative (example map below).
- Checks: At least 1–2 non-obvious alternatives appear (workarounds / analog / non-consumption).
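A sketch of the alternatives map for a hypothetical B2B reporting product; every entry, including the vendor names, is invented for illustration:

```markdown
## Competitive alternatives map: <product>
- Status quo: spreadsheets + email (often the real competitor)
- Internal build: scripts maintained by one analyst
- Manual workaround: an agency compiles the report quarterly
- Analog tools: a generic BI dashboard bent to this use case
- Direct competitors: VendorA, VendorB  <!-- placeholder names -->
- Indirect competitors: a suite vendor with an adjacent module
- Non-consumption: teams that simply skip the report
```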
3) Select the focus set + collect evidence (time-boxed)
- Inputs: Alternatives map; available sources.
- Actions: Pick 5–10 focus alternatives (by frequency/impact). Gather publicly available facts (positioning, features, pricing, distribution, target ICP) and internal learnings (win/loss, sales notes). Track confidence and unknowns.
- Outputs: Evidence log + initial landscape table (log shape sketched below).
- Checks: Each competitor row has at least 2 evidence points (link/quote/data) or is explicitly labeled “low confidence”.
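One evidence-log shape that satisfies the two-evidence-points check; the columns are a suggestion and the rows are placeholders:

```markdown
| Alternative | Claim                  | Evidence (link/quote)           | Confidence |
|-------------|------------------------|---------------------------------|------------|
| VendorA     | Targets mid-market ICP | pricing page; 2 lost-deal notes | High       |
| VendorB     | Faster time-to-value   | 3 review quotes                 | Low        |
```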
4) Build the comparison from the customer’s perspective
- Inputs: Focus set + evidence.
- Actions: Define 6–10 customer decision criteria (JTBD outcomes, constraints, trust, time-to-value, switching cost, price, ecosystem fit). Compare alternatives on criteria and surface “why they win”.
- Outputs: Decision criteria list + comparison matrix (matrix sketched below).
- Checks: Criteria are framed as customer outcomes/risks (not internal feature checklists).
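A comparison-matrix sketch with criteria framed as customer outcomes; the criteria, ratings, and VendorA are all hypothetical:

```markdown
| Decision criterion (customer POV) | Us | VendorA | Status quo |
|-----------------------------------|----|---------|------------|
| Time-to-first-value               | ++ | +       | -          |
| Trust / compliance fit            | +  | ++      | +          |
| Switching cost from current setup | -  | -       | n/a        |
| Total price at the ICP's scale    | +  | -       | ++         |
```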
5) Derive differentiation + positioning hypotheses
- Inputs: Matrix + wins/losses.
- Actions: Write 2–3 positioning hypotheses: (a) who we’re for, (b) the value we deliver, (c) why we’re different vs the true alternative, (d) proof points, (e) tradeoffs/non-goals.
- Outputs: Differentiation & positioning section (fill-in template below).
- Checks: Each hypothesis names the competitive alternative it’s positioning against.
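A fill-in template that forces all five elements (a)–(e) into one statement; angle-bracketed fields are placeholders:

```markdown
**Hypothesis 1.** For <ICP> who <job / struggling moment>, <product> delivers <value>.
Unlike <true competitive alternative>, we <key difference>.
Proof points: <evidence 1>; <evidence 2>.
Tradeoffs / non-goals: <what we deliberately don't do>.
```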
6) Translate into win themes + battlecards
- Inputs: Positioning hypotheses + competitor notes.
- Actions: Create 3–5 win themes and 3–5 loss risks. Produce battlecards for priority competitors (how to win, landmines, objection handling, traps to avoid).
- Outputs: Win/loss section + battlecards (skeleton below).
- Checks: Battlecards contain do/don’t talk tracks and are usable in a live sales call.
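A battlecard skeleton sized for a live call; the competitor name and every bullet are placeholders to be filled from the evidence log:

```markdown
## Battlecard: VendorA  <!-- placeholder competitor -->
- How to win: lead with <win theme>; demo <differentiator> early
- Landmines to set: ask the buyer about <known VendorA gap, with evidence>
- Objection "VendorA is cheaper": <talk track tied to a proof point>
- Do say: <claim backed by evidence>
- Don't say: <claim we can't back, or a trap topic>
```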
7) Recommend actions (product, messaging, GTM)
- Inputs: Findings.
- Actions: Propose 5–10 actions: product bets, messaging changes, pricing/packaging, distribution, partnerships, and “stop doing” items. Tie each action to a win theme or loss risk.
- Outputs: Recommendations list with rationale and owners (if known).
- Checks: Each recommendation is specific enough to execute next week/month.
8) Monitoring + quality gate + finalize
- Inputs: Draft pack.
- Actions: Define monitoring signals, cadence, and update triggers (table sketched below). Run references/CHECKLISTS.md and score with references/RUBRIC.md. Add Risks/Open questions/Next steps.
- Outputs: Final Competitive Analysis Pack.
- Checks: Pack is shareable as-is; assumptions and confidence levels are explicit.
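A monitoring-plan sketch; the signals, sources, cadences, owners, and triggers below are examples, not defaults:

```markdown
| Signal                    | Source           | Cadence   | Owner | Update trigger          |
|---------------------------|------------------|-----------|-------|-------------------------|
| Pricing/packaging changes | competitor sites | Monthly   | PMM   | any price change        |
| Reviews comparing us      | review sites     | Weekly    | PM    | 3+ negative comparisons |
| Win/loss vs focus set     | CRM notes        | Quarterly | Sales | loss-rate shift >10 pts |
```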
### Quality gate (required)
- Use references/CHECKLISTS.md and references/RUBRIC.md.
- Always include: Risks, Open questions, Next steps.
### Examples
Example 1 (B2B SaaS): “We keep losing deals to Competitor X. Build a competitive alternatives map and a battlecard for X.”
Expected: alternatives map (incl. status quo), decision criteria, X battlecard, win themes/loss risks, and a monitoring plan.
Example 2 (Consumer subscription): “We’re repositioning for a new segment. Analyze alternatives and propose 2 positioning hypotheses.”
Expected: comparison matrix by customer criteria and two clear positioning options with proof points and tradeoffs.
Boundary example: “List every competitor in our industry worldwide.”
Response: narrow the scope (ICP, geography, category) and propose a focused set plus a monitoring plan; otherwise the output becomes a low-signal directory of logos.
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.