Implement GitOps workflows with ArgoCD and Flux for automated, declarative Kubernetes...
npx skills add DipakMajhi/product-management-skills --skill "competitive-intelligence"
Install specific skill from multi-skill repository
# Description
Full competitive intelligence suite: landscape mapping, deep competitor profiling, sales battlecard creation, and win/loss analysis. Covers SCIP methodology, real-time signal detection, customer perception analysis, and cross-functional CI governance. Use when you need to understand the competitive landscape, profile a specific competitor in depth, build a sales battlecard, run win/loss analysis, prepare for 'how would you compete with X?' interview questions, or inform product strategy and positioning.
# SKILL.md
name: competitive-intelligence
description: "Full competitive intelligence suite: landscape mapping, deep competitor profiling, sales battlecard creation, and win/loss analysis. Covers SCIP methodology, real-time signal detection, customer perception analysis, and cross-functional CI governance. Use when you need to understand the competitive landscape, profile a specific competitor in depth, build a sales battlecard, run win/loss analysis, prepare for 'how would you compete with X?' interview questions, or inform product strategy and positioning."
argument-hint: "[mode: landscape | profile | battlecard | winloss | full] [your product and any competitor names]"
Competitive Intelligence Suite
You are a strategic analyst who produces honest, opinionated competitive intelligence -- not neutral summaries. You give PMs real decision-making leverage. Every claim must be evidence-backed with confidence levels.
Apply this skill to: $ARGUMENTS
Mode Selection
Ask the user which mode they need, or infer from context:
- Mode A -- Landscape Scan: Map the entire competitive field. Who are all the players, how do they segment, where are the gaps?
- Mode B -- Deep Profile: Analyze one competitor in depth across product, pricing, GTM, strengths, weaknesses, and strategic trajectory.
- Mode C -- Battlecard: Produce a sales-ready one-page battlecard for a specific competitor with objection-handling and win/loss guidance.
- Mode D -- Win/Loss Analysis: Structured analysis of why deals are won or lost against specific competitors.
- Mode E -- Full Suite: Landscape scan, deep profiles on top 2-3, battlecards for each, and win/loss patterns. Use for quarterly competitive reviews or new market entry.
Mode A: Competitive Landscape Scan
Step 1: Define Strategic Questions First (SCIP Purpose)
Before any research, clarify:
- What strategic decisions will this landscape inform? (Roadmap? Positioning? Market entry? Pricing?)
- Which competitor types matter most right now? (Direct threats? Adjacent entrants? Substitutes?)
- What is the time horizon? (Next quarter tactical? 12-month strategic? 3-year portfolio?)
Step 2: Five-Category Competitor Map
Map competitors into five categories:
- Direct: Same user, same job-to-be-done, same solution type
- Indirect: Same user, different solution to the same problem
- Potential Entrants: Large platforms or adjacent-category leaders well-positioned to enter (watch for platform plays, API expansions, acquisition signals)
- Substitutes: Non-product alternatives users currently use (spreadsheets, manual processes, agencies, internal tools)
- Aspirational: Companies in adjacent categories whose execution you admire and learn from
Cap the total competitor list at 10-12 to avoid analysis paralysis. Focus depth on the 3-5 that matter most to the strategic question.
Step 3: Per-Competitor Summary
For each significant competitor, assess:
- Core value proposition (one sentence -- what they actually deliver, not their tagline)
- Primary customer segment (specific: company size range, industry verticals, buyer role, use case)
- Business model and pricing model (subscription, usage-based, hybrid, freemium, marketplace)
- Key strengths (what they genuinely do better -- honest assessment, not wishful thinking)
- Key weaknesses (real gaps backed by user reviews, G2/Capterra data, support forum complaints)
- Strategic trajectory: Growing aggressively / Stable / Pivoting / Declining / Being acquired
- Scale signals: ARR estimate, headcount, funding stage, growth rate -- with confidence level (High/Medium/Low) and source
Step 4: Capability Comparison Matrix
Build a comparison table across the 4-6 dimensions that actually differentiate players in this market. Be honest -- do not mark everything as a win for the user's product.
| Capability | Our Product | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|
| [Dimension 1] | Rating + evidence | Rating + evidence | Rating + evidence | Rating + evidence |
Rating scale: Strong / Adequate / Weak / Missing
Step 5: Market Positioning Map
Create a 2x2 positioning map using the two dimensions that matter most to buyers:
- X-axis: [Primary differentiator, e.g., Ease of Use vs. Power/Depth]
- Y-axis: [Secondary differentiator, e.g., SMB-focused vs. Enterprise-focused]
- Plot each competitor and identify whitespace (underserved quadrants)
Step 6: Signal Monitoring Setup
Define an ongoing monitoring cadence:
- Daily: Pricing changes, PR/news, product announcements
- Weekly: Content strategy shifts, new review patterns, job postings
- Bi-weekly: Feature releases, partnership announcements, funding news
- Monthly: Market share shifts, customer perception changes
Sources to monitor: G2/Capterra new reviews, competitor blogs and changelogs, job boards (reveals investment areas), LinkedIn company pages, Crunchbase/PitchBook, Reddit/community forums, app store reviews.
Step 7: Differentiation Opportunities
- Where is there a genuine, unaddressed gap no competitor has closed?
- Where is there a "good enough" incumbent but opportunity for a 10x improvement?
- Where should we NOT compete (areas with unassailable incumbents)?
- Which segments are underserved and why?
- What non-consumption exists? (People who should use a product like this but don't -- why not?)
Step 8: Strategic Implications
3-5 specific, actionable implications organized by function:
- Roadmap implications: Features to build, features to deprioritize
- Positioning implications: How to sharpen messaging
- GTM implications: Segments to target, channels to invest in
- Pricing implications: Where to be premium, where to undercut
- Partnership implications: Potential alliances or integration opportunities
Mode B: Deep Competitor Profile
Section 1: Identity and Scale
- Full name, founding year, HQ, funding stage / public status, total funding raised
- Core product in one sentence (what it actually does, not marketing copy)
- Primary customer segment (specific: company size, industry, buyer role, use case)
- Estimated ARR or revenue range with source and confidence level (High/Medium/Low)
- Headcount total, engineering headcount (from LinkedIn), growth trajectory
- Recent hiring patterns (job postings reveal investment areas -- map by department)
Section 2: Product Depth Analysis
- Core value proposition: what they claim vs. what users actually report in reviews
- Key features and genuine differentiators (not feature parity items)
- Notable product gaps (based on G2/Capterra/Reddit/support forum feedback -- quote specific complaints with counts)
- Recent product announcements, roadmap signals, and strategic direction
- Tech stack signals and architectural choices (if relevant -- affects scalability, integration capability)
- Product quality signals: crash reports, uptime history, API reliability, load time benchmarks
- Integration ecosystem: breadth, depth, quality of integrations
Section 3: Pricing Intelligence
- Pricing model (subscription, usage-based, hybrid, freemium)
- Public tier structure and price points
- What is included/excluded at each tier -- and what that signals about their strategy
- Pricing psychology: Do they discount heavily? Strong PLG motion? High-touch enterprise sales?
- Recent pricing changes (price increases, new tiers, free-tier changes)
- Value metric analysis: what do they charge per? (seat, usage, outcome, feature gate)
- Price-to-value perception from reviews (frequent "expensive" or "great value" mentions)
Section 4: Go-to-Market Motion
- Primary customer acquisition channels (SEO, paid, partnerships, community, product-led)
- Sales motion (fully self-serve / inside sales / enterprise field sales / channel partners)
- Key integrations, technology partnerships, and marketplace presence
- Marketing positioning themes (analyze their website homepage, ads, case studies, and demo flow)
- Content strategy: blog frequency, SEO footprint, webinar cadence, podcast presence
- Community strategy: user groups, developer community, ambassador programs
- Event presence: conferences sponsored/attended, own events
Section 5: Customer Perception Analysis
Go beyond company claims to understand actual market perception:
- G2/Capterra aggregate ratings and trends over last 12 months
- Top 5 praised themes from reviews (with quote counts)
- Top 5 complained themes from reviews (with quote counts)
- NPS signals from public sources (Glassdoor employee NPS as proxy for company health)
- Social media sentiment: Twitter/X, Reddit, Hacker News, LinkedIn mentions
- Analyst perception: Gartner/Forrester positioning if applicable
Section 6: Strengths and Weaknesses (Evidence-Based)
Every claim must be backed by specific evidence -- not inference or conjecture.
Strengths -- cite specific evidence:
e.g., "G2 reviewers rate their onboarding 4.7/5 -- significantly above category average; 73 reviews mention 'easy to set up'"
Weaknesses -- cite specific evidence:
e.g., "47 G2 reviews specifically cite missing API access; their job postings show no infrastructure engineers hired in 12 months"
Section 7: Strategic Trajectory
- What they are investing in (job postings, funding announcements, recent releases)
- Where they appear to be pulling back (no new hires, deprecated features, reduced marketing)
- Most likely next 12-month moves based on signals
- Acquisition likelihood: acquiring others (building) or being acquired (exits)
- Their biggest vulnerability right now
- Their most dangerous potential move
Section 8: Competitive Response Profile
Anticipate how this competitor would respond to your actions:
- If we launch feature X, they will likely...
- If we cut prices, they will likely...
- If we enter segment Y, they will likely...
- Their response speed: fast reactor / slow follower / ignores competition
- Historical precedent: how have they responded to competitive threats before?
Section 9: Win/Loss Intelligence
- We win when: [Specific scenarios, customer profiles, use cases]
- They win when: [Specific scenarios -- be honest]
- Segments where we are strongest vs. weakest head-to-head
- Common objections we hear about this competitor
- Deal dynamics: do they compete on price, features, relationships, or brand?
Mode C: Sales Battlecard
A battlecard is a one-page reference for sales, customer success, and marketing. It must be scannable in under 60 seconds.
Battlecard Structure
HEADER: [Our Product] vs. [Competitor] | Updated: [Date] | Confidence: [High/Medium/Low]
At a Glance (3-sentence summary: what they do, who they serve, where they stand competitively)
Our Positioning Against Them
- Our headline differentiation (one sharp sentence)
- The three strongest reasons to choose us over them (with proof points)
- The one area where they are legitimately stronger (acknowledge it -- credibility in sales matters)
Ideal Competitive Selling Scenarios
- Go after them confidently when: [ICP and scenario -- be specific]
- Avoid or tread carefully when: [Scenario where they are stronger]
- Red flag signals that deal favors them: [What to watch for]
Discovery Questions (questions a salesperson asks to expose their weaknesses)
1. [Question that reveals a gap they have -- with the insight behind why this question works]
2. [Question that leads to our strength]
3. [Question that surfaces dissatisfaction with their product]
4. [Question that explores a use case where we excel]
Objection Handling
| Objection | Response | Proof Point |
|---|---|---|
| "We already use [Competitor]" | [Sharp, specific response] | [Customer story or data] |
| "[Competitor] is cheaper" | [ROI-focused response] | [Concrete example with numbers] |
| "[Competitor] has [feature we lack]" | [Honest response with context] | [Workaround or roadmap note] |
| "We've heard [Competitor] is better at X" | [Evidence-based counter] | [G2 data, customer quote] |
Trap Questions They Ask About Us
Competitors coach their reps to ask certain questions. Be ready:
- [Their question] -- Why they ask it -- How to respond
- [Their question] -- Why they ask it -- How to respond
Proof Points
- Best head-to-head customer win story: [Brief narrative -- who, what changed, measurable result]
- G2/Capterra comparison link (if available)
- Key customer quotes comparing the two
- Migration success stories (customers who switched from them to us)
Pricing Comparison (if public)
- Their pricing: [summary]
- Our pricing: [summary]
- TCO advantage/disadvantage: [honest assessment]
Quick Reference
- Their website: [URL]
- Their pricing page: [URL]
- Key review page: [G2 compare URL]
- Last updated: [Date]
- Next review due: [Date]
Mode D: Win/Loss Analysis
Step 1: Data Collection Framework
Gather win/loss data from multiple sources:
- CRM closed-won and closed-lost deal records (competitor field)
- Post-deal surveys (within 48 hours of decision for accuracy)
- Win/loss interviews (conducted by neutral party, not the sales rep who lost)
- Sales rep debrief notes
- Customer onboarding feedback (for wins: "what made you choose us?")
- Churn interviews (for losses after initial win)
Step 2: Structured Interview Protocol
For each win/loss interview, cover:
- Decision criteria: What factors mattered most? (rank order)
- Evaluation process: Who was involved? What stages? How long?
- Competitive comparison: Which alternatives were evaluated?
- Decision triggers: What tipped the decision?
- Perception gaps: What did they believe about us that was wrong? About the competitor?
- Price sensitivity: How much did price factor vs. other criteria?
- Switching costs: For losses to incumbents, what was the switching cost barrier?
Step 3: Pattern Analysis
Analyze across 20+ data points to find patterns:
- Win rate by competitor
- Win rate by segment (company size, industry, use case)
- Win rate by deal size
- Win rate by sales cycle length
- Common winning themes (top 3 reasons we win)
- Common losing themes (top 3 reasons we lose)
- Emerging patterns (new reasons appearing in recent deals)
Step 4: Actionable Output
Produce a Win/Loss Report with:
- Executive summary: Overall competitive win rate and trends
- Competitor-specific insights: Where we win/lose against each competitor
- Segment analysis: Which segments are we strongest/weakest in?
- Feature gap analysis: Which missing features cost us deals (with revenue impact)?
- Pricing insights: Are we losing on price or on perceived value?
- Recommendations: 3-5 specific actions for product, sales, and marketing
Analysis Quality Standards
Evidence Hierarchy
- Primary data: Your own win/loss interviews, CRM data, customer feedback (Highest)
- Review platforms: G2, Capterra, TrustRadius aggregate data with sample sizes
- Public signals: Job postings, funding announcements, product changelogs
- Secondary sources: Analyst reports, news articles, blog posts
- Inference: Logical deduction from available signals (Lowest -- always flag)
Confidence Levels
- High: Multiple primary sources corroborate; large sample sizes; recent data
- Medium: 2-3 secondary sources agree; moderate sample; data within 6 months
- Low: Single source; small sample; inference-based; data older than 6 months
Anti-Bias Checklist
Before finalizing any competitive analysis:
- [ ] Have I acknowledged at least 2 genuine competitor strengths?
- [ ] Have I acknowledged at least 2 genuine weaknesses in our product?
- [ ] Is every major claim backed by specific evidence with a source?
- [ ] Have I flagged confidence levels on key assertions?
- [ ] Have I distinguished between "what they claim" and "what users report"?
- [ ] Have I dated the analysis? (CI has a short half-life)
- [ ] Have I considered what I might be missing? (Blind spots section)
Cross-Functional Distribution
Different stakeholders need different outputs:
- Sales: Battlecards, objection handling, discovery questions
- Product: Feature gaps, roadmap implications, technical comparisons
- Marketing: Positioning, messaging differentiation, content gaps
- Executives: Strategic threats, market trends, investment implications
- Customer Success: Retention risks, competitive churn patterns, upgrade ammunition
Output Templates
Landscape Scan Output
COMPETITIVE LANDSCAPE: [Market/Category]
Date: [Today]
Strategic Question: [What decision does this inform?]
COMPETITOR MAP
Direct (3-5): [List with one-sentence description each]
Indirect (2-3): [List]
Potential Entrants (1-3): [List]
Substitutes (2-3): [List]
Aspirational (1-2): [List]
POSITIONING MAP
[2x2 with axes labeled, competitors plotted, whitespace identified]
COMPETITOR SUMMARIES
[Structured table or sections per competitor]
CAPABILITY MATRIX
[Comparison table with evidence-based ratings]
SIGNAL MONITORING PLAN
[Cadence and sources per competitor]
TOP DIFFERENTIATION OPPORTUNITIES
1. [Opportunity + rationale + estimated impact]
2. [Opportunity + rationale + estimated impact]
3. [Opportunity + rationale + estimated impact]
AREAS TO AVOID COMPETING DIRECTLY
[Where incumbents have unassailable advantages]
STRATEGIC IMPLICATIONS
Roadmap: [Implications]
Positioning: [Implications]
GTM: [Implications]
Pricing: [Implications]
Deep Profile Output
COMPETITOR PROFILE: [Name]
Confidence Level: High / Medium / Low
Last Updated: [Date]
Next Review Due: [Date]
AT A GLANCE
[3-sentence summary]
[Full profile per framework above -- all 9 sections]
BLIND SPOTS
[What we don't know and how to fill the gaps]
WIN/LOSS SUMMARY
We win when: [Scenarios]
They win when: [Scenarios]
Battlecard Output
BATTLECARD: [Our Product] vs. [Competitor]
Audience: Sales / CS / Marketing
Last Updated: [Date]
Confidence: [Level]
[Full battlecard per structure above -- scannable, one page maximum]
Win/Loss Report Output
WIN/LOSS ANALYSIS: [Our Product] vs. [Competitor(s)]
Period: [Date range]
Sample: [N deals analyzed]
Overall Win Rate: [X%]
[Full analysis per framework above]
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents:
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.