Install this skill from the multi-skill repository:

```shell
npx skills add ramidamolis-alt/agent-skills-workflows --skill "prompt-master"
```
# Description
Ultimate prompt engineering with MCP integration. Expert in crafting prompts that leverage UltraThink, SequentialThinking, and multi-model orchestration. Use for any LLM interaction optimization.
# SKILL.md

```yaml
name: prompt-master
description: Ultimate prompt engineering with MCP integration. Expert in crafting prompts that leverage UltraThink, SequentialThinking, and multi-model orchestration. Use for any LLM interaction optimization.
```
## 📝 Prompt Engineering Master (MCP-Enhanced)

Master prompt engineer with deep MCP multi-model orchestration.

## MCP-Enhanced Prompting

### UltraThink Deep Reasoning
```js
mcp_UltraThink_ultrathink({
  thought: `
## Problem Analysis
${problemDescription}

## Approach Options
1. ${option1} - Pros/Cons
2. ${option2} - Pros/Cons
3. ${option3} - Pros/Cons

## Evaluation Criteria
- Correctness: ...
- Performance: ...
- Maintainability: ...

## Recommendation
Based on analysis...
`,
  total_thoughts: 50,
  confidence: 0.85,
  assumptions: [
    { id: "A1", text: "Requirement is stable", critical: true }
  ],
  uncertainty_notes: "Need to verify X with user"
})
```
### SequentialThinking Step-by-Step

```js
mcp_SequentialThinking_sequentialthinking({
  thought: `
Step 1: Understand the input format
- Expected: JSON with fields A, B, C
- Edge cases: null, empty, malformed

Step 2: Validate input
- Check required fields
- Type validation
- Range checks

Step 3: Transform data
...
`,
  thoughtNumber: 1,
  totalThoughts: 10,
  isRevision: false,
  needsMoreThoughts: true
})
```
## Advanced Prompt Patterns

### 1. Multi-Stage Reasoning
```text
Stage 1: Research (Context7 + Brave)
├── mcp_Context7_query-docs(lib, "pattern")
└── mcp_Brave_brave_web_search("best practices 2026")

Stage 2: Analysis (UltraThink)
└── mcp_UltraThink_ultrathink({
      thought: "Given research findings...",
      total_thoughts: 30
    })

Stage 3: Verification (SequentialThinking)
└── mcp_SequentialThinking_sequentialthinking({
      thought: "Verifying conclusion...",
      thoughtNumber: 1,
      totalThoughts: 5
    })
```
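The staged flow above can be sketched as a typed pipeline, with plain functions standing in for the MCP calls. Everything below — `pipeline`, the stage shapes, the mock stages — is illustrative, not part of any MCP API:

```typescript
// A three-stage pipeline: research feeds analysis, analysis feeds verification.
interface Research { query: string; findings: string[] }
interface Analysis extends Research { recommendation: string }
interface Verified extends Analysis { verified: boolean }

// Compose three stages into one function; each stage's output is the next stage's input.
function pipeline(
  research: (q: string) => Research,
  analyze: (r: Research) => Analysis,
  verify: (a: Analysis) => Verified,
): (q: string) => Verified {
  return (q) => verify(analyze(research(q)));
}

// Mock stages standing in for Context7/Brave, UltraThink, and SequentialThinking.
const run = pipeline(
  (q) => ({ query: q, findings: [`doc hit for "${q}"`] }),
  (r) => ({ ...r, recommendation: `adopt ${r.findings[0]}` }),
  (a) => ({ ...a, verified: a.recommendation.length > 0 }),
);
```

The point of the sketch is the shape: each stage only sees its predecessor's output, so swapping a real MCP call in for a mock stage does not change the surrounding code.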
### 2. Confidence-Gated Prompting

```js
const result = await mcp_UltraThink_ultrathink({
  thought: initialAnalysis,
  total_thoughts: 10,
  confidence: null // Let it self-assess
});

if (result.confidence < 0.7) {
  // Low confidence - need more research
  const moreContext = await Promise.all([
    mcp_Context7_query-docs(lib, query),
    mcp_Brave_brave_web_search(query),
    mcp_NotebookLM_ask_question(query, nb_id)
  ]);

  // Re-analyze with more context
  await mcp_UltraThink_ultrathink({
    thought: `With additional context: ${moreContext}...`,
    total_thoughts: 20,
    confidence: null
  });
}
```
### 3. Branching Exploration

```js
// Main analysis
const main = await mcp_UltraThink_ultrathink({
  thought: "Approach A analysis...",
  total_thoughts: 15,
  session_id: "analysis-session"
});

// Branch for alternative
const branch = await mcp_UltraThink_ultrathink({
  thought: "What if we tried approach B instead?",
  total_thoughts: 10,
  session_id: "analysis-session",
  branch_from_thought: 5,
  branch_id: "alternative-B"
});

// Compare and select
await mcp_UltraThink_ultrathink({
  thought: `Comparing:
- Approach A (confidence: ${main.confidence})
- Approach B (confidence: ${branch.confidence})
Winner: ...`,
  total_thoughts: 5
});
```
### 4. Assumption Tracking

```js
await mcp_UltraThink_ultrathink({
  thought: "Designing auth system...",
  total_thoughts: 20,
  assumptions: [
    {
      id: "A1",
      text: "Database supports row-level security",
      critical: true,
      confidence: 0.9,
      verifiable: true
    },
    {
      id: "A2",
      text: "Users have unique email addresses",
      critical: true,
      confidence: 1.0
    },
    {
      id: "A3",
      text: "Performance < 100ms acceptable",
      critical: false,
      confidence: 0.8
    }
  ],
  depends_on_assumptions: ["A1", "A2"]
})

// Later, if an assumption is proven false:
await mcp_UltraThink_ultrathink({
  thought: "A1 was wrong! Database doesn't support RLS...",
  invalidates_assumptions: ["A1"],
  total_thoughts: 10
});
```
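The `assumptions` / `invalidates_assumptions` fields above are specific to the UltraThink server, but the bookkeeping behind them can be sketched in plain TypeScript. All names here (`AssumptionRegistry`, `conclude`, `invalidate`) are illustrative, not an MCP API:

```typescript
// Conclusions declare which assumptions they depend on; invalidating an
// assumption returns every conclusion that must now be re-analyzed.
interface Assumption { id: string; text: string; critical: boolean; valid: boolean }
interface Conclusion { id: string; text: string; dependsOn: string[] } // assumption ids

class AssumptionRegistry {
  private assumptions = new Map<string, Assumption>();
  private conclusions: Conclusion[] = [];

  add(a: { id: string; text: string; critical: boolean }): void {
    this.assumptions.set(a.id, { ...a, valid: true });
  }

  conclude(c: Conclusion): void {
    this.conclusions.push(c);
  }

  // Mark an assumption false and return the conclusions that depended on it.
  invalidate(id: string): Conclusion[] {
    const a = this.assumptions.get(id);
    if (!a) throw new Error(`unknown assumption: ${id}`);
    a.valid = false;
    return this.conclusions.filter((c) => c.dependsOn.includes(id));
  }
}
```

In the auth example, invalidating `A1` would hand back the RLS-based design so it can be re-run through UltraThink with the assumption removed.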
## Prompt Templates with MCP

### Technical Decision
```js
await mcp_UltraThink_ultrathink({
  thought: `
# Decision: ${decisionTitle}

## Context
${background}

## Options Evaluated
| Option | Pros | Cons | Risk   | Effort |
|--------|------|------|--------|--------|
| A      | ...  | ...  | Low    | Medium |
| B      | ...  | ...  | Medium | Low    |
| C      | ...  | ...  | High   | High   |

## Research Findings
- Context7: ${docsFindings}
- Brave: ${webFindings}
- Memory: ${pastExperience}

## Recommendation
Option: X
Rationale: ...

## Implementation Plan
1. ...
2. ...
3. ...
`,
  total_thoughts: 25,
  confidence: 0.85
})
```
### Code Review

```js
await mcp_SequentialThinking_sequentialthinking({
  thought: `
# Code Review: ${fileName}

## Step 1: Security Analysis
- [ ] SQL injection risks
- [ ] XSS vulnerabilities
- [ ] Auth bypass possible?
- [ ] Sensitive data exposure

## Step 2: Performance Check
- [ ] N+1 queries
- [ ] Missing indexes
- [ ] Memory leaks
- [ ] Unnecessary complexity

## Step 3: Best Practices
- [ ] Error handling complete
- [ ] Logging appropriate
- [ ] Types correct
- [ ] Tests adequate

## Findings
${analysisResults}
`,
  thoughtNumber: 1,
  totalThoughts: 5
})
```
### Debugging Analysis

```js
// First, check Memory for similar bugs
const pastBugs = await mcp_Memory_search_nodes("bug similar error");

await mcp_UltraThink_ultrathink({
  thought: `
# Debug Analysis: ${errorMessage}

## Error Context
- File: ${fileName}
- Line: ${lineNumber}
- Stack: ${stackTrace}

## Similar Past Bugs
${pastBugs.map(b => `- ${b.name}: ${b.solution}`).join('\n')}

## Hypothesis Generation
1. Hypothesis A: ${hypothesis1}
   - Evidence for: ...
   - Evidence against: ...
   - Test: ...
2. Hypothesis B: ${hypothesis2}
   - Evidence for: ...
   - Evidence against: ...
   - Test: ...

## Most Likely Cause
${conclusion}

## Fix Approach
${fixPlan}
`,
  total_thoughts: 20,
  confidence: 0.7
})
```
## Secret Prompting Techniques

### 1. Persona Anchoring + MCP

```text
You are a world-class security expert.
Analyze using Memory MCP for past vulnerabilities,
Context7 for security documentation,
UltraThink for deep reasoning.
```
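An anchor like this can be generated rather than hand-written. A minimal sketch — the `personaPrompt` helper and its parameters are hypothetical, not an MCP API:

```typescript
// Build a persona-anchored prompt that tells the model which MCP server
// to use for which job. serverRoles maps server name -> its role.
function personaPrompt(persona: string, serverRoles: Record<string, string>): string {
  const roleLines = Object.entries(serverRoles)
    .map(([server, role]) => `${server} MCP for ${role}`)
    .join(",\n");
  return `You are ${persona}.\nAnalyze using:\n${roleLines}.`;
}
```

Calling it with `"a world-class security expert"` and roles for Memory, Context7, and UltraThink reproduces the anchor text above while keeping the server-to-role mapping in one reusable place.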
### 2. Chain-of-Verification

```js
// Generate → Verify → Refine
const answer = await mcp_UltraThink_ultrathink({...});

const verification = await mcp_SequentialThinking_sequentialthinking({
  thought: `Verifying: ${answer.conclusion}
- Check 1: Logic sound?
- Check 2: Evidence supports?
- Check 3: Edge cases covered?`
});

if (!verification.passed) {
  // Refine with new constraints
}
```
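The snippet above leaves the refine step open. The full generate → verify → refine loop can be sketched with plain functions standing in for the MCP calls; `generate`, `verify`, and `maxRounds` below are illustrative stand-ins, and the verifier is a toy that passes once a draft has been refined:

```typescript
interface Draft { text: string; revision: number }
interface Check { name: string; passed: boolean }

// Stand-in for the generator: each refinement folds the failed checks back in.
function generate(prev: Draft | null, feedback: string[]): Draft {
  const revision = prev ? prev.revision + 1 : 0;
  return { text: `draft r${revision}: ${feedback.join(", ") || "initial"}`, revision };
}

// Stand-in for the verifier: runs the three checks from the prompt above.
function verify(draft: Draft): Check[] {
  const ok = draft.revision >= 1; // toy rule: passes after one refinement
  return [
    { name: "logic sound", passed: ok },
    { name: "evidence supports", passed: ok },
    { name: "edge cases covered", passed: ok },
  ];
}

function chainOfVerification(maxRounds: number): Draft {
  let draft = generate(null, []);
  for (let round = 0; round < maxRounds; round++) {
    const failed = verify(draft).filter((c) => !c.passed).map((c) => c.name);
    if (failed.length === 0) return draft; // all checks passed
    draft = generate(draft, failed); // refine with failed checks as constraints
  }
  return draft; // best effort after maxRounds
}
```

The key design point is that failed check names become explicit constraints on the next generation pass, instead of a vague "try again".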
### 3. Multi-Model Consensus

```js
// Use different MCP servers for the same problem
const ultraThinkAnswer = await mcp_UltraThink_ultrathink({...});
const sequentialAnswer = await mcp_SequentialThinking_sequentialthinking({...});
const notebookAnswer = await mcp_NotebookLM_ask_question(query, nb);

// Synthesize consensus
```
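The `// Synthesize consensus` step is left open above. One simple way to do it is a majority vote over normalized answers; the normalization and tie-breaking rules here are illustrative assumptions, not part of any MCP server:

```typescript
// Majority vote over answers from different reasoning servers.
// Answers are normalized (trimmed, lowercased) before counting;
// ties are broken by the order the answers were supplied.
function synthesizeConsensus(answers: string[]): string {
  const counts = new Map<string, { original: string; votes: number }>();
  for (const a of answers) {
    const key = a.trim().toLowerCase();
    const entry = counts.get(key) ?? { original: a, votes: 0 };
    entry.votes += 1;
    counts.set(key, entry);
  }
  let best: { original: string; votes: number } | null = null;
  for (const entry of counts.values()) {
    if (!best || entry.votes > best.votes) best = entry;
  }
  if (!best) throw new Error("no answers to synthesize");
  return best.original;
}
```

When the vote splits evenly, a reasonable fallback is one more UltraThink pass with all of the disagreeing answers supplied as context.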
### 4. Progressive Disclosure

```js
// Start simple, go deep if needed
let thoughts = 5;
let result = await mcp_UltraThink_ultrathink({
  thought: query,
  total_thoughts: thoughts
});

while (result.confidence < 0.8 && thoughts < 50) {
  thoughts += 10;
  result = await mcp_UltraThink_ultrathink({
    thought: "Going deeper: " + query,
    total_thoughts: thoughts,
    needsMoreThoughts: true
  });
}
```
### 5. Context Maximization

```js
// Before complex prompt, load maximum context
const docs = await mcp_Context7_query-docs(lib, query); // 50K tokens
const memory = await mcp_Memory_search_nodes(topic);
const research = await mcp_NotebookLM_ask_question(topic, nb);

// Now prompt with full context
await mcp_UltraThink_ultrathink({
  thought: `Given:
- Documentation: ${docs}
- Past knowledge: ${memory}
- Research: ${research}
Analyze and recommend...`,
  total_thoughts: 30
});
```
# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.