Install this skill from the multi-skill repository:
npx skills add ramidamolis-alt/agent-skills-workflows --skill "ai-ml-expert"
# Description
Expert in AI/ML with ALL MCP servers. Uses UltraThink for model analysis, Memory for experiment tracking, Context7 for ML docs, and NotebookLM for research papers.
# SKILL.md
name: ai-ml-expert
description: Expert in AI/ML with ALL MCP servers. Uses UltraThink for model analysis, Memory for experiment tracking, Context7 for ML docs, and NotebookLM for research papers.
# 🤖 AI/ML Expert Skill (Full MCP Integration)
Master AI/ML engineer using ALL 11 MCP servers.
## MCP AI/ML Arsenal
| MCP Server | AI/ML Role |
|---|---|
| UltraThink | Model analysis, hypothesis |
| Memory | Experiment tracking, results |
| Context7 | ML framework docs |
| NotebookLM | Research papers |
| MongoDB | Training data, metrics |
| Brave/Tavily | Latest AI research |
| Notion | ML documentation |
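All of the patterns below call these servers through the same `mcp_<Server>_<tool>` convention and link their records by predictable entity names. A minimal sketch of the naming helpers that keep Memory, MongoDB, and Notion entries pointing at the same experiment (the scheme is simply the one used in the examples that follow):

```
// Build the identifiers used across Memory, MongoDB, and Notion so the same
// experiment or research note can be looked up in every store.
const experimentKey = (name, timestamp = Date.now()) =>
  `Experiment_${name}_${timestamp}`;

const researchKey = (topic, year = new Date().getFullYear()) =>
  `Research_${topic}_${year}`;

// Example: "Experiment_bert_finetune_1767225600000"
const experimentId = experimentKey("bert_finetune");
```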
## Advanced AI/ML Patterns
### 1. Experiment Tracking (Memory)
// Before experiment
await mcp_Memory_create_entities([{
name: `Experiment_${name}_${timestamp}`,
entityType: "Experiment",
observations: [
`Hypothesis: ${hypothesis}`,
`Model: ${modelArchitecture}`,
`Dataset: ${dataset}`,
`Hyperparams: ${JSON.stringify(hyperparams)}`,
`Status: Running`
]
}]);
// After experiment
await mcp_Memory_add_observations([{
entityName: `Experiment_${name}_${timestamp}`,
contents: [
`Results: Accuracy=${accuracy}, Loss=${loss}`,
`Training time: ${trainingTime}`,
`Conclusion: ${conclusion}`,
`Status: Completed`
]
}]);
// Link to best model
await mcp_Memory_create_relations([
  { from: `Experiment_${name}_${timestamp}`, to: "Model_Best", relationType: "improved" }
]);
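Once runs are stored this way, earlier results can be pulled back before designing the next experiment, so each new hypothesis builds on what is already known. A sketch using the same Memory tools (node shape assumed to match what `search_nodes` returns elsewhere in this skill: a `name` plus an `observations` array):

```
// Look up earlier runs of the same model family before planning a new experiment.
const priorRuns = await mcp_Memory_search_nodes(`Experiment_${name}`);

// Keep only completed runs and surface their recorded results.
const completed = priorRuns.filter(run =>
  run.observations.includes("Status: Completed")
);

for (const run of completed) {
  const results = run.observations.find(o => o.startsWith("Results:"));
  console.log(`${run.name}: ${results ?? "no results recorded"}`);
}
```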
### 2. Model Architecture Design (UltraThink)
await mcp_UltraThink_ultrathink({
thought: `
# Model Architecture Design
## Problem Definition
- Task: ${taskType}
- Input: ${inputShape}
- Output: ${outputShape}
- Constraints: ${constraints}
## Architecture Options
### Option A: Transformer
- Pros: State-of-the-art, attention
- Cons: Compute intensive
- Params: ~${paramsA}M
### Option B: CNN + LSTM
- Pros: Faster inference
- Cons: Limited context
- Params: ~${paramsB}M
### Option C: Fine-tuned LLM
- Pros: Transfer learning
- Cons: Large model
- Params: ~${paramsC}B
## Recommendation
Given ${constraints}, choose ${chosen} because...
## Architecture Details
\`\`\`
${architectureDetails}
\`\`\`
`,
total_thoughts: 30,
confidence: 0.8
});
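The `${paramsA}` / `${paramsB}` placeholders are easier to fill with a quick estimate than a guess. A rough sketch for the Transformer option, using the common approximation of about 12·d² parameters per encoder layer (embeddings extra), so treat the result as an order-of-magnitude figure:

```
// Rough parameter count for a Transformer encoder stack:
// ~12 * d_model^2 per layer (4*d^2 attention projections + 8*d^2 MLP block),
// plus vocab_size * d_model for token embeddings. Biases and norms are ignored.
function estimateTransformerParams({ dModel, numLayers, vocabSize = 0 }) {
  const perLayer = 12 * dModel * dModel;
  return numLayers * perLayer + vocabSize * dModel;
}

// Example: 12 layers at d_model = 768 is roughly 85M parameters before embeddings.
const paramsA = Math.round(estimateTransformerParams({ dModel: 768, numLayers: 12 }) / 1e6);
```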
### 3. Research Paper Integration (NotebookLM)
// Research latest techniques
const research = await mcp_NotebookLM_ask_question(
`What are the state-of-the-art techniques for ${task} in 2026?
What are the key innovations?
What are the implementation considerations?`,
mlResearchNotebookId
);
// Synthesize with UltraThink
await mcp_UltraThink_ultrathink({
thought: `
# Research Analysis: ${task}
## Paper Findings
${research}
## Key Techniques
1. ${technique1}: ${description1}
2. ${technique2}: ${description2}
## Implementation Plan
1. ...
## Expected Impact
${expectedImpact}
`,
total_thoughts: 20
});
// Store in Memory (split the free-text answer into separate observations)
await mcp_Memory_create_entities([{
  name: `Research_${topic}_2026`,
  entityType: "Research",
  observations: research.split("\n").map(line => line.trim()).filter(Boolean)
}]);
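Before acting on the synthesized plan, it is worth checking whether a surfaced technique has already been tried. A quick cross-reference against the experiments stored in Memory (a sketch; `technique1` is the placeholder from the analysis above):

```
// Has this technique already been tested? Scan past experiment observations.
const pastExperiments = await mcp_Memory_search_nodes("Experiment");

const alreadyTried = pastExperiments.filter(exp =>
  exp.observations.some(o => o.toLowerCase().includes(technique1.toLowerCase()))
);

if (alreadyTried.length > 0) {
  console.log(`"${technique1}" already appears in ${alreadyTried.length} past experiment(s)`);
  alreadyTried.forEach(exp => console.log(`- ${exp.name}`));
}
```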
### 4. Training Metrics Pipeline (MongoDB)
// Store training metrics
await mcp_MongoDB_insert_many(
"ml", "training_logs",
epochs.map(e => ({
experiment_id: experimentId,
epoch: e.number,
train_loss: e.trainLoss,
val_loss: e.valLoss,
train_acc: e.trainAcc,
val_acc: e.valAcc,
learning_rate: e.lr,
timestamp: new Date()
}))
);
// Analyze training dynamics: find the epoch with the best validation loss
const analysis = await mcp_MongoDB_aggregate(
  "ml", "training_logs",
  [
    { $match: { experiment_id: experimentId } },
    { $sort: { val_loss: 1 } },
    { $limit: 1 },
    { $project: { _id: 0, bestEpoch: "$epoch", bestValLoss: "$val_loss" } }
  ]
);
// Overfitting (val_loss rising while train_loss keeps falling) is easier to
// detect in application code; see the sketch below.
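The overfit point the original pipeline only hinted at is simpler to compute outside `$group`. A sketch that reads the logged epochs back and flags where validation loss starts climbing while training loss keeps falling (the `mcp_MongoDB_find` call is assumed to follow the same db/collection convention as the other MongoDB calls here; adjust to your server's actual `find` tool signature):

```
// Fetch the per-epoch logs for this experiment, oldest epoch first.
// (Assumption: argument order mirrors the insert/aggregate calls above.)
const epochsLogged = await mcp_MongoDB_find(
  "ml", "training_logs",
  { experiment_id: experimentId },
  { sort: { epoch: 1 } }
);

// Flag the first epoch where val_loss has risen for `patience` consecutive
// epochs while train_loss kept improving - a simple overfitting heuristic.
function findOverfitPoint(logs, patience = 3) {
  let rising = 0;
  for (let i = 1; i < logs.length; i++) {
    const valWorse = logs[i].val_loss > logs[i - 1].val_loss;
    const trainBetter = logs[i].train_loss < logs[i - 1].train_loss;
    rising = valWorse && trainBetter ? rising + 1 : 0;
    if (rising >= patience) return logs[i - patience + 1].epoch;
  }
  return null; // no clear overfitting detected
}

const overfitEpoch = findOverfitPoint(epochsLogged);
```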
### 5. Prompt Engineering for LLMs (Context7 + UltraThink)
// Get LLM documentation
const docs = await mcp_Context7_query_docs(
"/anthropic/claude",
"prompt engineering best practices"
);
// Design prompt with UltraThink
await mcp_UltraThink_ultrathink({
thought: `
# Prompt Engineering: ${task}
## Best Practices (from docs)
${docs.bestPractices}
## Prompt Design
### System Prompt
${systemPrompt}
### Few-Shot Examples
${examples}
### Output Format
${outputFormat}
## Expected Behavior
- Input: ${sampleInput}
- Output: ${expectedOutput}
## Testing Plan
1. Edge cases: ...
2. Adversarial inputs: ...
`,
total_thoughts: 20
});
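The testing plan at the end can be turned into a small regression harness once the prompt is settled. This is only a sketch: `callModel` stands in for whatever LLM client you use (Anthropic SDK, OpenAI SDK, an MCP tool), and `testCases` is assumed to list `{ input, mustInclude }` pairs drawn from the edge cases above.

```
// Minimal prompt regression harness. `callModel` is hypothetical: swap in your
// own LLM client. Each test passes if every expected fragment appears in the output.
async function runPromptTests(systemPrompt, testCases, callModel) {
  const failures = [];
  for (const { input, mustInclude } of testCases) {
    const output = await callModel({ system: systemPrompt, user: input });
    if (!mustInclude.every(fragment => output.includes(fragment))) {
      failures.push({ input, output });
    }
  }
  return failures;
}

const failures = await runPromptTests(systemPrompt, testCases, callModel);
console.log(`${testCases.length - failures.length}/${testCases.length} prompt tests passed`);
```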
## Secret AI/ML Techniques
### 1. Experiment Comparison
// Compare experiments
const experiments = await mcp_Memory_search_nodes("Experiment");
await mcp_UltraThink_ultrathink({
thought: `
# Experiment Comparison
| Experiment | Model | Accuracy | Loss | Time |
|------------|-------|----------|------|------|
${experiments.map(e => `| ${e.name} | ${e.model} | ${e.acc} | ${e.loss} | ${e.time} |`).join("\n")}
## Best Performer: ${best.name}
## Key Differences: ${keyDifferences}
## Recommendation: ${recommendation}
`,
total_thoughts: 15
});
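Memory nodes store free-text observations rather than structured fields, so `e.model`, `e.acc`, `e.loss`, and `e.time` need to be parsed out before building the table. A sketch of that extraction, matching the observation formats written by the experiment-tracking pattern above; feed the resulting `rows` into the table template instead of the raw nodes:

```
// Parse the "Model: ...", "Results: Accuracy=..., Loss=...", "Training time: ..."
// observations written by pattern 1 into the fields the comparison table expects.
function parseExperiment(node) {
  const get = prefix =>
    (node.observations.find(o => o.startsWith(prefix)) || "").slice(prefix.length).trim();

  const results = get("Results:"); // e.g. "Accuracy=0.91, Loss=0.23"
  const acc = (results.match(/Accuracy=([\d.]+)/) || [])[1] ?? "n/a";
  const loss = (results.match(/Loss=([\d.]+)/) || [])[1] ?? "n/a";

  return {
    name: node.name,
    model: get("Model:"),
    acc,
    loss,
    time: get("Training time:")
  };
}

const rows = experiments.map(parseExperiment);
```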
### 2. Hyperparameter Search Strategy
await mcp_UltraThink_ultrathink({
thought: `
# Hyperparameter Search Strategy
## Search Space
- Learning rate: [1e-5, 1e-2]
- Batch size: [16, 32, 64, 128]
- Hidden dim: [256, 512, 1024]
- Dropout: [0.1, 0.3, 0.5]
## Strategy Options
1. Grid Search: ${gridSearchCost} experiments
2. Random Search: ${randomSearchExpected}
3. Bayesian: ${bayesianApproach}
## Recommendation
Given ${computeBudget}, use ${chosen} with ${iterations} iterations
## Priority Order
1. Learning rate (most sensitive)
2. Hidden dim
3. Batch size
4. Dropout
`,
total_thoughts: 20
});
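If random search is chosen, the search space above translates directly into a sampler. A sketch that draws configurations with a log-uniform learning rate (the usual choice when a range spans several orders of magnitude) and uniform choices elsewhere:

```
// Sample one configuration from the search space defined above.
function sampleConfig() {
  // Log-uniform between 1e-5 and 1e-2: sample uniformly in exponent space.
  const logLr = Math.log10(1e-5) + Math.random() * (Math.log10(1e-2) - Math.log10(1e-5));
  const pick = options => options[Math.floor(Math.random() * options.length)];

  return {
    learning_rate: 10 ** logLr,
    batch_size: pick([16, 32, 64, 128]),
    hidden_dim: pick([256, 512, 1024]),
    dropout: pick([0.1, 0.3, 0.5])
  };
}

// Draw as many trials as the compute budget allows.
const trials = Array.from({ length: iterations }, sampleConfig);
```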
### 3. Data Quality Analysis
// Get data stats from MongoDB
const dataStats = await mcp_MongoDB_aggregate(
"ml", "training_data",
[
{ $group: {
_id: "$label",
count: { $sum: 1 },
avgLength: { $avg: { $size: "$features" } }
}},
{ $sort: { count: -1 } }
]
);
await mcp_UltraThink_ultrathink({
thought: `
# Data Quality Analysis
## Class Distribution
${dataStats.map(d => `- ${d._id}: ${d.count} (${d.percentage}%)`).join("\n")}
## Imbalance Detection
- Ratio: ${maxClass}:${minClass} = ${ratio}
- Severity: ${severity}
## Recommendations
1. Balancing: ${balancingStrategy}
2. Augmentation: ${augmentationStrategy}
3. Sampling: ${samplingStrategy}
`,
total_thoughts: 15
});
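The `${ratio}`, `${severity}`, and per-class `${d.percentage}` values referenced above can be computed from the aggregation result before handing it to UltraThink. A sketch using only the `_id` and `count` fields the pipeline returns; pass `withPct` into the template in place of the raw `dataStats`:

```
// Derive the imbalance figures the analysis template expects.
const total = dataStats.reduce((sum, d) => sum + d.count, 0);
const withPct = dataStats.map(d => ({
  ...d,
  percentage: ((d.count / total) * 100).toFixed(1)
}));

// The pipeline sorts by count descending, so the extremes sit at each end.
const largest = withPct[0];
const smallest = withPct[withPct.length - 1];

const maxClass = largest._id;
const minClass = smallest._id;
const imbalance = largest.count / smallest.count;
const ratio = imbalance.toFixed(1);
const severity = imbalance > 10 ? "severe" : imbalance > 3 ? "moderate" : "mild";
```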
### 4. Model Debugging Pipeline
// When model underperforms
await mcp_SequentialThinking_sequentialthinking({
thought: `
# Model Debugging
## Step 1: Verify Data Pipeline
- Check: Data loading correct?
- Check: Preprocessing consistent?
- Check: Labels aligned?
## Step 2: Check Gradients
- Vanishing gradients?
- Exploding gradients?
- Dead neurons?
## Step 3: Analyze Loss Curve
- Underfitting? (high train loss)
- Overfitting? (train/val gap)
- Learning rate issues?
## Step 4: Inspect Predictions
- Error analysis
- Confusion matrix
- Failure cases
`,
thoughtNumber: 1,
totalThoughts: 5
});
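The checklist above is only thought 1 of 5; the remaining thoughts should work through each step with actual evidence (gradient norms, the loss curves logged in MongoDB, sampled predictions). A sketch of how the chain might continue; the placeholder variables are illustrative, and `nextThoughtNeeded` is an assumption about the Sequential Thinking server's interface, so drop it if your server only accepts the fields used above:

```
// Follow-up thoughts walk the checklist with concrete findings.
const findings = [
  `Data pipeline: ${dataPipelineCheck}`, // e.g. "labels verified against raw files"
  `Gradients: ${gradientCheck}`,         // e.g. "grad norm ~1e-7 in early layers -> vanishing"
  `Loss curve: ${lossCurveCheck}`,       // e.g. "val loss rises after epoch 12 -> overfitting"
  `Predictions: ${errorAnalysis}`        // e.g. "errors concentrated in the minority class"
];

for (let i = 0; i < findings.length; i++) {
  await mcp_SequentialThinking_sequentialthinking({
    thought: `## Step ${i + 1} findings\n${findings[i]}`,
    thoughtNumber: i + 2,                      // thought 1 was the checklist above
    totalThoughts: 5,
    nextThoughtNeeded: i < findings.length - 1 // assumption: see note above
  });
}
```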
### 5. ML Documentation Generator
// Auto-generate ML documentation
const experiments = await mcp_Memory_search_nodes("Experiment");
const models = await mcp_Memory_search_nodes("Model");
await mcp_Notion_API_post_page({
parent: { database_id: mlDocsDbId },
properties: {
title: [{ text: { content: "ML Project Documentation" } }]
},
children: [
{ type: "heading_1", heading_1: { rich_text: [{ text: { content: "Experiments" } }] } },
...experiments.map(e => ({
type: "toggle",
toggle: {
rich_text: [{ text: { content: e.name } }],
children: e.observations.map(o => ({
type: "paragraph",
paragraph: { rich_text: [{ text: { content: o } }] }
}))
}
})),
{ type: "heading_1", heading_1: { rich_text: [{ text: { content: "Models" } }] } },
...models.map(m => ({
type: "toggle",
toggle: {
rich_text: [{ text: { content: m.name } }],
children: m.observations.map(o => ({
type: "paragraph",
paragraph: { rich_text: [{ text: { content: o } }] }
}))
}
}))
]
});
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.