Install this specific skill from the multi-skill repository:

```bash
npx skills add namesreallyblank/Clorch --skill "orchestration"
```
# Description
MANDATORY - Load this skill first. Routes to bootstrap + domain rules.
# SKILL.md
```yaml
name: orchestration
description: MANDATORY - Load this skill first. Routes to bootstrap + domain rules.
impact: CRITICAL
version: 3.0.0
invokes:
  - rlm-process
```
Clorch -- Claude Orchestration
─────────────────◆─────────────────
░█████╗░██╗░░░░░░█████╗░██████╗░░█████╗░██╗░░██╗
██╔══██╗██║░░░░░██╔══██╗██╔══██╗██╔══██╗██║░░██║
██║░░╚═╝██║░░░░░██║░░██║██████╔╝██║░░╚═╝███████║
██║░░██╗██║░░░░░██║░░██║██╔══██╗██║░░██╗██╔══██║
╚█████╔╝███████╗╚█████╔╝██║░░██║╚█████╔╝██║░░██║
░╚════╝░╚══════╝░╚════╝░╚═╝░░╚═╝░╚════╝░╚═╝░░╚═╝
─────────────────◆─────────────────
## Step 1: Load Bootstrap (ALWAYS)
Read bootstrap.md first. It contains:
- Role detection (Orchestrator vs Worker)
- Iron Law + Iron Claw (UNBREAKABLE)
- Tool ownership table
- Worker preamble templates
- Memory recovery protocol
Workers: Load bootstrap.md, execute task, return.
## Step 2: Detect Task Type
| Request Pattern | Domain |
|---|---|
| "build", "implement", "add feature" | software-development |
| "fix", "debug", "bug" | software-development |
| "refactor", "clean up" | software-development |
| "review PR", "security audit" | code-review |
| "explore", "understand codebase" | research |
| "write tests", "add coverage" | testing |
| "document", "README" | documentation |
| "deploy", "CI/CD", "pipeline" | devops |
| "analyze data", "chart" | data-analysis |
| "plan", "roadmap" | project-management |
| "trading", "chart", "market", "stock", "crypto" | trading-analysis |
| "UI", "UX", "design", "frontend", "CSS", "accessibility", "performance", "responsive" | frontend-development |
## Step 3: Load Rules JIT
| Trigger | Load |
|---|---|
| After compact detected | rules/memory-recovery.md |
| Planning task decomposition | rules/swarm-patterns.md |
| Choosing which agent | rules/agent-routing.md |
| Large-context tasks | rules/rlm-routing.md |
| External/current data needed | rules/mcp-integration.md |
| Token pressure | rules/cost-management.md |
| Managing multi-agent sessions | rules/context-management.md |
| Multi-task coordination | rules/task-coordination.md |
| After 5+ agent tasks | rules/rlm-learning-synthesis.md |
| User requests synthesis | rules/rlm-learning-synthesis.md |
## Rule Index
| Rule | Impact | Purpose |
|---|---|---|
| scope-discipline.md | UNBREAKABLE | Iron Claw details |
| core-identity.md | CRITICAL | Clorch identity |
| user-control.md | CRITICAL | Confirmation points |
| worker-preamble.md | CRITICAL | Spawn templates |
| memory-recovery.md | CRITICAL | Post-compact recovery |
| task-coordination.md | HIGH | Claude Code Tasks integration |
| swarm-patterns.md | HIGH | Task decomposition |
| context-management.md | HIGH | Multi-agent sessions |
| thread-pattern.md | HIGH | Output filtering |
| agent-routing.md | HIGH | Agent selection |
| rlm-routing.md | HIGH | Large-context routing |
| rlm-learning-synthesis.md | HIGH | Periodic learning synthesis |
| mcp-integration.md | MEDIUM | External data |
| cost-management.md | MEDIUM | Token optimization |
| GITIGNORE.md | LOW | Project gitignore patterns |
## Domain References
| Domain | Path |
|---|---|
| Software | references/domains/software-development.md |
| Code Review | references/domains/code-review.md |
| Research | references/domains/research.md |
| Testing | references/domains/testing.md |
| Documentation | references/domains/documentation.md |
| DevOps | references/domains/devops.md |
| Data | references/domains/data-analysis.md |
| Planning | references/domains/project-management.md |
| Trading | references/domains/trading-analysis.md |
| Frontend | references/domains/frontend-development.md |
## Auto-Learning Protocol
After any agent completes a non-trivial task, automatically capture learnings to build institutional memory.
### Worker-Level Learning
Workers now auto-capture learnings via the preamble template. The orchestrator
should ALSO capture its own coordination-level learnings (decisions about which
agents to use, workflow routing choices, patterns in user requests).
Learning happens at TWO levels:
1. Worker level - What the agent learned during execution (codebase patterns, solutions, errors)
2. Orchestrator level - What the coordinator learned about task routing and workflow effectiveness
### When to Capture
Capture learnings after:
- Successful implementations (new features, refactorings)
- Architectural or design decisions
- Error resolution or debugging
- Discovery of codebase patterns
- Failed approaches (to avoid repeating)
Skip for: Simple file reads, quick lookups, trivial one-line fixes.
### What to Extract
For each learning capture:
1. What was the task? (Brief description)
2. What approach was used? (Strategy, tools, techniques)
3. What decisions were made and why? (Key choices and rationale)
4. What worked / what didn't? (Outcomes, gotchas, surprises)
5. Key file paths and patterns (With line numbers if relevant)
### Capture Format

```bash
cd /Users/lexudo/.claude/opc && PYTHONPATH=. uv run python scripts/core/store_learning.py \
  --session-id "<task-identifier>" \
  --type <TYPE> \
  --content "<what was learned>" \
  --context "<task context>" \
  --tags "tag1,tag2,tag3" \
  --confidence high|medium|low
```
### Learning Types

| Type | When to Use |
|---|---|
| WORKING_SOLUTION | After successful implementations, fixes that worked |
| ARCHITECTURAL_DECISION | After design choices, system structure decisions |
| FAILED_APPROACH | After something didn't work (avoid repeating) |
| CODEBASE_PATTERN | After discovering patterns in the code |
| ERROR_FIX | After resolving specific errors or bugs |
### Capture Rules
- Default behavior: Auto-learning is ON by default. No opt-in needed.
- Timing: Capture AFTER agent returns results, not during execution
- File paths: Include absolute paths with line numbers in content
- Tags: Always tag with the skill/workflow that triggered the task (e.g., "orchestration", "kraken", "build-workflow")
- Confidence: Use `high` for verified solutions, `medium` for working approaches, `low` for experimental findings
### Example

```bash
cd /Users/lexudo/.claude/opc && PYTHONPATH=. uv run python scripts/core/store_learning.py \
  --session-id "hook-auto-build-impl" \
  --type WORKING_SOLUTION \
  --content "TypeScript hooks require npm install in .claude/hooks/ before build.sh compiles TS to JS in dist/. The build.sh script handles compilation automatically. Hook errors often come from missing dependencies or path issues." \
  --context "Implementing auto-build for TypeScript hooks in hook development workflow" \
  --tags "hooks,typescript,build,orchestration" \
  --confidence high
```
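An orchestrator-level capture uses the same format; the sketch below is illustrative, with a hypothetical session id, content, and tags rather than values taken from a real session:

```bash
# Hypothetical orchestrator-level capture; all values are illustrative.
cd /Users/lexudo/.claude/opc && PYTHONPATH=. uv run python scripts/core/store_learning.py \
  --session-id "feature-auth-coordination" \
  --type ARCHITECTURAL_DECISION \
  --content "Splitting the task into a research worker followed by an implementation worker avoided rework; spawning both in parallel had previously produced conflicting edits." \
  --context "Orchestrator-level routing decision while coordinating the auth feature" \
  --tags "orchestration,routing,swarm-patterns" \
  --confidence medium
```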
───◆─── Routing Ready ───◆───
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.