Install this specific skill from the multi-skill repository:

```bash
npx skills add terry-li-hm/skills --skill "opencode-delegate"
```
# Description
Delegate coding tasks to OpenCode for background execution. Use when user says "delegate to opencode", "run in opencode", or wants to offload well-defined coding tasks to a cheaper model.
# SKILL.md
```yaml
name: opencode-delegate
description: Delegate coding tasks to OpenCode for background execution. Use when user says "delegate to opencode", "run in opencode", or wants to offload well-defined coding tasks to a cheaper model.
user_invocable: false
```
## OpenCode Delegate
Delegate coding tasks to OpenCode (Gemini/GLM-powered) for background execution.
## When to Use
- Cost optimization: Gemini is ~6x cheaper than Opus
- Parallel work: Run tasks in background while continuing conversation
- Well-defined tasks: Refactoring phases, code extraction, file reorganization
- Self-recoverable errors: Tasks where OpenCode can debug its own mistakes
## When NOT to Use
- Tasks requiring user decisions mid-execution
- Errors needing external context (API keys, environment issues)
- Exploratory work where direction is unclear
- Tasks with heavy dependencies on conversation context
## Commands

Run a headless task (default: GLM-4.7, unlimited quota):

```bash
opencode run -m opencode/glm-4.7 --title "Task Name" "Detailed prompt"
```

Run with Gemini 3 Flash (fallback for speed):

```bash
opencode run -m opencode/gemini-3-flash --variant high --title "Task Name" "Detailed prompt"
```

Resume a session:

```bash
opencode -s <session-id>
# Or continue the last session:
opencode -c
```

Find session IDs:

```bash
/bin/ls -lt ~/.local/share/opencode/storage/session/
# Then read the session JSON:
cat ~/.local/share/opencode/storage/session/<project-hash>/*.json
```
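The two lookup steps above can be folded into a small helper that prints each session as `<id> - <title>`, newest first. A minimal sketch, assuming the storage layout and the `id`/`title` JSON fields documented below; `list_sessions` and its directory argument are illustrative, not part of the opencode CLI:

```shell
# List sessions (newest first) as "<id> - <title>".
# Uses python3 for JSON parsing to avoid a jq dependency.
# Note: the unquoted $(ls -t ...) word-splits, so this sketch
# assumes no spaces in the storage path.
list_sessions() {
  local dir="${1:-$HOME/.local/share/opencode/storage/session}"
  local f
  for f in $(ls -t "$dir"/*/*.json 2>/dev/null); do
    python3 -c 'import json, sys
s = json.load(open(sys.argv[1]))
print(s.get("id", "?"), "-", s.get("title", ""))' "$f"
  done
}
```

Pass an alternate directory as the first argument to inspect a copied or backed-up session store.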
## Prompt Engineering for Delegation
Good delegation prompts include:
- Clear scope: "Execute Phase 2 from docs/plans/X.md"
- Specific deliverables: "Create these files: A, B, C"
- Verification command: "Verify by running: python -c '...'"
- Constraints: "Keep existing patterns", "Don't modify X"
### Example Prompt

```
Execute Phase 2 from docs/plans/refactor-plan.md.

Your task: Extract routes into backend/routes/*.py

Create these files:
1. backend/routes/__init__.py - router exports
2. backend/routes/documents.py - /upload, /documents endpoints
3. backend/routes/system.py - /healthz, /stats endpoints

Then update main.py to import and include the routers.

Verify by running: python -c 'from backend.routes import documents_router'
```
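A multi-line prompt like the one above is easiest to pass via a quoted heredoc, which keeps embedded quotes and `$`-signs intact without escaping. A sketch; the actual `opencode` invocation is left commented so the snippet is safe to paste, and the task title is illustrative:

```shell
# Capture the delegation prompt in a variable via a quoted heredoc
# ('EOF' disables expansion, so quotes and $ pass through untouched).
prompt=$(cat <<'EOF'
Execute Phase 2 from docs/plans/refactor-plan.md.
Your task: Extract routes into backend/routes/*.py
Verify by running: python -c 'from backend.routes import documents_router'
EOF
)
# Delegate it (uncomment to actually run):
# opencode run -m opencode/glm-4.7 --title "Extract routes" "$prompt"
printf '%s\n' "$prompt" | head -n 1
```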
## Session Storage

Sessions are persisted at:

```
~/.local/share/opencode/storage/session/
├── <project-hash>/
│   └── ses_<id>.json   # Contains title, summary, timestamps
└── global/
```

The session JSON includes:

- `id`: Session ID for resume
- `title`: Task name
- `summary`: `{additions, deletions, files}`
- `time`: `{created, updated}`
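Those fields can be pulled out with a short python3 one-liner (used here instead of jq to avoid an extra dependency). `session_summary` is an illustrative helper, not an opencode command, and it assumes the `summary` field shape listed above:

```shell
# Print a one-line change summary ("+adds -dels in N files")
# from a session JSON file passed as the first argument.
session_summary() {
  python3 - "$1" <<'PY'
import json, sys

s = json.load(open(sys.argv[1]))
summ = s.get("summary") or {}
print(f"+{summ.get('additions', 0)} -{summ.get('deletions', 0)} "
      f"in {summ.get('files', 0)} files")
PY
}
```

Useful for a quick glance at what a delegated task changed before deciding whether to resume the session.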
## Model Selection

Use `-m <model>` and optionally `--variant <level>` to select a model.

### Default: GLM-4.7 (unlimited quota)

```bash
-m opencode/glm-4.7
```

- Terry has a Coding Max annual plan (valid to 2027-01-28), so quota is unlimited
- SWE-bench Multilingual: 66.7%, best for TC/SC/EN mixed codebases
- LiveCodeBench V6: 84.9 (beats Claude Sonnet 4.5)
- Preserved Thinking: keeps reasoning across agentic turns
- Good for: most tasks, bilingual projects, Chinese documentation

### Fallback: Gemini 3 Flash (speed)

```bash
-m opencode/gemini-3-flash --variant high
```

- Released Dec 2025, "doctorate-level" reasoning
- 3x faster than GLM-4.7
- ⚠️ Higher hallucination rate: verify outputs in accuracy-critical contexts
### Available Models

| Model | SWE-bench | Use Case |
|---|---|---|
| `opencode/glm-4.7` ⭐ | 73.8% | Default, unlimited quota |
| `opencode/gemini-3-flash` | 78.0% | Speed-critical tasks |
| `opencode/gemini-3-pro` | 76.2% | More capable, slower |
| `opencode/claude-sonnet-4-5` | 77.2% | When you need Claude quality |
### Model Selection Tips

- Default: GLM-4.7 (no cost, good quality)
- Speed-critical: Gemini 3 Flash
- Accuracy-critical work (e.g. banking): verify outputs from Gemini (higher hallucination rate)
### Variant Levels

- `--variant high`: more reasoning effort (recommended)
- `--variant max`: maximum reasoning (slower, costlier)
- `--variant minimal`: fastest, least reasoning
## Monitoring Background Tasks

When launched via Claude Code's Bash tool with `run_in_background`:

```bash
# Check progress
tail -f /private/tmp/claude-501/-Users-terry/tasks/<task-id>.output
# Or use the TaskOutput tool with the task ID
```
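Outside Claude Code, the same monitor-a-background-task pattern works with plain shell job control. A sketch in which a placeholder subshell stands in for the real `opencode run ...` command:

```shell
# Start the task in the background, redirecting its output to a log file.
log=$(mktemp)
( echo "started"; echo "done" ) > "$log" 2>&1 &   # stand-in for: opencode run ...
pid=$!

# While it runs you would watch it with: tail -f "$log"
wait "$pid"

# After completion, check the log for the expected final marker.
grep -q "done" "$log" && echo "task finished"
```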
## Error Handling

OpenCode often self-recovers from errors by:

1. Reading the error output
2. Editing the problematic code
3. Retrying
If it fails repeatedly on the same error, take over interactively.
# Supported AI Coding Agents

This skill follows the SKILL.md standard and works with all major AI coding agents.