Maximum parallel execution mode with droid orchestration for high-throughput task completion
Activate maximum performance mode with parallel agent orchestration for high-throughput task completion
Reference for choosing between LLM tools (ask-llms, llm-council, remote-llm). Consult before querying multiple models.
Agentic Workflow Pattern
Guide for integrating Agentica SDK with Claude Code CLI proxy
Release preparation workflow - security audit → E2E tests → review → changelog → docs
Perform peak calling for ChIP-seq or ATAC-seq data using MACS3, with intelligent parameter detection from user feedback. Use when calling peaks on ChIP-seq or ATAC-seq data.
Token-efficient parallel execution mode using Haiku and Sonnet droids
Delegate coding tasks to OpenCode for background execution. Use when user says "delegate to opencode", "run in opencode", or wants to offload well-defined coding tasks to a cheaper model.
Memory consolidation and defragmentation for long-term memory maintenance. Use when asked to consolidate memories, defrag memory, run REM sleep, clean up memory files, or process session logs into...
Migration workflow - research → analyze → plan → implement → review
Token-efficient parallel execution mode using Haiku and Sonnet agents
Goal-based workflow orchestration - routes tasks to specialist agents based on user goals
Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than...