Yeachan-Heo / oh-my-claudecode-ultrawork

Activate maximum performance mode with parallel agent orchestration for high-throughput task completion

terry-li-hm / skills-llm-routing

Reference for choosing between LLM tools (ask-llms, llm-council, remote-llm). Consult before querying multiple models.

parcadei / continuous-claude-v3-agentic-workflow

Agentic Workflow Pattern

parcadei / continuous-claude-v3-agentica-claude-proxy

Guide for integrating Agentica SDK with Claude Code CLI proxy

parcadei / continuous-claude-v3-release

Release preparation workflow - security audit → E2E tests → review → changelog → docs

BIsnake2001 / chromskills-peak-calling

Perform peak calling for ChIP-seq or ATAC-seq data using MACS3, with intelligent parameter detection from user feedback. Use it when you want to call peaks on ChIP-seq or ATAC-seq data.
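
For orientation, peak calling here boils down to a MACS3 `callpeak` invocation; below is a minimal sketch of the kind of command such a skill might assemble, assuming `macs3` is installed and using hypothetical file names (`treatment.bam`, `control.bam`). The ATAC-seq branch follows a common cut-site recipe, not necessarily the parameters this skill detects.

```python
import subprocess

# Hypothetical inputs; the skill infers these from user feedback.
treatment = "treatment.bam"  # ChIP or ATAC alignments
control = "control.bam"      # input/IgG control (ChIP-seq only)
is_atac = False

cmd = [
    "macs3", "callpeak",
    "-t", treatment,
    "-n", "sample",          # prefix for output files
    "-g", "hs",              # effective genome size (human)
    "--outdir", "macs3_out",
    "-q", "0.05",            # FDR cutoff
]
if is_atac:
    # ATAC-seq: no control; center reads on Tn5 cut sites
    # (a common recipe, not necessarily what the skill picks).
    cmd += ["-f", "BAM", "--nomodel", "--shift", "-100", "--extsize", "200"]
else:
    cmd += ["-c", control, "-f", "BAM"]

subprocess.run(cmd, check=True)
```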

MeroZemory / oh-my-droid-ecomode

Token-efficient parallel execution mode using Haiku and Sonnet droids

terry-li-hm / skills-opencode-delegate

Delegate coding tasks to OpenCode for background execution. Use when user says "delegate to opencode", "run in opencode", or wants to offload well-defined coding tasks to a cheaper model.

stewnight / rem-sleep-skill

Memory consolidation and defragmentation for long-term memory maintenance. Use when asked to consolidate memories, defrag memory, run REM sleep, clean up memory files, or process session logs into...

parcadei / continuous-claude-v3-migrate

Migration workflow - research → analyze → plan → implement → review

Yeachan-Heo / oh-my-claudecode-ecomode

Token-efficient parallel execution mode using Haiku and Sonnet agents

parcadei / continuous-claude-v3-workflow-router

Goal-based workflow orchestration - routes tasks to specialist agents based on user goals
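
The listing doesn't document the routing logic; as a rough illustration, goal-based routing can be as simple as matching the goal text against a keyword table (agent names and keywords below are hypothetical):

```python
# Hypothetical goal-to-agent table; the real skill's agents and
# matching rules are not described in this listing.
ROUTES = {
    ("release", "ship", "publish"): "release-agent",
    ("migrate", "upgrade", "port"): "migration-agent",
    ("fix", "bug", "debug"): "debugging-agent",
}

def route(goal: str) -> str:
    goal_lower = goal.lower()
    for keywords, agent in ROUTES.items():
        if any(word in goal_lower for word in keywords):
            return agent
    return "general-agent"  # fallback when no specialist matches

print(route("Migrate the service to Postgres 16"))  # -> migration-agent
```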

zechenzhangAGI / ai-research-skills-tensorrt-llm

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than...
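
For context, a minimal sketch of TensorRT-LLM's high-level Python API, assuming a recent `tensorrt_llm` release and a supported NVIDIA GPU (class and argument names vary across versions; the model name is illustrative):

```python
# Sketch only: assumes a recent tensorrt_llm with the high-level
# LLM API; exact names may differ by version.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # builds/loads an engine
params = SamplingParams(temperature=0.8, max_tokens=128)

for output in llm.generate(["Summarize TensorRT-LLM in one line."], params):
    print(output.outputs[0].text)
```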