7993 results (55.6ms) page 18 / 400
Kalyanikhandare29 / agent-skills-for-context-engineering-multi-agent-patterns exact

This skill should be used when the user asks to "design multi-agent system", "implement supervisor pattern", "create swarm architecture", "coordinate multiple agents", or mentions multi-agent...
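
A minimal sketch of the supervisor pattern this entry names: one router delegates each task to a specialist worker. All names here (`Supervisor`, `route`, the worker functions) are illustrative, not taken from the skill itself; a real supervisor would use an LLM to pick the worker.

```python
# Minimal supervisor-pattern sketch: a router delegates each task to a
# specialist worker. All names are illustrative.
from typing import Callable

def research_worker(task: str) -> str:
    return f"[research] findings for: {task}"

def coding_worker(task: str) -> str:
    return f"[code] implementation for: {task}"

class Supervisor:
    def __init__(self) -> None:
        self.workers: dict[str, Callable[[str], str]] = {
            "research": research_worker,
            "code": coding_worker,
        }

    def route(self, task: str) -> str:
        # A real supervisor would ask an LLM to choose the worker; a keyword
        # heuristic keeps this sketch self-contained and runnable.
        name = "code" if "implement" in task.lower() else "research"
        return self.workers[name](task)

print(Supervisor().route("implement a rate limiter"))
```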

mikeyobrien / ralph-orchestrator-pr-demo exact

Use when creating animated demos (GIFs) for pull requests or documentation. Covers terminal recording with asciinema and conversion to GIF/SVG for GitHub embedding.
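
A sketch of the record-then-convert flow the description outlines. asciinema and agg (the asciinema GIF generator) are real tools, but the filenames and the exact flow are illustrative, assuming both are installed:

```python
# Record a terminal session, then convert it to a GIF for PR embedding.
# Assumes asciinema and agg are on PATH; filenames are illustrative.
import subprocess

# Interactive recording; stops when the recorded shell exits.
subprocess.run(["asciinema", "rec", "demo.cast"], check=True)

# Convert the .cast recording to a GIF suitable for GitHub.
subprocess.run(["agg", "demo.cast", "demo.gif"], check=True)
```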

mikeyobrien / ralph-orchestrator-create-hat-collection exact

Generates new Ralph hat collection presets through guided conversation. Asks clarifying questions, validates against schema constraints, and outputs production-ready YAML files.
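
The validate-against-schema step might look like the sketch below. The schema fields here are invented for illustration; Ralph's real constraints are defined inside the skill.

```python
# Illustrative validation step: check generated preset YAML against a JSON
# Schema before writing it out. Schema fields are invented, not Ralph's.
import yaml                      # pip install pyyaml
from jsonschema import validate  # pip install jsonschema

PRESET_SCHEMA = {
    "type": "object",
    "required": ["name", "hats"],
    "properties": {
        "name": {"type": "string"},
        "hats": {"type": "array", "items": {"type": "string"}, "minItems": 1},
    },
}

preset = yaml.safe_load("""
name: demo-collection
hats:
  - planner
  - reviewer
""")

validate(instance=preset, schema=PRESET_SCHEMA)  # raises ValidationError on bad input
print("preset is valid")
```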

mikeyobrien / ralph-orchestrator-tui-validate exact

Validates Terminal User Interface (TUI) output using freeze for screenshot capture and LLM-as-judge for semantic validation. Supports both visual (PNG/SVG) and text-based validation modes.
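
A rough sketch of the two-step flow described: render captured TUI output to an image with charmbracelet's freeze, then hand the screenshot to a judge. The judge is left as a stub, and freeze's flags may differ across versions.

```python
# Render captured TUI output to a PNG with freeze, then judge it.
# The judge call is a stub; freeze flags may vary by version.
import subprocess
from pathlib import Path

tui_output = Path("tui_output.txt")
tui_output.write_text("- Tasks -\n[x] lint\n[ ] test\n")

# charmbracelet/freeze renders a file to PNG/SVG via -o.
subprocess.run(["freeze", str(tui_output), "-o", "tui_shot.png"], check=True)

def judge(image_path: str, expectation: str) -> bool:
    """Placeholder for the LLM-as-judge call on the screenshot."""
    raise NotImplementedError  # send image + expectation to a vision model

# judge("tui_shot.png", "a task list with one completed item")
```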

mikeyobrien / ralph-orchestrator-release-bump exact

Use when bumping the ralph-orchestrator version for a new release, after fixes are committed and ready to publish.


mikeyobrien / ralph-orchestrator-code-task-generator exact

This SOP generates structured code task files from rough descriptions, ideas, or PDD implementation plans. It automatically detects the input type and creates properly formatted code task files...

mikeyobrien / ralph-orchestrator-ralph-memories exact

Use when discovering codebase patterns, making architectural decisions, solving recurring problems, or learning project-specific context that should persist across sessions

mikeyobrien / ralph-orchestrator-evaluate-presets exact

Use when testing Ralph's hat collection presets, validating preset configurations, or auditing the preset library for bugs and UX issues.

Arize-ai / phoenix-phoenix-tracing exact

OpenInference semantic conventions and instrumentation for Phoenix AI observability. Use when implementing LLM tracing, creating custom spans, or deploying to production.
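
For orientation, a minimal setup along the lines Phoenix documents: register a tracer provider, then apply an OpenInference instrumentor. The project name is a placeholder, and this assumes a reachable Phoenix instance plus the arize-phoenix-otel and openinference-instrumentation-openai packages.

```python
# Minimal Phoenix tracing setup: register an OTLP tracer provider, then
# auto-instrument OpenAI calls with OpenInference semantic conventions.
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Points traces at a running Phoenix instance; project name is a placeholder.
tracer_provider = register(project_name="my-llm-app")

# Subsequent OpenAI client calls emit OpenInference-conformant spans.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```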

matteoscurati / ai-consultants exact

Consult Gemini CLI, Codex CLI, Mistral Vibe, Kilo CLI, Cursor, Claude, Amp, Qwen, and Ollama as external experts for coding questions. Automatically excludes the invoking agent from the panel to...

RefoundAI / lenny-skills-vibe-coding exact

Help users build software using AI coding tools. Use when someone is using AI to generate code, building prototypes without deep technical skills, or exploring how non-engineers can create...

Mrc220 / agent-flywheel-clawdbot-skills-and-integrations-cass exact

Coding Agent Session Search - unified CLI/TUI to index and search local coding agent history from Claude Code, Codex, Gemini, Cursor, Aider, ChatGPT, Pi-Agent, Factory, and more. Purpose-built for...

ngxtm / devkit-foundry-sdk-python exact

Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). Use when working with Foundry project clients, creating versioned agents with PromptAgentDefinition, running...
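
A minimal client bootstrap for azure-ai-projects, as a sketch only: the endpoint is a placeholder, and agent-creation APIs such as PromptAgentDefinition vary by SDK version, so they are not shown.

```python
# Client bootstrap for the Azure AI Projects SDK. The endpoint is a
# placeholder; agent-definition APIs differ across azure-ai-projects versions.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

client = AIProjectClient(
    endpoint="https://<your-resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
)
```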

existential-birds / beagle-pydantic-ai-testing exact

Test PydanticAI agents using TestModel, FunctionModel, VCR cassettes, and inline snapshots. Use when writing unit tests, mocking LLM responses, or recording API interactions.
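
A small test sketch using PydanticAI's documented TestModel: override the agent's real model so the test runs offline, with no network calls or API keys.

```python
# Unit-test sketch: swap the real model for TestModel so the agent runs
# deterministically and offline.
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent("openai:gpt-4o", system_prompt="Be concise.")

def test_agent_runs_offline():
    with agent.override(model=TestModel()):
        result = agent.run_sync("What is 2 + 2?")
    # TestModel returns deterministic stand-in output
    # (result.data in older pydantic-ai releases).
    assert result.output
```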

linxule / interpretive-orchestration-coding-workflow exact

This skill should be used when users are ready to start Stage 2 coding, ask about processing documents systematically, need to track coding progress, want to generate audit documentation, or...

existential-birds / beagle-pydantic-ai-common-pitfalls exact

Avoid common mistakes and debug issues in PydanticAI agents. Use when encountering errors, unexpected behavior, or when reviewing agent implementations.
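
One representative pitfall of this kind (mine, not necessarily from the skill): `Agent.run` is async, so calling it from synchronous code yields an un-awaited coroutine instead of a result.

```python
# Pitfall: Agent.run is a coroutine. Outside an event loop, use run_sync.
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent(TestModel())  # TestModel keeps the example offline

# Buggy: result = agent.run("hello")   # coroutine object, never executed
result = agent.run_sync("hello")       # correct in synchronous code
print(result.output)
```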

liqiongyu / lenny-skills-plus-content-marketing exact

Build a content marketing program by producing a Content Marketing Plan Pack (content market fit brief, demand-validated SEO topic map, human voice + primary channel strategy, editorial calendar,...

Kalyanikhandare29 / agent-skills-for-context-engineering-evaluation exact

This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge,...

Kalyanikhandare29 / agent-skills-for-context-engineering-advanced-evaluation exact

This skill should be used when the user asks to "implement LLM-as-judge", "compare model outputs", "create evaluation rubrics", "mitigate evaluation bias", or mentions direct scoring, pairwise...
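
A sketch of pairwise comparison with position swapping, a standard mitigation for the position bias this entry alludes to. `call_judge` is a hypothetical stub for whatever model client is in use.

```python
# Pairwise LLM-as-judge with position swapping: judge both orderings and
# treat inconsistent verdicts as a tie. call_judge is a hypothetical stub.
def call_judge(prompt: str) -> str:
    """Hypothetical LLM call; should return 'A' or 'B'."""
    raise NotImplementedError

def pairwise_winner(question: str, answer_1: str, answer_2: str) -> str:
    template = (
        "Question: {q}\n\nAnswer A:\n{a}\n\nAnswer B:\n{b}\n\n"
        "Which answer is better? Reply with exactly 'A' or 'B'."
    )
    first = call_judge(template.format(q=question, a=answer_1, b=answer_2))
    # Swap positions and judge again; disagreement signals position bias.
    second = call_judge(template.format(q=question, a=answer_2, b=answer_1))
    if (first, second) == ("A", "B"):
        return "answer_1"
    if (first, second) == ("B", "A"):
        return "answer_2"
    return "tie"  # inconsistent verdicts are discarded as ties
```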