7162 results (81.0ms) page 18 / 359
mikeyobrien / ralph-orchestrator-evaluate-presets exact

Use when testing Ralph's hat collection presets, validating preset configurations, or auditing the preset library for bugs and UX issues.

mikeyobrien / ralph-orchestrator-tui-validate exact

Validates Terminal User Interface (TUI) output using freeze for screenshot capture and LLM-as-judge for semantic validation. Supports both visual (PNG/SVG) and text-based validation modes.

mikeyobrien / ralph-orchestrator-find-code-tasks exact

Lists all code tasks in the repository with their status, dates, and metadata. Useful for getting an overview of pending work or finding specific tasks.

mikeyobrien / ralph-orchestrator-code-assist exact

This SOP guides the implementation of code tasks using test-driven development principles, following a structured Explore, Plan, Code, Commit workflow. It balances automation with user...

mikeyobrien / ralph-orchestrator-create-hat-collection exact

Generates new Ralph hat collection presets through guided conversation. Asks clarifying questions, validates against schema constraints, and outputs production-ready YAML files.

Arize-ai / phoenix-phoenix-tracing exact

OpenInference semantic conventions and instrumentation for Phoenix AI observability. Use when implementing LLM tracing, creating custom spans, or deploying to production.
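
A minimal sketch of the setup this skill covers, assuming arize-phoenix and the OpenInference OpenAI instrumentor are installed; the project name, span name, and attribute are illustrative placeholders:

```python
# Register Phoenix as the OpenTelemetry trace provider, then instrument
# OpenAI calls via OpenInference. Project name is an assumed placeholder.
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(project_name="demo-llm-app")
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# Custom spans use plain OpenTelemetry and export to Phoenix:
tracer = tracer_provider.get_tracer(__name__)
with tracer.start_as_current_span("retrieval-step") as span:
    span.set_attribute("retriever.top_k", 5)  # illustrative attribute
```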

RefoundAI / lenny-skills-vibe-coding exact

Help users build software using AI coding tools. Use when someone is using AI to generate code, building prototypes without deep technical skills, or exploring how non-engineers can create...

ngxtm / devkit-foundry-sdk-python exact

Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). Use when working with Foundry project clients, creating versioned agents with PromptAgentDefinition, running...
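
A hedged sketch against the azure-ai-projects client; it shows the classic create_agent call rather than the newer PromptAgentDefinition flow mentioned above, whose exact shape may differ by SDK version. The endpoint URL and model deployment name are placeholders:

```python
# Connect to a Foundry project and create a simple agent. The endpoint
# and the "gpt-4o-mini" deployment name are assumed placeholders.
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

client = AIProjectClient(
    endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
)
agent = client.agents.create_agent(
    model="gpt-4o-mini",
    name="demo-agent",
    instructions="You are a helpful assistant.",
)
```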

existential-birds / beagle-pydantic-ai-testing exact

Test PydanticAI agents using TestModel, FunctionModel, VCR cassettes, and inline snapshots. Use when writing unit tests, mocking LLM responses, or recording API interactions.
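
A minimal sketch of the TestModel pattern, assuming pydantic-ai is installed; the agent, model string, and prompt are placeholders:

```python
# Swap the real model for TestModel inside a test so no API call is made;
# TestModel returns deterministic synthetic output.
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent('openai:gpt-4o', system_prompt='Be concise.')  # placeholder agent

def test_agent_runs_without_llm():
    with agent.override(model=TestModel()):
        result = agent.run_sync('ping')
        assert result.output  # `.data` on older pydantic-ai versions
```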

existential-birds / beagle-pydantic-ai-common-pitfalls exact

Avoid common mistakes and debug issues in PydanticAI agents. Use when encountering errors, unexpected behavior, or when reviewing agent implementations.

Arize-ai / phoenix-phoenix-cli exact

Debug LLM applications using the Phoenix CLI. Fetch traces, analyze errors, review experiments, and inspect datasets. Use when debugging AI/LLM applications, analyzing trace data, working with...

shishiv / gsd-ai-engineer exact

Build production-ready LLM applications, advanced RAG systems, and...

rmyndharis / antigravity-skills-ai-engineer exact

Build production-ready LLM applications, advanced RAG systems, and...

halay08 / fullstack-agent-skills-ai-engineer exact

Build production-ready LLM applications, advanced RAG systems, and...

404kidwiz / agent-skills-backup-ai-engineer exact

Build production-ready LLM applications, advanced RAG systems, and...

Mrc220 / agent-flywheel-clawdbot-skills-and-integrations-de-slopify exact

Remove telltale signs of AI-generated 'slop' writing from README files and documentation. Make your docs sound authentically human.

Kalyanikhandare29 / agent-skills-for-context-engineering-context-degradation exact

This skill should be used when the user asks to "diagnose context problems", "fix lost-in-middle issues", "debug agent failures", "understand context poisoning", or mentions context degradation,...

Mrc220 / agent-flywheel-clawdbot-skills-and-integrations-ru exact

Repo Updater - Multi-repo synchronization with AI-assisted review orchestration. Parallel sync, agent-sweep for dirty repos, ntm integration, git plumbing. 17K LOC Bash CLI.

zechenzhangAGI / ai-research-skills-dspy exact

Build complex AI systems with declarative programming, optimize prompts automatically, and create modular RAG systems and agents with DSPy, Stanford NLP's framework for systematic LM programming.
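
A minimal sketch of DSPy's declarative style, assuming dspy is installed and an OpenAI key is configured; the model id and question are placeholders:

```python
# Declare the task as a signature ("question -> answer") and let the
# ChainOfThought module handle prompting. Model id is an assumed placeholder.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

qa = dspy.ChainOfThought("question -> answer")
print(qa(question="What does declarative LM programming mean?").answer)
```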

zechenzhangAGI / ai-research-skills-langsmith-observability exact

LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets, monitoring production systems, or building...
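
A minimal sketch of LangSmith tracing, assuming the langsmith package is installed and LANGSMITH_TRACING / LANGSMITH_API_KEY are set in the environment; the function and its run name are placeholders:

```python
# @traceable records each call as a run in LangSmith for later inspection.
from langsmith import traceable

@traceable(name="summarize")  # assumed run name
def summarize(text: str) -> str:
    # A real implementation would call an LLM; stubbed for illustration.
    return text[:100]

summarize("LangSmith records this call as a trace.")
```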