Use when debugging errors in multi-agent review workflows.
Expert in LangGraph - the production-grade framework for building stateful, multi-actor AI applications. Covers graph construction, state management, cycles and branches, persistence with...
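To ground what "graph construction, state management, cycles and branches" means in practice, here is a minimal sketch, assuming a toy dict state and node names invented for illustration (not the skill's own example):

```python
# Minimal LangGraph sketch: a write -> review cycle over shared state.
# The state schema and node names are illustrative assumptions.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    draft: str
    approved: bool

def write(state: State) -> dict:
    # Revise the draft; nodes return partial state updates.
    return {"draft": state["draft"] + " revised"}

def review(state: State) -> dict:
    # Approve once the draft has been revised twice.
    return {"approved": state["draft"].count("revised") >= 2}

builder = StateGraph(State)
builder.add_node("write", write)
builder.add_node("review", review)
builder.add_edge(START, "write")
builder.add_edge("write", "review")
# Branch/cycle: loop back to "write" until the reviewer approves.
builder.add_conditional_edges("review", lambda s: END if s["approved"] else "write")
graph = builder.compile()
print(graph.invoke({"draft": "", "approved": False}))
```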
Debug LangChain and LangGraph agents by fetching execution traces from LangSmith Studio. Use when debugging agent behavior, investigating errors, analyzing tool calls, checking memory operations,...
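For instance, pulling the latest errored runs from a project with the LangSmith SDK might look like the sketch below; the project name is a placeholder, and LANGSMITH_API_KEY is assumed to be set in the environment.

```python
# Hedged sketch: list recent errored runs for a project via the LangSmith client.
from langsmith import Client

client = Client()  # reads LANGSMITH_API_KEY from the environment
for run in client.list_runs(project_name="my-agent", error=True, limit=5):
    print(run.id, run.name, run.error)
```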
Generate an optimized AGENTS.md file after analyzing the directory/codebase for context.
Build AI agents with Google ADK Python (Agent Development Kit). Use for multi-agent systems, workflow agents (sequential/parallel/loop), Vertex AI deployment, tool integration, human-in-the-loop.
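A minimal sketch of a single ADK agent, with the model ID and tool chosen purely for illustration:

```python
# Google ADK sketch; model ID and tool are illustrative assumptions.
from google.adk.agents import Agent

def get_weather(city: str) -> dict:
    """Toy tool: return a canned forecast for a city."""
    return {"city": city, "forecast": "sunny"}

root_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",
    instruction="Answer weather questions using the get_weather tool.",
    tools=[get_weather],
)
```

Workflow agents (SequentialAgent, ParallelAgent, LoopAgent) compose agents like this one into pipelines.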
Find awesome Agent Skills
Build Azure AI Foundry agents using the Microsoft Agent Framework Python SDK (agent-framework-azure-ai). Use when creating persistent agents with AzureAIAgentsProvider, using hosted tools (code...
Converts a Refound/Lenny Skill into a high-density, agent-executable Skill Pack (Agent Skills standard). Output must be in English.
Use when running performance-testing reviews in multi-agent review workflows.
You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph.
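As a concrete baseline for that stack, a LangChain 0.1+ tool-calling agent wired through AgentExecutor might look like this sketch (model name and tool are assumptions):

```python
# Hedged sketch of a LangChain 0.1+ tool-calling agent.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # holds intermediate tool calls
])
agent = create_tool_calling_agent(ChatOpenAI(model="gpt-4o-mini"), [add], prompt)
executor = AgentExecutor(agent=agent, tools=[add])
print(executor.invoke({"input": "What is 2 + 3?"}))
```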
This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge,...
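One common shape for the LLM-as-judge piece is a rubric prompt plus a parser for the judge's scores; judge() below is a hypothetical stand-in for any chat-model call.

```python
# LLM-as-judge sketch; judge() is a hypothetical LLM call returning plain text.
RUBRIC = (
    "Score the answer from 1-5 for correctness and 1-5 for helpfulness.\n"
    "Reply exactly as: correctness=<n> helpfulness=<n>"
)

def score(question: str, answer: str, judge) -> dict[str, int]:
    reply = judge(f"{RUBRIC}\n\nQuestion: {question}\nAnswer: {answer}")
    # Parse "correctness=4 helpfulness=5" into {"correctness": 4, "helpfulness": 5}.
    return {k: int(v) for k, v in (pair.split("=") for pair in reply.split())}
```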
Agentic orchestration patterns for long-running tasks. Implements evidence-based delivery and Simon Willison's agent loop. Use when managing multi-step work, coordinating subagents, or...
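The agent loop itself is small; a generic sketch follows, with call_model and run_tool as hypothetical stand-ins rather than any real SDK:

```python
# "An LLM running tools in a loop": generic orchestration skeleton.
def agent_loop(task: str, call_model, run_tool, max_steps: int = 10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)  # {"tool": ..., "args": ...} or {"answer": ...}
        if "answer" in reply:
            return reply["answer"]    # the model declared the task complete
        result = run_tool(reply["tool"], reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent did not finish within max_steps")
```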
Verification discipline for completion claims. Use when about to assert success, claim a fix is complete, report tests passing, or before commits and PRs. Enforces evidence-first workflow.
Use when creating new skills, editing existing skills, or verifying skills work before deployment
Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG,...
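The RAG pipeline at the core of those patterns reduces to retrieve-then-generate; in this sketch, embed() and generate() are hypothetical stand-ins for an embedding model and an LLM call.

```python
# RAG skeleton: rank chunks by similarity, then answer from the top-k context.
def dot(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def rag_answer(question: str, corpus: list[str], embed, generate, k: int = 3) -> str:
    q_vec = embed(question)
    # Retrieve: keep the k chunks most similar to the question.
    top = sorted(corpus, key=lambda c: dot(q_vec, embed(c)), reverse=True)[:k]
    context = "\n".join(top)
    # Augment + generate: ground the answer in the retrieved context.
    return generate(f"Answer using only this context:\n{context}\n\nQ: {question}")
```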