Exa AI neural search for docs, code, GitHub repos, and papers. Triggers on phrases like "search / query / find / look up" (搜索/查询/查找/找一下), "is there any ... docs/examples" (有没有...文档/示例), "how do I implement ..." (如何实现), and "the latest ..." (最新的...). Use instead of WebFetch when you need multiple results or smart filtering.
You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph.
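As a rough illustration of the LangGraph side of that stack, here is a minimal sketch of a one-node state graph; the state shape, node name, and echoed response are invented for the example and are not tied to any particular agent above.

```python
# Minimal LangGraph sketch: a single-node state graph.
# State keys and node names are illustrative only.
from typing import TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    question: str
    answer: str


def respond(state: AgentState) -> dict:
    # A real agent would call an LLM or tools here; this just echoes the question.
    return {"answer": f"You asked: {state['question']}"}


graph = StateGraph(AgentState)
graph.add_node("respond", respond)
graph.set_entry_point("respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "What is LangGraph?", "answer": ""}))
```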
Intelligently routes code reviews between Gemini CLI and Codex CLI based on tech stack, complexity, and change characteristics. Use when you want an automated code review of your current changes.
Use when building distributed apps with Aspire; orchestrating .NET, JavaScript, Python, or polyglot services; when environment variables or service discovery aren't working; when migrating from...
Meta's 7-8B specialized moderation model for filtering LLM inputs and outputs. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, and criminal planning. 94-95% accuracy....
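A minimal sketch of that kind of input filtering, assuming the workflow from the Llama Guard model card on Hugging Face (a gated checkpoint; the model ID and prompt format differ across Llama Guard versions):

```python
# Sketch of prompt moderation with Llama Guard via Transformers.
# Assumes access to the gated meta-llama/LlamaGuard-7b checkpoint; its tokenizer
# ships a chat template that renders the safety-category moderation prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

chat = [{"role": "user", "content": "How do I pick a lock?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
out = model.generate(input_ids=input_ids, max_new_tokens=32)

# The model replies "safe", or "unsafe" plus the violated category code.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```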
Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capabilities with specialized knowledge,...
Deep knowledge expert for the 33GOD agentic pipeline system; understands component relationships and suggests feature implementations based on the actual codebase state
Reddit API with PRAW (Python) and Snoowrap (Node.js)
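For the Python side, a minimal read-only PRAW sketch; the credentials and subreddit are placeholders, not values from this catalog:

```python
# Read-only Reddit access with PRAW; register an app at
# https://www.reddit.com/prefs/apps to obtain real credentials.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="my-script/0.1 by u/your_username",
)

# Fetch the five hottest posts from a subreddit.
for submission in reddit.subreddit("learnpython").hot(limit=5):
    print(submission.score, submission.title)
```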
Analyzes, generates, and enhances CLAUDE.md files for any project type using best practices, modular architecture support, and tech stack customization. Use when setting up new projects, improving...
Use when building MCP servers in TypeScript, Python, or C#; when implementing tools, resources, or prompts; when configuring Streamable HTTP transport; when migrating from SSE; when adding OAuth...
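As a sketch of the Python path, a minimal MCP server using the official Python SDK's FastMCP helper; the server name and tool are made up, and the transport note is an assumption about recent SDK versions:

```python
# Minimal MCP server sketch using the official Python SDK (the `mcp` package).
# Server name and tool are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


if __name__ == "__main__":
    # Defaults to the stdio transport; recent SDK versions also accept
    # transport="streamable-http" (assumption, check your SDK version).
    mcp.run()
```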
Hugging Face Transformers best practices including model loading, tokenization, fine-tuning workflows, and inference optimization. Use when working with transformer models, fine-tuning LLMs,...
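A minimal sketch of model loading, tokenization, and generation with Transformers; the model ID is just a small example checkpoint, not a recommendation from the skill above:

```python
# Load a small causal LM, tokenize a prompt, and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # illustrative; swap in the checkpoint you actually use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Transformers best practices start with", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```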