NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses...
|
Use the Synth AI API end-to-end (SDK + HTTP) for eval + GEPA
Comprehensive expertise for working with Microsoft's GenAIScript framework - a JavaScript/TypeScript-based system for building automatable LLM prompts and AI workflows. Use when creating,...
Guide model fine-tuning processes for customized AI performance
This skill should be used when the user asks to "optimize with SIMBA", "use Bayesian optimization", "optimize agents with custom feedback", mentions "SIMBA optimizer", "mini-batch optimization",...
Large Language and Vision Assistant. Enables visual instruction tuning and image-based conversations. Combines CLIP vision encoder with Vicuna/LLaMA language models. Supports multi-turn image...
Expert guidance for Microsoft AutoGen multi-agent framework development including agent creation, conversations, tool integration, and orchestration patterns.
This skill should be used when working with AssemblyAI’s Speech-to-Text and LLM Gateway APIs, especially for streaming/live transcription, meeting notetakers, and voice agents that need...
|
|
Aggregates and summarizes the latest AI news from multiple sources including AI news websites and web search. Provides concise news briefs with direct links to original articles. Activates when...
Build chat interfaces for querying documents using natural language. Extract information from PDFs, GitHub repositories, emails, and other sources. Use when creating interactive document Q&A...
Implements agents using Deep Agents. Use when building agents with create_deep_agent, configuring backends, defining subagents, adding middleware, or setting up human-in-the-loop workflows.
|
|
Run Codex CLI, Claude Code, OpenCode, or Pi Coding Agent via background process for programmatic control.
Run Codex CLI, Claude Code, OpenCode, or Pi Coding Agent via background process for programmatic control.
Repository packaging for AI/LLM analysis. Capabilities: pack repos into single files, generate AI-friendly context, codebase snapshots, security audit prep, filter/exclude patterns, token...
Vercel and Next.js deployment best practices including server components, edge functions, AI SDK integration, and performance optimization.