Local speech-to-text with the Whisper CLI (no API key).
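As a minimal sketch of the no-API-key workflow, the snippet below shells out to a locally installed openai-whisper CLI from Node; the model choice and file path are assumptions to adapt to your setup.

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Transcribe a local audio file with the locally installed `whisper` CLI.
// No API key involved; model weights are downloaded on first use.
async function transcribe(audioPath: string): Promise<void> {
  const { stdout } = await run("whisper", [
    audioPath,
    "--model", "base",          // smaller models are faster, larger are more accurate
    "--output_format", "txt",   // also writes a .txt transcript next to the input
  ]);
  console.log(stdout);
}

transcribe("meeting.wav").catch(console.error);
```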
Answer questions about the AI SDK and help build AI-powered features. Use when developers: (1) Ask about AI SDK functions like generateText, streamText, ToolLoopAgent, embed, or tools, (2) Want to...
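For orientation, a minimal generateText call; this sketch assumes the Vercel AI SDK packages `ai` and `@ai-sdk/openai` plus an OPENAI_API_KEY in the environment.

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Single-shot text generation; streamText takes the same options but
// returns a stream you can pipe to a response instead of a final string.
const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Summarize the difference between generateText and streamText.",
});

console.log(text);
```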
Implement and maintain integrations with the Nebius Token Factory (tokenfactory.nebius.com) OpenAI-compatible API, including auth + base URL config, chat/completions/embeddings/images, model...
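Because the API is OpenAI-compatible, integration usually reduces to pointing a standard OpenAI client at a different base URL; in the sketch below, the exact base URL and model id are assumptions to verify against the Token Factory docs.

```ts
import OpenAI from "openai";

// Point the standard OpenAI client at the OpenAI-compatible endpoint.
// Base URL and model id below are placeholders; check the provider docs.
const client = new OpenAI({
  baseURL: "https://api.tokenfactory.nebius.com/v1", // assumed endpoint
  apiKey: process.env.NEBIUS_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "meta-llama/Llama-3.3-70B-Instruct", // hypothetical model id
  messages: [{ role: "user", content: "Hello from an OpenAI-compatible client" }],
});

console.log(completion.choices[0].message.content);
```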
Expert in building voice AI applications - from real-time voice agents to voice-enabled apps. Covers OpenAI Realtime API, Vapi for voice agents, Deepgram for transcription, ElevenLabs for...
Use OpenAI's Codex CLI to invoke GPT models from the command line. Use when you need a second AI opinion, want to verify your work, or need OpenAI-specific capabilities.
Leverage OpenAI Codex/GPT models for autonomous code implementation. Triggers: "codex", "use gpt", "gpt-5", "gpt-5.2", "let openai", "full-auto", "用codex", "让gpt实现".
Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. Use when building text-to-speech features, audio narrative generation, podcast...
Implement comprehensive safety guardrails for LLM applications including content moderation (OpenAI Moderation API), jailbreak prevention, prompt injection defense, PII detection, topic...
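As one concrete guardrail, the sketch below screens user input with the OpenAI Moderation API before any downstream LLM call, assuming the openai Node SDK and the omni-moderation-latest model.

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Reject input that the moderation endpoint flags before it ever
// reaches the downstream LLM call.
async function guardInput(userInput: string): Promise<string> {
  const result = await client.moderations.create({
    model: "omni-moderation-latest",
    input: userInput,
  });

  if (result.results[0].flagged) {
    throw new Error("Input rejected by content moderation");
  }
  return userInput;
}
```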
Build AI agents that interact with computers like humans do - viewing screens, moving cursors, clicking buttons, and typing text. Covers Anthropic's Computer Use, OpenAI's Operator/CUA, and...
Capture API responses as test fixtures.
Develop examples for AI SDK functions. Use when creating, running, or modifying examples under examples/ai-functions/src to validate provider support, demonstrate features, or create test fixtures.
OpenAI's conversational AI assistant.
Latest AI models reference - Claude, OpenAI, Gemini, ElevenLabs, Replicate
OpenAI Codex CLI code review with GPT-5.2-Codex, CI/CD integration
Microsoft Teams bots and AI agents - Claude/OpenAI, Adaptive Cards, Graph API
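For the Adaptive Cards piece, a minimal card payload of the kind a Teams bot returns; this is a plain Adaptive Card schema sketch, not tied to a specific bot framework, and the text and URL values are placeholders.

```ts
// A minimal Adaptive Card body; Teams renders this when a bot sends it
// as an attachment with contentType "application/vnd.microsoft.card.adaptive".
const card = {
  type: "AdaptiveCard",
  $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
  version: "1.4",
  body: [
    { type: "TextBlock", text: "Build status", weight: "Bolder", size: "Medium" },
    { type: "TextBlock", text: "All checks passed.", wrap: true },
  ],
  actions: [
    { type: "Action.OpenUrl", title: "View run", url: "https://example.com/run/123" },
  ],
};
```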
Add LLM tracing and observability to your code. Use when instrumenting functions, integrating frameworks (LangChain, OpenAI, etc.), or adding custom spans.
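As a sketch of a custom span, the snippet below wraps an LLM call with the OpenTelemetry Node API; the span and attribute names are illustrative, and callModel stands in for whatever client call you already have.

```ts
import { trace, SpanStatusCode } from "@opentelemetry/api";

// Stand-in for your existing LLM client call (hypothetical helper).
declare function callModel(prompt: string): Promise<string>;

const tracer = trace.getTracer("llm-app");

// Wrap an LLM call in a span and record model/size metadata as attributes.
async function tracedCompletion(prompt: string): Promise<string> {
  return tracer.startActiveSpan("llm.completion", async (span) => {
    try {
      span.setAttribute("llm.model", "gpt-4o-mini"); // illustrative attribute names
      span.setAttribute("llm.prompt_chars", prompt.length);
      const text = await callModel(prompt);
      span.setAttribute("llm.completion_chars", text.length);
      return text;
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```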