Expert iOS App Store submission and approval system. 9 specialized agents providing senior App Review Team-level expertise across compliance, design, privacy, monetization, metadata, technical...
Open-source AI observability platform for LLM tracing, evaluation, and monitoring. Use when debugging LLM applications with detailed traces, running evaluations on datasets, or monitoring...
Create an initial PyTM-based threat model of your system when asked to perform threat modeling.
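As a minimal sketch of what such a starter model can look like with PyTM (the element names and dataflows here are illustrative placeholders, not part of any particular system):

```python
from pytm import TM, Actor, Server, Dataflow, Boundary

# Define the threat model and an example trust boundary.
tm = TM("Sample web app")
tm.description = "Initial threat model sketch"

internet = Boundary("Internet")

# Illustrative elements: an external user talking to a web server.
user = Actor("User")
user.inBoundary = internet
web = Server("Web Server")

# Dataflows are what PyTM evaluates against its threat rules.
Dataflow(user, web, "HTTPS request")
Dataflow(web, user, "HTTPS response")

# Runs the analysis; pytm can then emit diagrams and threat reports.
tm.process()
```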
Compress and simplify prompts, preserving meaning while reducing context usage.
Privacy-first social features for recovery apps - sponsors, groups, messaging, friend connections. Use for sponsor/sponsee systems, meeting-based groups, peer support, safe messaging. Activate on...
Manage protocol/standard specifications that define what a system must do. Use to create, import, or update the contract that TASKs implement against.
Use when building RAG systems, vector databases, or knowledge-grounded AI applications requiring semantic search, document retrieval, or context augmentation.
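A rough illustration of the retrieval core of such a system follows; the `embed` function below is a stand-in assumption (a real system would call a sentence encoder), so only the structure, not the semantics, is meaningful here:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; swap in a real semantic encoder for actual use."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

# A tiny in-memory "vector database": documents and their embeddings.
docs = [
    "Paris is the capital of France.",
    "Mitochondria produce most of a cell's ATP.",
    "Rust guarantees memory safety without garbage collection.",
]
index = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    scores = index @ q  # cosine similarity, since all vectors are unit-norm
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

# Retrieved passages would then be prepended to the LLM prompt
# (the "context augmentation" step of RAG).
print(retrieve("What is the capital of France?"))
```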
Build type-safe LLM applications with DSPy.rb - Ruby's programmatic prompt framework with signatures, modules, agents, and optimization
You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph.
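A minimal sketch of a tool-using agent built on LangGraph's prebuilt ReAct agent; the model name and the `add` tool are illustrative assumptions, and package names should be checked against the installed LangChain/LangGraph versions:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# Illustrative model choice; any chat model supported by LangChain works.
model = ChatOpenAI(model="gpt-4o-mini")
agent = create_react_agent(model, [add])

# The agent decides when to call the tool, then answers in natural language.
result = agent.invoke({"messages": [("user", "What is 2 + 3?")]})
print(result["messages"][-1].content)
```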
Build evaluation frameworks for agent systems. Use when testing agent performance, validating context engineering choices, or measuring improvements over time.
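A bare-bones version of such a harness might look like the following; the agent callable, the dataset, and the exact-match scorer are all assumptions for illustration, chosen to show the run-score-report loop rather than any specific framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    prompt: str
    expected: str

def evaluate(agent: Callable[[str], str], cases: list[Case]) -> dict:
    """Run the agent over a fixed dataset and report an exact-match score.

    Keeping per-case results lets you diff runs and measure improvements
    over time as context engineering choices change.
    """
    results = []
    for case in cases:
        output = agent(case.prompt)
        results.append({
            "prompt": case.prompt,
            "expected": case.expected,
            "output": output,
            "pass": output.strip() == case.expected.strip(),
        })
    passed = sum(r["pass"] for r in results)
    return {"score": passed / len(results), "results": results}

# Example with a trivial stand-in agent.
report = evaluate(lambda p: "4" if "2 + 2" in p else "?",
                  [Case("What is 2 + 2?", "4")])
print(report["score"])  # 1.0
```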
Create and configure Claude Code sub-agents with custom prompts, tools, and models
Interact with running Clojure REPLs via nREPL. Evaluate Clojure code, query namespaces, test functions, and debug live Clojure systems. Use when working with .clj/.cljc/.cljs files and a REPL is available.
Windows 11 system management