Implement OpenTelemetry (OTEL) observability - Collector configuration, Kubernetes deployment, traces/metrics/logs pipelines, instrumentation, and troubleshooting. Use when working with OTEL...
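For orientation, a minimal Python instrumentation sketch of the kind this skill covers, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp packages and a Collector listening on the default OTLP/gRPC port 4317; the service name and span attribute are placeholders.

```python
# Minimal manual tracing: spans are batched and exported over OTLP/gRPC
# to a Collector assumed to be reachable at localhost:4317.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)  # placeholder attribute
```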
Expert Core Data guidance (iOS/macOS): stack setup, fetch requests & NSFetchedResultsController, saving/merge conflicts, threading & Swift Concurrency, batch operations & persistent history,...
Use this skill when working with Apple Containers (lightweight Linux VMs) as a native Docker replacement on macOS. This includes building container images, running containers, managing container...
Capture and review lessons learned from coding sessions. Use to record insights, read past lessons, and improve over time.
Expert guidance for working with Hugging Face Transformers library for NLP, computer vision, and multimodal AI tasks.
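As a quick starting point, a sketch using the library's high-level pipeline API (assumes the transformers package is installed and a default model can be downloaded from the Hub):

```python
# Text classification with the high-level pipeline API; with no model name
# given, Transformers downloads a default sentiment model from the Hub.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes multimodal inference straightforward."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```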
Expert guidance for GraalVM native image development with Java frameworks, build optimization, and high-performance application deployment.
Performance optimization specialist for improving application speed and efficiency. Use when investigating performance issues or optimizing code.
Local hybrid search for markdown notes and docs. Use when searching notes, finding related content, or retrieving documents from indexed collections.
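To illustrate the hybrid idea only (not this skill's actual index or scoring), a sketch that fuses a lexical BM25 ranking with an embedding ranking via reciprocal rank fusion; the rank_bm25 and sentence-transformers packages, the model name, and the constant k=60 are assumptions.

```python
# Illustrative hybrid retrieval: rank documents lexically (BM25) and
# semantically (embeddings), then fuse the two rankings with RRF.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = ["release notes for the search index", "meeting notes on hybrid retrieval"]
query = "hybrid retrieval notes"

bm25_scores = BM25Okapi([d.split() for d in docs]).get_scores(query.split())
lexical_rank = sorted(range(len(docs)), key=lambda i: -bm25_scores[i])

model = SentenceTransformer("all-MiniLM-L6-v2")
sims = util.cos_sim(model.encode(query), model.encode(docs))[0]
vector_rank = sorted(range(len(docs)), key=lambda i: -float(sims[i]))

k = 60  # common RRF damping constant
fused = {
    i: 1 / (k + lexical_rank.index(i) + 1) + 1 / (k + vector_rank.index(i) + 1)
    for i in range(len(docs))
}
print(docs[max(fused, key=fused.get)])
```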
Use for every coding task. Enforce strict TDD workflow: activate Serena, investigate first, clarify+confirm requirements, write REQUIREMENTS.md (As Is/To Be/Requirements+AC/Testing...
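The red phase of the TDD loop this skill enforces, sketched with pytest; slugify and myapp.text are placeholders for a unit under test that intentionally does not exist yet.

```python
# Red phase: the test is written and run before any implementation exists,
# so it must fail first; only then is the production code written.
def test_slugify_replaces_spaces_with_hyphens():
    from myapp.text import slugify  # placeholder module; not implemented yet
    assert slugify("Hello World") == "hello-world"
```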
Analyzes and optimizes application performance across frontend, backend, and database layers. Use when diagnosing slowness, improving load times, optimizing queries, reducing bundle size, or when...
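A small backend-side example of the measurement step such an analysis starts from, using cProfile from the standard library; handle_request stands in for a real code path.

```python
# Profile a suspected hot path and print the ten most expensive calls
# by cumulative time.
import cProfile
import pstats

def handle_request(n: int = 50_000) -> int:
    # Stand-in for an application code path worth measuring.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```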
Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than...
Verify code correctness before claiming done or committing. Run a 6-dimension checklist: requirements coverage, concurrency safety, error handling, resource management, boundary conditions, and...
Guides architectural decisions for Deep Agents applications. Use when deciding between Deep Agents vs alternatives, choosing backend strategies, designing subagent systems, or selecting middleware...
Deep agentic-flow@alpha integration implementing ADR-001. Eliminates 10,000+ duplicate lines by building claude-flow as a specialized extension rather than a parallel implementation.
Get snow conditions, forecasts, and ski reports for any mountain resort worldwide. Use when asked about snow, powder, ski conditions, or mountain weather. Supports 1000+ resorts via OpenSnow....