2940 results (24.7ms) page 50 / 147
oaustegard / claude-skills-reviewing-ai-papers exact

Analyze AI/ML technical content (papers, articles, blog posts) and extract actionable insights filtered through an enterprise AI engineering lens. Use when the user provides a URL/document for AI/ML...

YuniorGlez / gemini-elite-core-postgres-tuning exact

Senior Database Optimizer for PostgreSQL 17/18+, specialized in Asynchronous I/O (AIO), Query Plan Forensics, and Vector Index optimization.
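As a hedged illustration of the kind of tuning this skill targets, the Python sketch below uses psycopg to pull a query plan for forensics and to build a pgvector HNSW index; the DSN, table, and column names are hypothetical, and the index assumes the pgvector extension is installed.

```python
# Sketch only: inspect a query plan and build a vector index with pgvector.
# The DSN, the "items" table, and its columns are illustrative assumptions.
import psycopg

with psycopg.connect("postgresql://localhost/appdb") as conn:
    with conn.cursor() as cur:
        # Query plan forensics: EXPLAIN (ANALYZE, BUFFERS) reports actual
        # timings plus buffer hits/reads for a suspect query.
        cur.execute(
            "EXPLAIN (ANALYZE, BUFFERS) "
            "SELECT id FROM items WHERE category = %s "
            "ORDER BY created_at DESC LIMIT 20",
            ("books",),
        )
        for (line,) in cur.fetchall():
            print(line)

        # Vector index optimization: an HNSW index over a pgvector column
        # trades build time for fast approximate nearest-neighbour search.
        cur.execute(
            "CREATE INDEX IF NOT EXISTS items_embedding_hnsw "
            "ON items USING hnsw (embedding vector_cosine_ops)"
        )
    conn.commit()
```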

emzod / speak-turbo exact

Give your agent the ability to speak to you in real time. Talk to your Claude! Ultra-fast TTS, text-to-speech, voice synthesis, audio output with ~90ms latency. 8 built-in voices for instant voice...

actionbook / rust-skills-domain-embedded exact

Use when developing embedded/no_std Rust. Keywords: embedded, no_std, microcontroller, MCU, ARM, RISC-V, bare metal, firmware, HAL, PAC, RTIC, embassy, interrupt, DMA, peripheral, GPIO, SPI, I2C,...

mindrally / skills-micronaut exact

Expert guidance for Micronaut framework development with compile-time dependency injection, GraalVM native builds, and cloud-native microservices

julianobarbosa / claude-code-skills-opentelemetry exact

Implement OpenTelemetry (OTEL) observability - Collector configuration, Kubernetes deployment, traces/metrics/logs pipelines, instrumentation, and troubleshooting. Use when working with OTEL...
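For orientation, here is a minimal Python tracing sketch with the opentelemetry-sdk: it registers a tracer provider, attaches a console exporter, and records one span. The service and span names are illustrative, and a real deployment would export to an OTEL Collector via OTLP rather than the console.

```python
# Minimal OTEL tracing setup: provider + console exporter + one span.
# Service and span names are illustrative; swap ConsoleSpanExporter for an
# OTLP exporter when shipping to a Collector.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "12345")
```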

avdlee / core-data-agent-skill-core-data-expert exact

Expert Core Data guidance (iOS/macOS): stack setup, fetch requests & NSFetchedResultsController, saving/merge conflicts, threading & Swift Concurrency, batch operations & persistent history,...

olafgeibig / skills-container-use exact

Use this skill when working with Apple Containers (lightweight Linux VMs) as a native Docker replacement on macOS. This includes building container images, running containers, managing container...

johnlindquist / claude-lessons exact

Capture and review lessons learned from coding sessions. Use to record insights, read past lessons, and improve over time.

mindrally / skills-transformers-huggingface exact

Expert guidance for working with Hugging Face Transformers library for NLP, computer vision, and multimodal AI tasks.
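As a quick taste of the library this skill covers, the Transformers pipeline API wraps model and tokenizer loading behind a single call; the example below relies on the pipeline's default model, which is an assumption rather than a recommendation from the skill.

```python
# One-call inference with the Transformers pipeline API.
# "sentiment-analysis" is a real pipeline task; the input text is illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model on first use
print(classifier("The new release fixed every regression we reported."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```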

mindrally / skills-graalvm exact

Expert guidance for GraalVM native image development with Java frameworks, build optimization, and high-performance application deployment

Charon-Fan / agent-playbook-performance-engineer exact

Performance optimization specialist for improving application speed and efficiency. Use when investigating performance issues or optimizing code.

levineam / qmd-skill exact

Local hybrid search for markdown notes and docs. Use when searching notes, finding related content, or retrieving documents from indexed collections.

ninehills / skills-qmd exact

Local hybrid search for markdown notes and docs. Use when searching notes, finding related content, or retrieving documents from indexed collections.
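Both qmd entries above describe the same local hybrid search skill. A common way to merge a keyword ranking with a vector ranking is reciprocal rank fusion, sketched generically below; the function and document IDs are illustrative and are not qmd's actual API.

```python
# Generic reciprocal rank fusion (RRF) sketch for hybrid search:
# merge a keyword (BM25-style) ranking and a vector ranking into one list.
# Document IDs below are illustrative; this is not qmd's API.
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked ID lists; earlier ranks contribute larger scores."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["notes/search.md", "notes/indexing.md", "notes/tags.md"]
vector_hits = ["notes/indexing.md", "notes/embeddings.md", "notes/search.md"]
print(rrf([keyword_hits, vector_hits]))
```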

Shelpuk-AI-Technology-Consulting / agent-skill-tdd-tdd exact

Use for every coding task. Enforce a strict TDD workflow: activate Serena, investigate first, clarify+confirm requirements, write REQUIREMENTS.md (As Is/To Be/Requirements+AC/Testing...

Putra213 / claude-workflow-v2-optimizing-performance exact

Analyzes and optimizes application performance across frontend, backend, and database layers. Use when diagnosing slowness, improving load times, optimizing queries, reducing bundle size, or when...

CloudAI-X / claude-workflow-v2-optimizing-performance exact

Analyzes and optimizes application performance across frontend, backend, and database layers. Use when diagnosing slowness, improving load times, optimizing queries, reducing bundle size, or when...

ovachiever / droid-tings-tensorrt-llm exact

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than...

zechenzhangAGI / ai-research-skills-tensorrt-llm exact

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than...
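The two entries above point at the same TensorRT-LLM skill. As a rough sketch, recent tensorrt_llm releases expose a high-level LLM API along the lines below; the model name is an assumption, and older releases require an explicit engine-build step before inference.

```python
# Rough sketch of TensorRT-LLM's high-level LLM API (recent releases).
# The model name is an assumption; older versions require building a
# TensorRT engine explicitly before running inference.
from tensorrt_llm import LLM

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
for output in llm.generate(["Summarize what a TensorRT engine is."]):
    print(output.outputs[0].text)
```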