Performance and load testing expertise including k6, Locust, JMeter, Gatling, Artillery, API load testing, database query optimization, benchmarking strategies, profiling techniques,...
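A minimal sketch of the kind of load test this covers, written against k6's scripting API (plain JavaScript that also type-checks as TypeScript); the target URL, ramp stages, and latency threshold are illustrative assumptions.

```typescript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 20 }, // ramp up to 20 virtual users
    { duration: '1m', target: 20 },  // hold steady load
    { duration: '15s', target: 0 },  // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail the run if p95 latency exceeds 500ms
  },
};

export default function () {
  // Placeholder endpoint; swap in the service under test.
  const res = http.get('https://api.example.com/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```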
Expert guidance for GRPO/RL fine-tuning with TRL for reasoning and task-specific model training
Autonomous coding agent that breaks features into small user stories and implements them iteratively with fresh context per iteration. Use when asked to: build a feature autonomously, create a...
Integration patterns for Mapbox Maps SDK on Android with Kotlin, Jetpack Compose, lifecycle management, and mobile optimization best practices.
Build accessible, responsive, and performant frontend components with design system best practices, modern CSS, and framework-agnostic patterns.
Build production-ready Node.js backend services with Express/Fastify, implementing middleware patterns, error handling, authentication, database integration, and API design best practices. Use...
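A hedged sketch of the middleware and centralized error-handling pattern this entry names, assuming Express with TypeScript; the `/users/:id` route and `findUser` helper are hypothetical placeholders.

```typescript
import express, { Request, Response, NextFunction } from 'express';

const app = express();
app.use(express.json());

// Placeholder data-access helper; a real service would query a database here.
async function findUser(id: string): Promise<{ id: string; name: string } | null> {
  return id === 'missing' ? null : { id, name: 'example' };
}

// Async route handler that forwards failures to the central error handler.
app.get('/users/:id', async (req: Request, res: Response, next: NextFunction) => {
  try {
    const user = await findUser(req.params.id);
    if (!user) return res.status(404).json({ error: 'Not found' });
    res.json(user);
  } catch (err) {
    next(err);
  }
});

// Express identifies error-handling middleware by its four-argument signature.
app.use((err: Error, _req: Request, res: Response, _next: NextFunction) => {
  console.error(err);
  res.status(500).json({ error: 'Internal server error' });
});

app.listen(3000);
```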
Emit a message or event to an external system with policy enforcement and approval gates. Use when publishing messages, calling APIs, sending notifications, or triggering external workflows....
React Query and Zustand patterns for state management. Use when implementing data fetching, caching, mutations, or client-side state. Triggers on tasks involving useQuery, useMutation, Zustand...
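A small sketch of the split this entry describes, assuming TanStack Query v5's object syntax and Zustand's `create`; the todo endpoint, query keys, and store shape are illustrative assumptions.

```typescript
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
import { create } from 'zustand';

// Client-only UI state (the search filter) lives in a Zustand store.
type FilterState = { search: string; setSearch: (value: string) => void };
export const useFilterStore = create<FilterState>()((set) => ({
  search: '',
  setSearch: (search) => set({ search }),
}));

// Server state is fetched and cached by React Query, keyed by the filter value.
export function useTodos(search: string) {
  return useQuery({
    queryKey: ['todos', search],
    queryFn: () =>
      fetch(`/api/todos?q=${encodeURIComponent(search)}`).then((r) => r.json()),
  });
}

// Mutations invalidate the cached query so lists refetch after a write.
export function useAddTodo() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: (title: string) =>
      fetch('/api/todos', { method: 'POST', body: JSON.stringify({ title }) }),
    onSuccess: () => queryClient.invalidateQueries({ queryKey: ['todos'] }),
  });
}
```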
Comprehensive skill for Sentry error monitoring and performance tracking. Use when Claude needs to (1) Configure Sentry SDKs for error tracking and performance monitoring, (2) Manage releases,...
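A minimal configuration sketch for point (1), assuming the `@sentry/node` SDK; the environment variables, sample rate, and `riskyOperation` helper are placeholders.

```typescript
import * as Sentry from '@sentry/node';

// Initialize as early as possible in the service entry point.
Sentry.init({
  dsn: process.env.SENTRY_DSN,        // project DSN, supplied via environment
  environment: process.env.NODE_ENV,  // tags events with the deployment environment
  release: process.env.APP_RELEASE,   // ties errors to a release for regression tracking
  tracesSampleRate: 0.1,              // samples 10% of transactions for performance data
});

// Explicit capture for errors that are handled rather than left to crash the process.
function riskyOperation(): void {
  throw new Error('example failure');
}

try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err);
}
```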
Manage team inbox and customer communication with Front's collaborative platform.
Express/Hono with Supabase and Drizzle ORM
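A minimal sketch of how these pieces typically fit together, assuming Drizzle's `postgres-js` driver pointed at a Supabase Postgres connection string; the `users` table and route are illustrative.

```typescript
import { Hono } from 'hono';
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import { pgTable, serial, text } from 'drizzle-orm/pg-core';

// Schema definition; Drizzle infers row types from it.
const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: text('email').notNull(),
});

// Supabase exposes a standard Postgres connection string.
const client = postgres(process.env.DATABASE_URL!);
const db = drizzle(client);

const app = new Hono();

app.get('/users', async (c) => {
  const rows = await db.select().from(users);
  return c.json(rows);
});

export default app;
```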
Configure popular MCP servers for enhanced agent capabilities
Fine-tune LLMs using reinforcement learning with TRL - SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when you need RLHF,...