Create a representation of the current world state for a domain. Use when modeling system state, building world models, capturing entity relationships, or establishing baseline snapshots.
Guide a safe git rebase of the current branch onto a target branch, including conflict triage and resolution steps. Use when asked to rebase, update a branch, or resolve rebase conflicts.
Advanced React Flow patterns for complex use cases. Use when implementing sub-flows, custom connection lines, programmatic layouts, drag-and-drop, undo/redo, or complex state synchronization.
Reviews Deep Agents code for bugs, anti-patterns, and improvements. Use when reviewing code that uses create_deep_agent, backends, subagents, middleware, or human-in-the-loop patterns. Catches...
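For orientation, a minimal sketch of the kind of code this skill reviews, assuming the deepagents package's create_deep_agent entry point and a LangGraph-style invoke; the tool and instructions here are hypothetical:

```python
# Hypothetical example of code under review; assumes the deepagents
# package's create_deep_agent API (signature may vary by version).
from deepagents import create_deep_agent

def fetch_weather(city: str) -> str:
    """Return a canned forecast for a city (illustrative example tool)."""
    return f"Sunny in {city}"

agent = create_deep_agent(
    tools=[fetch_weather],
    instructions="You are a weather assistant. Use tools when asked.",
)

# Deep agents compile to a LangGraph graph, invoked with a messages dict.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Weather in Oslo?"}]}
)
```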
Convert data between formats, schemas, or representations with explicit loss accounting and validation. Use when reformatting data, mapping between schemas, normalizing inputs, or translating structures.
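To illustrate loss accounting, a minimal sketch (all names hypothetical): every source field that cannot be mapped is recorded rather than silently dropped, so the conversion can be validated afterward.

```python
# Hypothetical sketch: map records between two schemas while logging
# every field that has no target, so no data loss goes unaccounted.
from dataclasses import dataclass, field

@dataclass
class ConversionResult:
    output: dict
    dropped_fields: list[str] = field(default_factory=list)  # loss accounting

def convert(record: dict, mapping: dict[str, str]) -> ConversionResult:
    """Map source keys to target keys; record anything left behind."""
    result = ConversionResult(output={})
    for key, value in record.items():
        if key in mapping:
            result.output[mapping[key]] = value
        else:
            result.dropped_fields.append(key)  # explicit, not silent, loss
    return result

res = convert({"name": "Ada", "dob": "1815-12-10"}, {"name": "full_name"})
assert res.output == {"full_name": "Ada"}
assert res.dropped_fields == ["dob"]
```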
Watch and report the current state of a target system, process, or entity. Use when monitoring status, inspecting live systems, checking current conditions, or observing runtime behavior.
Reviews Prometheus instrumentation in Go code for proper metric types, labels, and patterns. Use when reviewing code with prometheus/client_golang metrics.
Write stable learnings, decisions, and patterns to durable storage like CLAUDE.md or knowledge files. Use when saving project decisions, recording patterns, or updating long-term memory.
Mandatory verification steps for all code reviews to reduce false positives. Load this skill before reporting ANY code review findings.
Reviews SwiftData code for model design, queries, concurrency, and migrations. Use when reviewing .swift files with import SwiftData, @Model, @Query, @ModelActor, or VersionedSchema.
Reviews Swift code for concurrency safety, error handling, memory management, and common mistakes. Use when reviewing .swift files for async/await patterns, actor isolation, Sendable conformance,...
Create or update root and nested AGENTS.md files that document scoped conventions, monorepo module maps, cross-domain workflows, and (optionally) per-module feature maps (feature -> paths,...
Process external code review feedback with technical rigor. Use when receiving feedback from another LLM, human reviewer, or CI tool. Verifies claims before implementing, tracks disposition.
Register and implement PydanticAI tools with proper context handling, type annotations, and docstrings. Use when adding tool capabilities to agents, implementing function calling, or creating...
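A minimal sketch of such a registration, using PydanticAI's documented Agent and RunContext types; the dependency type and tool body are illustrative:

```python
# Tool registration with context handling, type annotations, and a
# docstring the model can read. Deps and the tool are hypothetical.
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    user_id: int  # illustrative dependency passed through RunContext

agent = Agent("openai:gpt-4o", deps_type=Deps)

@agent.tool
def greet_user(ctx: RunContext[Deps], greeting: str) -> str:
    """Return a greeting for the current user.

    Args:
        greeting: Salutation to prepend, e.g. "Hello".
    """
    return f"{greeting}, user {ctx.deps.user_id}"
```

Tools that need no context can use the @agent.tool_plain decorator instead, dropping the RunContext parameter.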
Reviews Combine framework code for memory leaks, operator misuse, and error handling. Use when reviewing code with import Combine, AnyPublisher, @Published, PassthroughSubject, or CurrentValueSubject.
Create an executable plan with steps, dependencies, verification criteria, checkpoints, and rollback strategies. Use when preparing changes, designing workflows, or structuring multi-step...
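One possible shape for such a plan, as a hypothetical sketch; the field names are assumptions, not a fixed schema:

```python
# Hypothetical plan structure: each step carries its dependencies, a
# verification criterion, and a rollback note.
from dataclasses import dataclass, field

@dataclass
class Step:
    id: str
    action: str
    depends_on: list[str] = field(default_factory=list)
    verify: str = ""    # how to check the step succeeded
    rollback: str = ""  # how to undo it if verification fails

plan = [
    Step("migrate", "Run schema migration",
         verify="migration version matches head",
         rollback="downgrade one revision"),
    Step("deploy", "Roll out new service version", depends_on=["migrate"],
         verify="health endpoint returns 200",
         rollback="redeploy previous tag"),
]
```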
Perform 12-Factor App compliance analysis on any codebase. Use when evaluating application architecture, auditing SaaS applications, or reviewing cloud-native applications against the original...
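As a taste of what one automated check in such an audit could look like, a hypothetical sketch for Factor III (store config in the environment); the heuristic and function name are illustrative, not part of the methodology:

```python
# Hypothetical Factor III check: flag credentials assigned as string
# literals instead of being read from the environment.
import re

def check_config_factor(source: str) -> list[str]:
    """Return findings for secrets that bypass environment variables."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r"(password|api_key|secret)\s*=\s*['\"]", line, re.IGNORECASE):
            findings.append(f"line {lineno}: hardcoded credential, read it from the environment")
    return findings

assert check_config_factor('API_KEY = "abc123"')                  # flagged
assert not check_config_factor('API_KEY = os.environ["API_KEY"]') # clean
```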
Request clarification when input is ambiguous. Use when a user request has missing parameters, conflicting interpretations, or insufficient constraints for reliable execution.
Anchor claims to evidence from authoritative sources. Use when validating assertions, establishing provenance, verifying facts, or ensuring claims are supported by evidence.
Test PydanticAI agents using TestModel, FunctionModel, VCR cassettes, and inline snapshots. Use when writing unit tests, mocking LLM responses, or recording API interactions.
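A minimal sketch of a TestModel-based unit test, assuming PydanticAI's documented override and TestModel APIs; the agent and assertion are illustrative:

```python
# TestModel generates structured stub responses, so the test runs
# without network access or API keys.
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent("openai:gpt-4o", system_prompt="Be terse.")

def test_agent_runs_without_network():
    with agent.override(model=TestModel()):
        result = agent.run_sync("ping")
    # Recent releases expose result.output (older ones used result.data).
    assert result.output
```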