zebbern / claude-code-guide-aws-penetration-testing exact

This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalation", "S3 bucket testing", "metadata...
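
For a sense of what "enumerate IAM" involves in practice, here is a minimal read-only sketch with boto3 (illustrative, not taken from the skill itself; it assumes AWS credentials are already configured in the environment):

```python
# Read-only IAM enumeration sketch with boto3; assumes configured AWS
# credentials with iam:ListUsers and iam:ListAttachedUserPolicies permissions.
import boto3

iam = boto3.client("iam")

# List every user and the managed policies attached to each (paginated).
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
        print(name, [p["PolicyArn"] for p in attached])
```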

zebbern / claude-code-guide-wireshark-network-traffic-analysis exact

This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow TCP/UDP streams", "detect network...
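
As an illustration of PCAP filtering outside the Wireshark GUI, pyshark (a Python wrapper around tshark) can apply the same display filters programmatically; the capture file and filter below are hypothetical:

```python
# PCAP filtering sketch with pyshark; assumes tshark is installed and a
# capture.pcap file exists (both hypothetical here).
import pyshark

cap = pyshark.FileCapture("capture.pcap", display_filter="tcp.port == 443")
for pkt in cap:
    # highest_layer names the top protocol layer tshark decoded
    print(pkt.sniff_time, pkt.highest_layer)
cap.close()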

Arize-ai / phoenix-phoenix-evals exact

Build and run evaluators for AI/LLM applications using Phoenix.
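
For context, a relevance-classification run with phoenix.evals typically looks like the sketch below (assuming the arize-phoenix-evals package and an OpenAI API key in the environment; the dataframe contents are illustrative):

```python
# Minimal Phoenix evals sketch: classify query/document pairs as relevant
# or not, using an LLM judge. Assumes OPENAI_API_KEY is set.
import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    llm_classify,
)

# Each row pairs a query ("input") with a retrieved document ("reference").
df = pd.DataFrame(
    {
        "input": ["What is Phoenix?"],
        "reference": ["Phoenix is an open-source AI observability platform."],
    }
)

rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())  # e.g. ["relevant", "unrelated"]
results = llm_classify(
    dataframe=df,
    model=OpenAIModel(model="gpt-4o-mini"),  # judge model; swap for your own
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=rails,
    provide_explanation=True,
)
print(results[["label", "explanation"]])
```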

mikeyobrien / ralph-orchestrator-release-bump exact

Use when bumping the ralph-orchestrator version for a new release, after fixes are committed and ready to publish

mikeyobrien / ralph-orchestrator-ralph-memories exact

Use when discovering codebase patterns, making architectural decisions, solving recurring problems, or learning project-specific context that should persist across sessions

mikeyobrien / ralph-orchestrator-pdd exact

This SOP guides you through transforming a rough idea into a detailed design document with an implementation plan and todo list. It follows the Prompt-Driven Development methodology...

mikeyobrien / ralph-orchestrator-code-assist exact

This SOP guides the implementation of code tasks using test-driven development principles, following a structured Explore, Plan, Code, Commit workflow. It balances automation with user...

mikeyobrien / ralph-orchestrator-tui-validate exact

Validates Terminal User Interface (TUI) output using freeze for screenshot capture and LLM-as-judge for semantic validation. Supports both visual (PNG/SVG) and text-based validation modes.

mikeyobrien / ralph-orchestrator-evaluate-presets exact

Use when testing Ralph's hat collection presets, validating preset configurations, or auditing the preset library for bugs and UX issues.

mikeyobrien / ralph-orchestrator-code-task-generator exact

This SOP generates structured code task files from rough descriptions, ideas, or PDD implementation plans. It automatically detects the input type and creates properly formatted code task files...

mikeyobrien / ralph-orchestrator-create-hat-collection exact

Generates new Ralph hat collection presets through guided conversation. Asks clarifying questions, validates against schema constraints, and outputs production-ready YAML files.

mikeyobrien / ralph-orchestrator-ralph-tools exact

Use when managing runtime tasks or memories during Ralph orchestration runs

mikeyobrien / ralph-orchestrator-pr-demo exact

Use when creating animated demos (GIFs) for pull requests or documentation. Covers terminal recording with asciinema and conversion to GIF/SVG for GitHub embedding.

mikeyobrien / ralph-orchestrator-find-code-tasks exact

Lists all code tasks in the repository with their status, dates, and metadata. Useful for getting an overview of pending work or finding specific tasks.

Arize-ai / phoenix-phoenix-tracing exact

OpenInference semantic conventions and instrumentation for Phoenix AI observability. Use when implementing LLM tracing, creating custom spans, or deploying to production.
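
As a point of reference, Phoenix tracing setup with a custom OpenInference-tagged span looks roughly like this sketch (assuming the arize-phoenix-otel package and a Phoenix collector at its default local endpoint; the project and span names are hypothetical):

```python
# Phoenix tracing sketch: register a tracer provider, then emit a custom span
# tagged with OpenInference semantic-convention attributes.
from phoenix.otel import register

tracer_provider = register(project_name="my-app")  # hypothetical project name
tracer = tracer_provider.get_tracer(__name__)

with tracer.start_as_current_span("lookup-docs") as span:
    # "openinference.span.kind" tells Phoenix how to render the span
    span.set_attribute("openinference.span.kind", "RETRIEVER")
    span.set_attribute("input.value", "What is Phoenix?")
```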

RefoundAI / lenny-skills-vibe-coding exact

Help users build software using AI coding tools. Use when someone is using AI to generate code, building prototypes without deep technical skills, or exploring how non-engineers can create...

ngxtm / devkit-foundry-sdk-python exact

Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). Use when working with Foundry project clients, creating versioned agents with PromptAgentDefinition, running...

existential-birds / beagle-pydantic-ai-testing exact

Test PydanticAI agents using TestModel, FunctionModel, VCR cassettes, and inline snapshots. Use when writing unit tests, mocking LLM responses, or recording API interactions.
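
To illustrate the TestModel pattern the description mentions, a unit test can swap the real model out so no LLM call is made (a minimal sketch; the agent and test are illustrative):

```python
# Unit-testing a PydanticAI agent with TestModel; no network call is made.
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent("openai:gpt-4o", system_prompt="Reply briefly.")

def test_agent_runs_without_llm():
    # override() swaps the real model for TestModel inside the block
    with agent.override(model=TestModel()):
        result = agent.run_sync("hello")
    # TestModel returns generated placeholder output (.data on older releases)
    assert result.output
```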

existential-birds / beagle-pydantic-ai-common-pitfalls exact

Avoid common mistakes and debug issues in PydanticAI agents. Use when encountering errors, unexpected behavior, or when reviewing agent implementations.