Comprehensive code review guidelines for ensuring code quality, security, and maintainability. Use when reviewing pull requests, refactoring code, or enforcing best practices.
Validate startup ideas using Hexa's Opportunity Memo framework and Perceived Created Value (PCV) methodology. Assess problem-solution fit, market opportunity, and determine if an idea is worth pursuing.
Guide for continuous improvement, error proofing, and standardization. Use this skill when the user wants to improve code quality, refactor, or discuss process improvements.
Core development standards, patterns, and best practices for React and React Native projects. Use this when users need guidance on code quality (Biome, TypeScript), testing (Jest), project...
Deep-dive data profiling for a specific table. Use when the user asks to profile a table, wants statistics about a dataset, asks about data quality, or needs to understand a table's structure and...
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or...
Quick Mac system health check with battery, RAM, and CPU status and actionable recommendations. Use when the user mentions "mac health", "mac status", "mac performance", "battery check", "ram usage",...
Research and add new strategic frameworks to the system (meta-skill). Use when the user wants to add a framework not in the library; has discovered a new framework in their domain; or asks "Can you add...
Plan and run a high-signal team offsite/retreat and produce an Offsite Pack (offsite brief, agenda + run-of-show, prework, facilitation guide, logistics checklist, post-offsite decisions + action...
Reviews pull requests and code changes for quality, security, and best practices. Use when the user asks for a code review or PR review, or mentions reviewing changes.
Performs root cause analysis on DAG execution failures. Traces failure propagation, identifies systemic issues, and generates actionable remediation guidance. Activate on 'failure analysis', 'root...
Create and validate new AI agent skills. This skill provides standards, templates, and validation tools to ensure high-quality, interoperable skills across different AI agents.