# testgen

by 0xDarkMatter

# Install this skill:
npx skills add 0xDarkMatter/claude-mods --skill "testgen"

Installs a specific skill from a multi-skill repository.

# Description

Generate tests with expert routing, framework detection, and auto-TaskCreate. Triggers on: generate tests, write tests, testgen, create test file, add test coverage.

# SKILL.md


---
name: testgen
description: "Generate tests with expert routing, framework detection, and auto-TaskCreate. Triggers on: generate tests, write tests, testgen, create test file, add test coverage."
allowed-tools: "Read Write Edit Bash Glob Grep Task TaskCreate"
---


TestGen Skill - AI Test Generation

Generate comprehensive tests with automatic framework detection, expert agent routing, and project convention matching.

Architecture

testgen <target> [--type] [--focus] [--depth]
    │
    ├─→ Step 1: Analyze Target
    │     ├─ File exists? → Read and parse
    │     ├─ Function specified? → Extract signature
    │     ├─ Directory? → List source files
    │     └─ Find existing tests (avoid duplicates)
    │
    ├─→ Step 2: Detect Framework (parallel)
    │     ├─ package.json → jest/vitest/mocha/cypress/playwright
    │     ├─ pyproject.toml → pytest/unittest
    │     ├─ go.mod → go test
    │     ├─ Cargo.toml → cargo test
    │     ├─ composer.json → phpunit/pest
    │     └─ Check existing test patterns
    │
    ├─→ Step 3: Load Project Standards
    │     ├─ AGENTS.md, CLAUDE.md conventions
    │     ├─ Existing test file structure
    │     └─ Naming conventions (*.test.ts vs *.spec.ts)
    │
    ├─→ Step 4: Route to Expert Agent
    │     ├─ .ts → typescript-expert
    │     ├─ .tsx/.jsx → react-expert
    │     ├─ .vue → vue-expert
    │     ├─ .py → python-expert
    │     ├─ .go → go-expert
    │     ├─ .rs → rust-expert
    │     ├─ .php → laravel-expert
    │     ├─ E2E/Cypress → cypress-expert
    │     ├─ Playwright → typescript-expert
    │     ├─ --visual → Chrome DevTools MCP
    │     └─ Multi-file → parallel expert dispatch
    │
    ├─→ Step 5: Generate Tests
    │     ├─ Create test file in correct location
    │     ├─ Follow detected conventions
    │     └─ Include: happy path, edge cases, error handling
    │
    └─→ Step 6: Integration
          ├─ Auto-create task (TaskCreate) for verification
          └─ Suggest: run tests, /review, /save
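
For example, a typical invocation might look like the following (the flag values shown are illustrative, not an exhaustive list of accepted options):

# Thorough unit tests for one file, focused on edge cases
testgen src/auth.ts --type unit --focus edge --depth thorough

# Default test generation for every source file in a directory
testgen src/utils/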

Execution Steps

Step 1: Analyze Target

# Check if target exists
if [ -f "$TARGET" ]; then echo "FILE"; elif [ -d "$TARGET" ]; then echo "DIRECTORY"; fi

# For function-specific: extract signature
command -v ast-grep >/dev/null 2>&1 && ast-grep -p "function $FUNCTION_NAME" "$FILE"

# Fallback to ripgrep
rg "(?:function|const|def|public|private)\s+$FUNCTION_NAME" "$FILE" -A 10

Check for existing tests:

fd -e test.ts -e spec.ts -e test.js -e spec.js | rg "$BASENAME"
fd "test_*.py" | rg "$BASENAME"

Step 2: Detect Framework

JavaScript/TypeScript:

cat package.json 2>/dev/null | jq -r '.devDependencies // {} | keys[]' | grep -E 'jest|vitest|mocha|cypress|playwright|@testing-library'

Python:

grep -E "pytest|unittest|nose" pyproject.toml setup.py requirements*.txt 2>/dev/null

Go:

test -f go.mod && echo "go test available"

Rust:

test -f Cargo.toml && echo "cargo test available"

PHP:

cat composer.json 2>/dev/null | jq -r '.["require-dev"] // {} | keys[]' | grep -E 'phpunit|pest|codeception'
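
Taken together, these checks can run as a single pass over whichever manifests exist. The sketch below is illustrative; the detect_framework helper is our name, not part of the skill itself:

# Illustrative: probe each manifest that exists and print likely test frameworks
detect_framework() {
  [ -f package.json ]   && jq -r '.devDependencies // {} | keys[]' package.json 2>/dev/null \
                             | grep -E 'jest|vitest|mocha|cypress|playwright'
  [ -f pyproject.toml ] && grep -Eo 'pytest|unittest|nose' pyproject.toml | sort -u
  [ -f go.mod ]         && echo "go test"
  [ -f Cargo.toml ]     && echo "cargo test"
  [ -f composer.json ]  && jq -r '.["require-dev"] // {} | keys[]' composer.json 2>/dev/null \
                             | grep -E 'phpunit|pest|codeception'
  return 0  # do not report failure just because the last probe found nothing
}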

Step 3: Load Project Standards

# Claude Code conventions
cat AGENTS.md 2>/dev/null | head -50
cat CLAUDE.md 2>/dev/null | head -50

# Test config files
cat jest.config.* vitest.config.* pytest.ini pyproject.toml 2>/dev/null | head -30

Test location conventions:

# JavaScript
src/utils/helper.ts → src/utils/__tests__/helper.test.ts  # __tests__ folder
                    → src/utils/helper.test.ts            # co-located
                    → tests/utils/helper.test.ts          # separate tests/

# Python
app/utils/helper.py → tests/test_helper.py                # tests/ folder
                    → tests/utils/test_helper.py          # mirror structure

# Go
pkg/auth/token.go → pkg/auth/token_test.go                # co-located (required)

# Rust
src/auth.rs → src/auth.rs (mod tests { ... })             # inline tests
            → tests/auth_test.rs                          # integration tests
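
A minimal sketch of turning a detected convention into a concrete test path (the CONVENTION variable and the mapping below are illustrative, not the skill's exact logic):

# Illustrative: derive the test file location from the source path and detected convention
SRC="src/utils/helper.ts"
DIR=$(dirname "$SRC"); BASE=$(basename "$SRC" .ts)
case "$CONVENTION" in
  __tests__)  TEST_PATH="$DIR/__tests__/$BASE.test.ts" ;;
  co-located) TEST_PATH="$DIR/$BASE.test.ts" ;;
  separate)   TEST_PATH="tests/${SRC#src/}"; TEST_PATH="${TEST_PATH%.ts}.test.ts" ;;
esac
echo "$TEST_PATH"   # e.g. src/utils/__tests__/helper.test.ts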

Step 4: Route to Expert Agent

| File Pattern | Primary Expert | Secondary |
|---|---|---|
| `*.ts` | typescript-expert | - |
| `*.tsx`, `*.jsx` | react-expert | typescript-expert |
| `*.vue` | vue-expert | typescript-expert |
| `*.py` | python-expert | - |
| `*.go` | go-expert | - |
| `*.rs` | rust-expert | - |
| `*.php` | laravel-expert | - |
| `*.cy.ts`, `cypress/*` | cypress-expert | - |
| `*.spec.ts` (Playwright) | typescript-expert | - |
| `playwright/*`, `e2e/*` | typescript-expert | - |
| `*.sh`, `*.bash` | bash-expert | - |
| `--visual` flag | Chrome DevTools MCP | typescript-expert |
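
The routing decision can be expressed as a simple pattern match. The sketch below mirrors the table above; the route_expert helper name is ours, not the skill's:

# Illustrative: map a target path to an expert subagent name
route_expert() {
  case "$1" in
    *.cy.ts|cypress/*) echo "cypress-expert" ;;
    *.tsx|*.jsx)       echo "react-expert" ;;
    *.vue)             echo "vue-expert" ;;
    *.ts)              echo "typescript-expert" ;;
    *.py)              echo "python-expert" ;;
    *.go)              echo "go-expert" ;;
    *.rs)              echo "rust-expert" ;;
    *.php)             echo "laravel-expert" ;;
    *.sh|*.bash)       echo "bash-expert" ;;
    *)                 echo "typescript-expert" ;;  # playwright/e2e paths and general fallback
  esac
}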

Invoke via Task tool:

Task tool with subagent_type: "[detected]-expert"
Prompt includes:
  - Source file content
  - Function signatures to test
  - Detected framework and conventions
  - Requested test type and focus
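
As a rough sketch, the dispatch prompt can be assembled from what Steps 1-3 gathered (all variable names below are illustrative):

# Illustrative: build the prompt handed to the expert subagent via the Task tool
EXPERT=$(route_expert "$TARGET")        # e.g. typescript-expert (see sketch above)
PROMPT="Generate ${DEPTH:-normal} tests (focus: ${FOCUS:-all}) for $TARGET.
Framework: $FRAMEWORK. Follow existing conventions in $TEST_DIR.
Source under test:
$(cat "$TARGET")"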

Step 5: Generate Tests

Test categories based on --focus:

| Focus | What to Generate |
|---|---|
| happy | Normal input, expected output |
| edge | Boundary values, empty inputs, nulls |
| error | Invalid inputs, exceptions, error handling |
| all | All of the above (default) |

Depth levels:

| Depth | Coverage |
|---|---|
| quick | Happy path only, 1-2 tests per function |
| normal | Happy + common edge cases (default) |
| thorough | Comprehensive: all paths, mocking, async |
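
A sketch of how the two flags combine into the request sent to the expert (variable names are illustrative; per the depth table, quick narrows coverage to the happy path):

# Illustrative: expand --focus/--depth into requested test categories
FOCUS="${FOCUS:-all}"; DEPTH="${DEPTH:-normal}"
case "$FOCUS" in
  happy) CATEGORIES="happy-path" ;;
  edge)  CATEGORIES="edge-cases" ;;
  error) CATEGORIES="error-handling" ;;
  all)   CATEGORIES="happy-path edge-cases error-handling" ;;
esac
[ "$DEPTH" = "quick" ] && CATEGORIES="happy-path"
echo "Requesting: $CATEGORIES (depth: $DEPTH)"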

Step 6: Integration

Auto-create task:

TaskCreate:
  subject: "Run generated tests for src/auth.ts"
  description: "Verify generated tests pass and review edge cases"
  activeForm: "Running generated tests for auth.ts"

Suggest next steps:

Tests generated: src/auth.test.ts

Next steps:
1. Run tests: npm test src/auth.test.ts
2. Review and refine edge cases
3. Use /save to persist tasks across sessions

Expert Routing Details

TypeScript/JavaScript → typescript-expert

  - Proper type imports
  - Generic type handling
  - Async/await patterns
  - Mock typing

React/JSX → react-expert

  - React Testing Library patterns
  - Component rendering tests
  - Hook testing (renderHook)
  - Accessibility queries (getByRole)

Vue → vue-expert

  - Vue Test Utils patterns
  - Composition API testing
  - Pinia store mocking

Python → python-expert

  - pytest fixtures
  - Parametrized tests
  - Mock/patch patterns
  - Async test handling

Go → go-expert

  - Table-driven tests ([]struct pattern)
  - testing.T and subtests (t.Run)
  - Testify assertions (when detected)
  - Benchmark functions (testing.B)
  - Parallel tests (t.Parallel())

Rust → rust-expert

  - #[test] attribute functions
  - #[cfg(test)] module organization
  - #[should_panic] for error testing
  - proptest/quickcheck for property testing

PHP/Laravel → laravel-expert

  - PHPUnit/Pest patterns
  - Database transactions
  - Factory usage

E2E → cypress-expert

  - Page object patterns
  - Custom commands
  - Network stubbing

Playwright → typescript-expert

  - Page object model patterns
  - Locator strategies
  - Visual regression testing

CLI Tool Integration

| Tool | Purpose | Fallback |
|---|---|---|
| jq | Parse package.json | Read tool |
| rg | Find existing tests | Grep tool |
| ast-grep | Parse function signatures | ripgrep patterns |
| fd | Find test files | Glob tool |
| Chrome DevTools MCP | Visual testing (`--visual`) | Playwright/Cypress |

Graceful degradation:

command -v jq >/dev/null 2>&1 && jq '.devDependencies' package.json || cat package.json
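
The same pattern extends to the other tools; for example, ast-grep can degrade to the ripgrep pattern already shown in Step 1:

# Prefer ast-grep for structural matching; otherwise fall back to a ripgrep pattern
command -v ast-grep >/dev/null 2>&1 \
  && ast-grep -p "function $FUNCTION_NAME" "$FILE" \
  || rg "(?:function|const|def|public|private)\s+$FUNCTION_NAME" "$FILE" -A 10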

Reference Files

For framework-specific code examples, see:
- frameworks.md - Complete test examples for all supported languages
- visual-testing.md - Chrome DevTools integration for --visual flag


Integration

| Command | Relationship |
|---|---|
| /review | Review generated tests before committing |
| /explain | Understand complex code before testing |
| /save | Track test coverage goals |

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents that support it.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.