christianearle01

test-generator

# Install this skill

```bash
npx skills add christianearle01/claude-config-template --skill "test-generator"
```

Installs a specific skill from a multi-skill repository.

# Description

Generates tests from features.json test criteria, with multi-framework support, coverage analysis, and automatic features.json status updates.

# SKILL.md


---
name: test-generator
description: Generates tests from features.json test criteria, with multi-framework support, coverage analysis, and automatic features.json status updates.
allowed-tools: Read, Grep, Bash, Write
---


# Test Generator Skill

**Purpose:** Double trust in AI-generated code by automatically creating tests from features.json test criteria, ensuring comprehensive coverage and preventing the "tests pass but feature incomplete" problem.

**When to use:**
- User asks to "generate tests", "write tests", "add test coverage"
- Coder Agent completes a feature (verify all criteria tested)
- After creating features.json (generate test scaffolding)
- Coverage analysis is needed

**Key Insight:** "Developers who use AI to test their code double their trust in AI-generated code."

**Confidence-based responses:** Each test generation includes a quality score and a coverage estimate.


## The Testing Problem in AI Development

### Current State

**AI code generation without testing:**
- 67% of developers have quality concerns about AI code
- "Vibe coding" skips rigorous testing
- Tests written after code often miss edge cases
- Coverage gaps lead to production bugs

**The Trust Gap:**

```
Without Testing:
├─ Speed: 3x faster generation
├─ Trust: Low (only 33% confident)
└─ Result: Code sits undeployed or causes issues

With Testing:
├─ Speed: 2.5x faster (slight overhead)
├─ Trust: High (66% confident - doubled)
└─ Result: Code deployed confidently
```

**This skill solves:** automatic test generation from structured requirements (features.json).


## Operation 1: Generate Tests from features.json

**User Queries:**
- "Generate tests for feat-001"
- "Create tests from features.json"
- "Write tests for the login feature"
- "Add test coverage for authentication"

### Analysis Steps

1. **Read features.json:**
   - Extract feature by ID or name
   - Get testCriteria array
   - Identify tech stack for framework selection

2. **Detect testing framework:**
   - Check package.json for test dependencies
   - Identify framework (Jest, Vitest, PyTest, Go, etc.)
   - Select appropriate test patterns

3. **Generate test file:**
   - Create a describe block for the feature
   - Generate a test case for each criterion (see the scaffolding sketch below)
   - Add appropriate assertions
   - Include setup/teardown as needed

4. **Update features.json:**
   - Add test file path to feature
   - Update adoption status

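A minimal sketch of step 3's scaffolding, assuming Jest-style output and the feature shape used in the examples throughout this document; the real generator fills in arrange/act/assert bodies following the patterns below.

```javascript
// Sketch: scaffold one test stub per features.json criterion (Jest style).
// `feature` follows the shape of the features.json examples in this document.
function scaffoldTests(feature) {
  const cases = feature.testCriteria
    .map(criterion =>
      `  test(${JSON.stringify(criterion)}, async () => {\n` +
      `    // TODO: arrange / act / assert\n` +
      `  });`)
    .join('\n\n');
  return `describe('${feature.id}: ${feature.name}', () => {\n${cases}\n});\n`;
}
```
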
### Framework Detection

**Auto-detection by package.json:**

| Dependency | Framework | Pattern |
|------------|-----------|---------|
| jest | Jest | describe/it/expect |
| vitest | Vitest | describe/it/expect |
| @testing-library/react | RTL + Jest/Vitest | render/screen/fireEvent |
| pytest | PyTest | test_*/assert |
| testing | Go testing | TestXxx/t.Run |
| rspec | RSpec | describe/it/expect |
| mocha | Mocha + Chai | describe/it/expect |

**Framework hierarchy** (see the sketch below):
1. Explicit in features.json projectInfo.techStack
2. Detected from dependencies
3. Default: Jest (JavaScript), PyTest (Python)
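
A minimal sketch of this hierarchy for JavaScript projects, assuming Node 18+; treating `projectInfo.techStack` as an array of strings is an assumption, since the features.json examples here only show `id`, `name`, and `testCriteria`.

```javascript
// Sketch: framework detection following the hierarchy above.
const fs = require('fs');

const KNOWN = ['jest', 'vitest', 'mocha', 'pytest', 'rspec'];

function detectFramework(featuresPath = 'features.json', pkgPath = 'package.json') {
  // 1. An explicit declaration in features.json wins.
  const features = JSON.parse(fs.readFileSync(featuresPath, 'utf8'));
  const stack = (features.projectInfo?.techStack ?? []).map(s => s.toLowerCase());
  const declared = KNOWN.find(fw => stack.includes(fw));
  if (declared) return declared;

  // 2. Otherwise, look at declared test dependencies.
  const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const detected = ['jest', 'vitest', 'mocha'].find(fw => deps[fw]);
  if (detected) return detected;

  // 3. Default for JavaScript projects (PyTest would be the Python default).
  return 'jest';
}
```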

### Test Generation Patterns

#### Pattern 1: React Component Tests (Jest + RTL)

**Input (features.json):**

```json
{
  "id": "feat-001",
  "name": "User Login",
  "testCriteria": [
    "User can log in with valid credentials",
    "Invalid credentials show error message",
    "Session persists across page refresh",
    "User can log out successfully"
  ]
}
```

**Output (tests/auth/login.test.js):**

```javascript
/**
 * Tests for feat-001: User Login
 * Generated by Test Generator Skill v3.7.0
 *
 * Test Criteria Coverage:
 * - [x] User can log in with valid credentials
 * - [x] Invalid credentials show error message
 * - [x] Session persists across page refresh
 * - [x] User can log out successfully
 */

import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { LoginForm } from '../../src/components/auth/LoginForm';
import { AuthProvider } from '../../src/context/AuthContext';
import { mockAuthService } from '../mocks/authService';

// Mock the auth service
jest.mock('../../src/services/authService', () => mockAuthService);

describe('feat-001: User Login', () => {
  beforeEach(() => {
    // Clear any stored session
    localStorage.clear();
    sessionStorage.clear();
    jest.clearAllMocks();
  });

  // Test Criterion 1: User can log in with valid credentials
  test('User can log in with valid credentials', async () => {
    // Arrange
    mockAuthService.login.mockResolvedValue({
      user: { id: 1, email: 'test@example.com' },
      token: 'valid-token'
    });

    render(
      <AuthProvider>
        <LoginForm />
      </AuthProvider>
    );

    // Act
    fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: 'test@example.com' }
    });
    fireEvent.change(screen.getByLabelText(/password/i), {
      target: { value: 'validPassword123' }
    });
    fireEvent.click(screen.getByRole('button', { name: /log in/i }));

    // Assert
    await waitFor(() => {
      expect(mockAuthService.login).toHaveBeenCalledWith(
        'test@example.com',
        'validPassword123'
      );
    });
    await waitFor(() => {
      expect(screen.getByText(/welcome/i)).toBeInTheDocument();
    });
  });

  // Test Criterion 2: Invalid credentials show error message
  test('Invalid credentials show error message', async () => {
    // Arrange
    mockAuthService.login.mockRejectedValue(
      new Error('Invalid email or password')
    );

    render(
      <AuthProvider>
        <LoginForm />
      </AuthProvider>
    );

    // Act
    fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: 'wrong@example.com' }
    });
    fireEvent.change(screen.getByLabelText(/password/i), {
      target: { value: 'wrongPassword' }
    });
    fireEvent.click(screen.getByRole('button', { name: /log in/i }));

    // Assert
    await waitFor(() => {
      expect(screen.getByText(/invalid email or password/i)).toBeInTheDocument();
    });
    expect(screen.queryByText(/welcome/i)).not.toBeInTheDocument();
  });

  // Test Criterion 3: Session persists across page refresh
  test('Session persists across page refresh', async () => {
    // Arrange - Set up existing session
    localStorage.setItem('authToken', 'existing-token');
    localStorage.setItem('user', JSON.stringify({ id: 1, email: 'test@example.com' }));

    mockAuthService.validateToken.mockResolvedValue({
      valid: true,
      user: { id: 1, email: 'test@example.com' }
    });

    // Act - Render (simulating page refresh)
    render(
      <AuthProvider>
        <LoginForm />
      </AuthProvider>
    );

    // Assert - User should still be logged in
    await waitFor(() => {
      expect(screen.getByText(/welcome/i)).toBeInTheDocument();
    });
    expect(mockAuthService.validateToken).toHaveBeenCalledWith('existing-token');
  });

  // Test Criterion 4: User can log out successfully
  test('User can log out successfully', async () => {
    // Arrange - Start logged in
    localStorage.setItem('authToken', 'existing-token');
    mockAuthService.logout.mockResolvedValue({ success: true });

    render(
      <AuthProvider>
        <LoginForm />
      </AuthProvider>
    );

    // Wait for logged-in state
    await waitFor(() => {
      expect(screen.getByText(/welcome/i)).toBeInTheDocument();
    });

    // Act
    fireEvent.click(screen.getByRole('button', { name: /log out/i }));

    // Assert
    await waitFor(() => {
      expect(mockAuthService.logout).toHaveBeenCalled();
    });
    expect(localStorage.getItem('authToken')).toBeNull();
    await waitFor(() => {
      expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
    });
  });
});
```

#### Pattern 2: API Endpoint Tests (Jest + Supertest)

**Input (features.json):**

```json
{
  "id": "feat-002",
  "name": "User Registration",
  "testCriteria": [
    "User can register with email/password",
    "Password confirmation must match",
    "Email validation prevents invalid formats",
    "Successful registration shows confirmation message"
  ]
}
```

**Output (tests/api/registration.test.js):**

```javascript
/**
 * Tests for feat-002: User Registration
 * Generated by Test Generator Skill v3.7.0
 */

const request = require('supertest');
const app = require('../../src/app');
const { db } = require('../../src/db');

describe('feat-002: User Registration API', () => {
  beforeAll(async () => {
    await db.migrate.latest();
  });

  beforeEach(async () => {
    await db('users').truncate();
  });

  afterAll(async () => {
    await db.destroy();
  });

  // Test Criterion 1: User can register with email/password
  test('POST /api/register - User can register with email/password', async () => {
    const response = await request(app)
      .post('/api/register')
      .send({
        email: 'newuser@example.com',
        password: 'SecurePass123!',
        passwordConfirm: 'SecurePass123!'
      });

    expect(response.status).toBe(201);
    expect(response.body).toMatchObject({
      success: true,
      user: {
        email: 'newuser@example.com'
      }
    });
    expect(response.body.user.password).toBeUndefined(); // Password not returned

    // Verify in database
    const user = await db('users').where({ email: 'newuser@example.com' }).first();
    expect(user).toBeDefined();
    expect(user.password_hash).not.toBe('SecurePass123!'); // Hashed
  });

  // Test Criterion 2: Password confirmation must match
  test('POST /api/register - Password confirmation must match', async () => {
    const response = await request(app)
      .post('/api/register')
      .send({
        email: 'newuser@example.com',
        password: 'SecurePass123!',
        passwordConfirm: 'DifferentPass456!'
      });

    expect(response.status).toBe(400);
    expect(response.body).toMatchObject({
      success: false,
      error: expect.stringMatching(/password.*match/i)
    });

    // Verify user not created
    const user = await db('users').where({ email: 'newuser@example.com' }).first();
    expect(user).toBeUndefined();
  });

  // Test Criterion 3: Email validation prevents invalid formats
  test.each([
    ['invalid-email', 'missing @ symbol'],
    ['@nodomain.com', 'missing local part'],
    ['spaces in@example.com', 'contains spaces'],
    ['', 'empty string'],
  ])('POST /api/register - rejects invalid email: %s (%s)', async (invalidEmail) => {
    const response = await request(app)
      .post('/api/register')
      .send({
        email: invalidEmail,
        password: 'SecurePass123!',
        passwordConfirm: 'SecurePass123!'
      });

    expect(response.status).toBe(400);
    expect(response.body.error).toMatch(/email/i);
  });

  // Test Criterion 4: Successful registration shows confirmation message
  test('POST /api/register - Successful registration returns confirmation', async () => {
    const response = await request(app)
      .post('/api/register')
      .send({
        email: 'newuser@example.com',
        password: 'SecurePass123!',
        passwordConfirm: 'SecurePass123!'
      });

    expect(response.status).toBe(201);
    expect(response.body.message).toMatch(/registration successful/i);
    expect(response.body.message).toContain('newuser@example.com');
  });
});
```

#### Pattern 3: Python Tests (PyTest)

**Input (features.json):**

```json
{
  "id": "feat-007",
  "name": "Shopping Cart",
  "testCriteria": [
    "User can add items to cart",
    "User can remove items from cart",
    "Cart total updates correctly",
    "Cart persists across sessions"
  ]
}
```

**Output (tests/test_shopping_cart.py):**

```python
"""
Tests for feat-007: Shopping Cart
Generated by Test Generator Skill v3.7.0

Test Criteria Coverage:
- [x] User can add items to cart
- [x] User can remove items from cart
- [x] Cart total updates correctly
- [x] Cart persists across sessions
"""

import pytest
from decimal import Decimal
from app.models import Cart, CartItem, Product, User
from app.services.cart_service import CartService


@pytest.fixture
def user(db_session):
    """Create a test user."""
    user = User(email="test@example.com")
    db_session.add(user)
    db_session.commit()
    return user


@pytest.fixture
def products(db_session):
    """Create test products."""
    products = [
        Product(id=1, name="Widget", price=Decimal("9.99")),
        Product(id=2, name="Gadget", price=Decimal("19.99")),
        Product(id=3, name="Doohickey", price=Decimal("29.99")),
    ]
    db_session.add_all(products)
    db_session.commit()
    return products


@pytest.fixture
def cart_service(db_session):
    """Create cart service instance."""
    return CartService(db_session)


class TestShoppingCart:
    """feat-007: Shopping Cart tests."""

    # Test Criterion 1: User can add items to cart
    def test_user_can_add_items_to_cart(self, user, products, cart_service):
        """User can add items to cart."""
        # Arrange
        product = products[0]  # Widget @ $9.99

        # Act
        cart = cart_service.add_item(user.id, product.id, quantity=2)

        # Assert
        assert len(cart.items) == 1
        assert cart.items[0].product_id == product.id
        assert cart.items[0].quantity == 2

    def test_adding_same_item_increases_quantity(self, user, products, cart_service):
        """Adding same item increases quantity instead of duplicating."""
        # Arrange
        product = products[0]
        cart_service.add_item(user.id, product.id, quantity=1)

        # Act
        cart = cart_service.add_item(user.id, product.id, quantity=2)

        # Assert
        assert len(cart.items) == 1
        assert cart.items[0].quantity == 3  # 1 + 2

    # Test Criterion 2: User can remove items from cart
    def test_user_can_remove_items_from_cart(self, user, products, cart_service):
        """User can remove items from cart."""
        # Arrange
        cart_service.add_item(user.id, products[0].id, quantity=2)
        cart_service.add_item(user.id, products[1].id, quantity=1)

        # Act
        cart = cart_service.remove_item(user.id, products[0].id)

        # Assert
        assert len(cart.items) == 1
        assert cart.items[0].product_id == products[1].id

    def test_remove_nonexistent_item_raises_error(self, user, products, cart_service):
        """Removing item not in cart raises error."""
        # Arrange - Empty cart

        # Act & Assert
        with pytest.raises(ValueError, match="Item not in cart"):
            cart_service.remove_item(user.id, products[0].id)

    # Test Criterion 3: Cart total updates correctly
    def test_cart_total_updates_correctly(self, user, products, cart_service):
        """Cart total updates correctly when items added/removed."""
        # Arrange & Act
        cart_service.add_item(user.id, products[0].id, quantity=2)  # 2 * $9.99
        cart = cart_service.add_item(user.id, products[1].id, quantity=1)  # 1 * $19.99

        # Assert
        expected_total = Decimal("9.99") * 2 + Decimal("19.99")  # $39.97
        assert cart.total == expected_total

    def test_cart_total_after_removal(self, user, products, cart_service):
        """Cart total updates after item removal."""
        # Arrange
        cart_service.add_item(user.id, products[0].id, quantity=2)  # $19.98
        cart_service.add_item(user.id, products[1].id, quantity=1)  # $19.99

        # Act
        cart = cart_service.remove_item(user.id, products[0].id)

        # Assert
        assert cart.total == Decimal("19.99")

    def test_empty_cart_has_zero_total(self, user, cart_service):
        """Empty cart has zero total."""
        # Act
        cart = cart_service.get_cart(user.id)

        # Assert
        assert cart.total == Decimal("0.00")

    # Test Criterion 4: Cart persists across sessions
    def test_cart_persists_across_sessions(self, user, products, cart_service, db_session):
        """Cart persists across sessions."""
        # Arrange - Add items and "end session"
        cart_service.add_item(user.id, products[0].id, quantity=2)
        cart_service.add_item(user.id, products[1].id, quantity=1)
        db_session.commit()

        # Act - Create new service instance (simulating new session)
        new_cart_service = CartService(db_session)
        cart = new_cart_service.get_cart(user.id)

        # Assert
        assert len(cart.items) == 2
        assert cart.total == Decimal("39.97")

    def test_cart_linked_to_user(self, user, products, cart_service, db_session):
        """Cart is linked to specific user."""
        # Arrange
        other_user = User(email="other@example.com")
        db_session.add(other_user)
        db_session.commit()

        cart_service.add_item(user.id, products[0].id, quantity=2)

        # Act
        other_cart = cart_service.get_cart(other_user.id)

        # Assert
        assert len(other_cart.items) == 0  # Other user's cart is empty
```

#### Pattern 4: Go Tests

**Input (features.json):**

```json
{
  "id": "feat-010",
  "name": "Order Processing",
  "testCriteria": [
    "Order can be created from cart",
    "Order calculates tax correctly",
    "Order status transitions are valid",
    "Order notifications are sent"
  ]
}
```

**Output (internal/orders/order_test.go):**

```go
// Tests for feat-010: Order Processing
// Generated by Test Generator Skill v3.7.0

package orders

import (
    "context"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"

    "myapp/internal/cart"
    "myapp/internal/notifications"
)

// Mock notification service
type MockNotifier struct {
    mock.Mock
}

func (m *MockNotifier) SendOrderConfirmation(ctx context.Context, order *Order) error {
    args := m.Called(ctx, order)
    return args.Error(0)
}

func TestOrderProcessing(t *testing.T) {
    // Test Criterion 1: Order can be created from cart
    t.Run("Order can be created from cart", func(t *testing.T) {
        // Arrange
        testCart := &cart.Cart{
            UserID: 1,
            Items: []cart.Item{
                {ProductID: 1, Name: "Widget", Price: 999, Quantity: 2},
                {ProductID: 2, Name: "Gadget", Price: 1999, Quantity: 1},
            },
        }
        service := NewOrderService(nil, nil)

        // Act
        order, err := service.CreateFromCart(context.Background(), testCart)

        // Assert
        require.NoError(t, err)
        assert.Equal(t, int64(1), order.UserID)
        assert.Len(t, order.Items, 2)
        assert.Equal(t, StatusPending, order.Status)
        assert.Equal(t, int64(3997), order.Subtotal) // 2*999 + 1999
    })

    t.Run("Empty cart returns error", func(t *testing.T) {
        // Arrange
        emptyCart := &cart.Cart{UserID: 1, Items: []cart.Item{}}
        service := NewOrderService(nil, nil)

        // Act
        order, err := service.CreateFromCart(context.Background(), emptyCart)

        // Assert
        assert.Nil(t, order)
        assert.ErrorIs(t, err, ErrEmptyCart)
    })

    // Test Criterion 2: Order calculates tax correctly
    t.Run("Order calculates tax correctly", func(t *testing.T) {
        testCases := []struct {
            name     string
            subtotal int64
            taxRate  float64
            expected int64
        }{
            {"Standard rate", 10000, 0.08, 800},      // $100 * 8% = $8
            {"Zero tax", 10000, 0.00, 0},             // $100 * 0% = $0
            {"High rate", 10000, 0.10, 1000},         // $100 * 10% = $10
            {"Rounds correctly", 999, 0.08, 80},      // $9.99 * 8% = $0.80 (rounded)
        }

        for _, tc := range testCases {
            t.Run(tc.name, func(t *testing.T) {
                // Arrange
                order := &Order{Subtotal: tc.subtotal}
                service := NewOrderService(nil, nil)

                // Act
                tax := service.CalculateTax(order, tc.taxRate)

                // Assert
                assert.Equal(t, tc.expected, tax)
            })
        }
    })

    // Test Criterion 3: Order status transitions are valid
    t.Run("Order status transitions are valid", func(t *testing.T) {
        validTransitions := []struct {
            from     Status
            to       Status
            expected bool
        }{
            {StatusPending, StatusConfirmed, true},
            {StatusConfirmed, StatusShipped, true},
            {StatusShipped, StatusDelivered, true},
            {StatusPending, StatusCancelled, true},
            {StatusConfirmed, StatusCancelled, true},
            // Invalid transitions
            {StatusDelivered, StatusPending, false},
            {StatusCancelled, StatusConfirmed, false},
            {StatusShipped, StatusPending, false},
        }

        for _, tc := range validTransitions {
            t.Run(string(tc.from)+"->"+string(tc.to), func(t *testing.T) {
                // Arrange
                order := &Order{Status: tc.from}
                service := NewOrderService(nil, nil)

                // Act
                err := service.TransitionStatus(context.Background(), order, tc.to)

                // Assert
                if tc.expected {
                    assert.NoError(t, err)
                    assert.Equal(t, tc.to, order.Status)
                } else {
                    assert.ErrorIs(t, err, ErrInvalidTransition)
                    assert.Equal(t, tc.from, order.Status) // Unchanged
                }
            })
        }
    })

    // Test Criterion 4: Order notifications are sent
    t.Run("Order notifications are sent", func(t *testing.T) {
        // Arrange
        mockNotifier := new(MockNotifier)
        mockNotifier.On("SendOrderConfirmation", mock.Anything, mock.Anything).Return(nil)

        service := NewOrderService(nil, mockNotifier)
        order := &Order{ID: 123, UserID: 1, Status: StatusConfirmed}

        // Act
        err := service.SendConfirmation(context.Background(), order)

        // Assert
        assert.NoError(t, err)
        mockNotifier.AssertCalled(t, "SendOrderConfirmation", mock.Anything, order)
    })

    t.Run("Notification failure is handled", func(t *testing.T) {
        // Arrange
        mockNotifier := new(MockNotifier)
        mockNotifier.On("SendOrderConfirmation", mock.Anything, mock.Anything).
            Return(notifications.ErrDeliveryFailed)

        service := NewOrderService(nil, mockNotifier)
        order := &Order{ID: 123, Status: StatusConfirmed}

        // Act
        err := service.SendConfirmation(context.Background(), order)

        // Assert
        assert.ErrorIs(t, err, notifications.ErrDeliveryFailed)
    })
}
```

## Operation 2: Coverage Analysis

**User Queries:**
- "What's my test coverage?"
- "Which features are missing tests?"
- "Analyze test coverage for features.json"
- "Are all test criteria covered?"

### Analysis Steps

1. **Read features.json:**
   - List all features and their testCriteria
   - Note expected test count per feature

2. **Scan test files:**
   - Find test files (.test.js, _test.py, *_test.go)
   - Extract test names/descriptions
   - Match to features.json criteria

3. **Calculate coverage** (see the sketch below):
   - Criteria with matching tests
   - Criteria without tests
   - Overall coverage percentage

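A minimal sketch of step 3, assuming test names were already extracted in step 2; matching a criterion by normalized substring is an assumption (the real skill may match more fuzzily).

```javascript
// Sketch: criterion-level coverage from features.json and extracted test names.
function coverageReport(features, testNames) {
  const norm = s => s.toLowerCase().replace(/[^a-z0-9 ]/g, '');
  const tests = testNames.map(norm);
  let tested = 0;
  let total = 0;
  const missing = [];
  for (const feature of features) {
    for (const criterion of feature.testCriteria) {
      total += 1;
      if (tests.some(t => t.includes(norm(criterion)))) tested += 1;
      else missing.push({ feature: feature.id, criterion });
    }
  }
  return { percent: Math.round((100 * tested) / total), tested, total, missing };
}
```
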
### Response Template

```markdown
## Test Coverage Analysis

**Project:** my-ecommerce-app
**Analysis Date:** 2025-12-15
**Overall Coverage:** 79% (19/24 criteria tested)

---

### Coverage by Feature

| Feature | Criteria | Tested | Coverage | Status |
|---------|----------|--------|----------|--------|
| feat-001: User Login | 4 | 4 | 100% | Complete |
| feat-002: User Registration | 4 | 4 | 100% | Complete |
| feat-003: Password Reset | 3 | 2 | 67% | Needs work |
| feat-007: Shopping Cart | 4 | 4 | 100% | Complete |
| feat-008: Checkout | 5 | 3 | 60% | Needs work |
| feat-010: Order Processing | 4 | 2 | 50% | Needs work |
| **Total** | **24** | **19** | **79%** | - |

---

### Missing Test Coverage

#### feat-003: Password Reset (67% coverage)

**Missing tests:**
1. **"Password reset email is sent within 5 minutes"**
   - No test found matching this criterion
   - Suggested test file: `tests/auth/password-reset.test.js`

**Existing tests:**
- [x] User can request password reset
- [x] Reset link expires after 24 hours
- [ ] Password reset email is sent within 5 minutes

#### feat-008: Checkout (60% coverage)

**Missing tests:**
1. **"Shipping options are displayed correctly"**
2. **"Tax is calculated based on shipping address"**

**Existing tests:**
- [x] User can enter payment information
- [x] Order confirmation is shown
- [x] Cart is cleared after checkout
- [ ] Shipping options are displayed correctly
- [ ] Tax is calculated based on shipping address

---

### Recommendations

**Priority 1 (Critical path):**
1. Add tests for feat-008 (Checkout) - Core functionality
2. Add tests for feat-003 (Password Reset) - Security feature

**Priority 2 (Quick wins):**
- feat-010 needs 2 tests to reach 100%

**Estimated time to 100% coverage:**
- 5 tests needed × ~15 min/test = ~75 minutes

---

### Coverage Trend

Week 1: ████████░░ 80%
Week 2: ███████░░░ 72% (new features added)
Week 3: ████████░░ 79% (tests caught up)
Target: ██████████ 100%

**Note:** Coverage dropped in Week 2 when new features were added faster than tests.
Recommendation: Write tests alongside features (TDD) to maintain coverage.
```

## Operation 3: Test Validation

**User Queries:**
- "Are my tests good quality?"
- "Check test quality"
- "Validate tests for feat-001"
- "Are there any brittle tests?"

### Quality Checks

1. **Assertion coverage:**
   - Each test has meaningful assertions
   - Not just "it doesn't throw"

2. **Edge cases:**
   - Empty inputs tested
   - Error conditions tested
   - Boundary values tested

3. **Independence:**
   - Tests don't depend on execution order
   - Proper setup/teardown
   - No shared mutable state

4. **Brittleness indicators** (see the sketch below):
   - Tests tied to implementation details
   - Timing-dependent assertions
   - Environment-specific tests

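A minimal sketch of these checks as regex heuristics over a test file's source; the 10-point deduction per issue is an illustrative assumption, not the skill's actual rubric.

```javascript
// Sketch: heuristic assertion and brittleness checks for a Jest-style file.
function quickQualityChecks(source) {
  const issues = [];
  // Each chunk after a `test(` or `it(` header approximates one test body.
  const bodies = source.split(/\b(?:test|it)\(/).slice(1);
  for (const body of bodies) {
    if (!/\bexpect\(/.test(body)) issues.push('test without assertions');
    if (/setTimeout\(/.test(body)) issues.push('timing-dependent wait (prefer waitFor)');
  }
  if (/\.only\(/.test(source)) issues.push('focused test (.only) left in suite');
  return { score: Math.max(0, 100 - 10 * issues.length), issues };
}
```
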
### Response Template

````markdown
## Test Quality Analysis

**Feature:** feat-001: User Login
**Test File:** tests/auth/login.test.js
**Quality Score:** 85/100 (Good)

---

### Quality Breakdown

| Category | Score | Issues |
|----------|-------|--------|
| Assertions | 95/100 | All tests have meaningful assertions |
| Edge Cases | 80/100 | Missing: empty password test |
| Independence | 90/100 | Good isolation |
| Brittleness | 75/100 | 1 timing-dependent test |

---

### Issues Found

#### 1. Missing Edge Case
**Severity:** Medium
**Test:** "Invalid credentials show error message"

**Current:** Only tests wrong password
**Missing:** Empty password, empty email, null values

**Suggested additions:**
```javascript
test.each([
  ['', 'password', 'Email is required'],
  ['user@example.com', '', 'Password is required'],
  [null, 'password', 'Email is required'],
])('shows error for invalid input: %s', async (email, password, expectedError) => {
  // ...
});
```

#### 2. Brittle Test

**Severity:** Low
**Test:** "Session persists across page refresh"

**Issue:** Uses setTimeout with fixed delay

```javascript
// Brittle
await new Promise(r => setTimeout(r, 100));
expect(screen.getByText(/welcome/i)).toBeInTheDocument();
```

**Improvement:** Use waitFor instead

```javascript
// Robust
await waitFor(() => {
  expect(screen.getByText(/welcome/i)).toBeInTheDocument();
});
```

### Recommendations

1. Add edge case tests for empty/null inputs
2. Replace setTimeout with proper async waits
3. Consider adding an integration test for the full login flow

**Estimated improvement time:** 30 minutes
````

---

## Operation 4: Update features.json After Testing

**User Queries:**
- "Update features.json with test results"
- "Mark feat-001 tests as complete"
- "Sync test status to features.json"

### Update Process

1. **Verify tests exist and pass:**
   - Run test suite
   - Count passing tests per feature

2. **Update features.json** (see the sketch below):
   - Add test file paths
   - Update adoption.percentageComplete
   - Update adoption.notes

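A minimal sketch of step 2, assuming features.json holds a top-level `features` array (the examples here show single feature objects) and that the passing-test count comes from the runner's output; field names follow the example below.

```javascript
// Sketch: write test results back to features.json.
const fs = require('fs');

function markTested(featureId, testFile, passingCount) {
  const doc = JSON.parse(fs.readFileSync('features.json', 'utf8'));
  const feature = doc.features.find(f => f.id === featureId);
  if (!feature) throw new Error(`Unknown feature: ${featureId}`);
  const total = feature.testCriteria.length;
  feature.adoption = {
    ...feature.adoption,
    percentageComplete: Math.round((100 * passingCount) / total),
    lastUpdated: new Date().toISOString(),
    notes: `${passingCount}/${total} test criteria passing. Test file: ${testFile}`,
    testFile,
  };
  if (passingCount === total) feature.status = 'completed';
  fs.writeFileSync('features.json', JSON.stringify(doc, null, 2));
}
```
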
### Response Template

````markdown
## features.json Update

**Feature:** feat-001: User Login
**Test File:** tests/auth/login.test.js

---

### Changes Made

```json
{
  "id": "feat-001",
  "name": "User Login",
  "testCriteria": [
    "User can log in with valid credentials",
    "Invalid credentials show error message",
    "Session persists across page refresh",
    "User can log out successfully"
  ],
  "status": "completed",  // Changed from "in-progress"
  "adoption": {
    "percentageComplete": 100,  // Changed from 75
    "lastUpdated": "2025-12-15T14:30:00Z",
    "notes": "All 4 test criteria passing. Test file: tests/auth/login.test.js",
    "testFile": "tests/auth/login.test.js"  // Added
  }
}
```

### Test Results Summary

| Criterion | Test | Status |
|-----------|------|--------|
| User can log in with valid credentials | test('User can log in...') | Passing |
| Invalid credentials show error message | test('Invalid credentials...') | Passing |
| Session persists across page refresh | test('Session persists...') | Passing |
| User can log out successfully | test('User can log out...') | Passing |

All criteria covered. Feature marked as complete.
````

---

## Framework-Specific Examples

### Jest (JavaScript/TypeScript)

```javascript
// Pattern: Describe/It with expect matchers
describe('Feature Name', () => {
  beforeEach(() => { /* setup */ });
  afterEach(() => { /* cleanup */ });

  it('should do something', () => {
    expect(result).toBe(expected);
    expect(array).toContain(item);
    expect(fn).toThrow(Error);
  });
});
```

### Vitest (Vue/Vite)

```javascript
// Same API as Jest, but faster
import { describe, it, expect, beforeEach } from 'vitest';

describe('Feature Name', () => {
  it('should do something', () => {
    expect(result).toBe(expected);
  });
});
```

### PyTest (Python)

```python
# Pattern: Functions with assert
import pytest

@pytest.fixture
def setup_data():
    return {"key": "value"}

def test_feature_behavior(setup_data):
    assert result == expected

def test_raises_error():
    with pytest.raises(ValueError):
        function_that_raises()
```

### Go testing

```go
// Pattern: TestXxx functions with t.Run subtests
func TestFeatureName(t *testing.T) {
    t.Run("should do something", func(t *testing.T) {
        if result != expected {
            t.Errorf("got %v, want %v", result, expected)
        }
    })
}
```

### RSpec (Ruby)

```ruby
# Pattern: Describe/It with expect matchers
RSpec.describe FeatureName do
  before { setup_code }

  it 'should do something' do
    expect(result).to eq(expected)
    expect { action }.to raise_error(SomeError)
  end
end
```

## Edge Cases & Troubleshooting

### Edge Case 1: No Testing Framework Detected

**Scenario:** Project has no test dependencies

**Response:**
````markdown

**No Testing Framework Detected**

**Project:** my-app
**Package Manager:** npm

**Issue:** No testing framework found in package.json

**Recommendations:**

**For React/Vue/JavaScript:**
```bash
npm install --save-dev jest @testing-library/react @testing-library/jest-dom
```

**For Node.js API:**

```bash
npm install --save-dev jest supertest
```

**Configuration needed:**
1. Add to package.json: `"scripts": { "test": "jest" }`
2. Create jest.config.js (optional)
3. Create tests/ directory

After setup, re-run test generation.
````

### Edge Case 2: Test Criteria Too Vague

**Scenario:** Feature has vague test criteria

**Response:**
```markdown
**Vague Test Criteria Detected**

**Feature:** feat-005: User Dashboard
**Criterion:** "Dashboard works correctly"

**Issue:** Criterion is too vague to generate meaningful tests.

**Suggested improvements:**
- "Dashboard displays user's recent orders (last 5)"
- "Dashboard shows account balance"
- "Dashboard loads within 2 seconds"
- "Dashboard handles empty state (new user)"

**Why this matters:**
- Vague criteria → vague tests → false confidence
- Specific criteria → specific tests → real coverage

**Recommendation:** Update features.json with specific, testable criteria.
```

### Edge Case 3: Existing Tests Don't Match Criteria

**Scenario:** Tests exist but don't map to features.json

**Response:**
```markdown
**Test-Criteria Mismatch**

**Feature:** feat-007: Shopping Cart
**Test File:** tests/cart.test.js

**Issue:** Existing tests don't match features.json criteria

**Existing tests:**
- test('cart renders correctly')
- test('cart state updates')
- test('cart handles errors')

**features.json criteria:**
- "User can add items to cart"
- "User can remove items from cart"
- "Cart total updates correctly"
- "Cart persists across sessions"

**Gap Analysis:**
- 0/4 criteria have matching tests
- Existing tests are implementation-focused, not behavior-focused

**Options:**
1. **Rename tests** to match criteria (if behavior is same)
2. **Add new tests** for missing criteria (recommended)
3. **Update features.json** to match existing tests

**Recommendation:** Add behavior-focused tests alongside existing tests.
```

## Integration with Other Skills

### Integration with Coder Agent

```
Coder Agent: "Completed feat-007 implementation. Running test validation..."

Test Generator activated:
- Reading features.json for feat-007
- Checking for existing tests

Results:
- 4 test criteria defined
- 2 tests found
- 2 tests missing

Test Generator: "Would you like me to generate the 2 missing tests?
1. 'Cart total updates correctly'
2. 'Cart persists across sessions'"

User: "Yes"

Test Generator: "Generated tests/cart/cart.test.js with 4 tests.
All tests passing. Updating features.json..."
```

### Integration with Security Scanner

Test Generator: "Generated security-focused tests based on Security Scanner findings:"

**SQL Injection Test:**
```javascript
test('prevents SQL injection in search', async () => {
  const maliciousInput = "'; DROP TABLE products; --";
  const response = await request(app)
    .get('/api/search')
    .query({ q: maliciousInput });

  expect(response.status).toBe(200);
  expect(db.query).toHaveBeenCalledWith(
    expect.stringContaining('?'),
    expect.arrayContaining([maliciousInput])
  );
});
```

**XSS Prevention Test:**

```javascript
test('escapes user input in display', () => {
  const maliciousInput = '<script>alert("xss")</script>';
  render(<ProductName name={maliciousInput} />);

  expect(screen.getByText(maliciousInput)).toBeInTheDocument();
  expect(document.querySelector('script')).toBeNull();
});
```

---

## Skill Metadata

**Token Efficiency:**
- Test generation: ~600 tokens per feature (vs ~2,000 manual)
- Coverage analysis: ~400 tokens (vs ~1,500 manual)
- Test validation: ~300 tokens (vs ~1,000 manual)
- **Average savings: ~70%**

**Use Cases:**
1. **New feature:** Generate tests from features.json criteria
2. **Coverage check:** Ensure all criteria have tests
3. **Quality audit:** Validate existing test quality
4. **TDD workflow:** Generate test scaffolding before implementation

**Complementary Skills:**
- **Security Scanner:** Generate security-focused tests
- **Standards Enforcer:** Apply test coding standards
- **Quality Orchestrator:** Include in quality gates

**Confidence Philosophy** (sketched below):
- Test matches criterion exactly → 0.95+ confidence
- Test partially covers criterion → 0.70-0.94 confidence
- Test name similar but different → 0.40-0.69 confidence
- No matching test found → 0.0 confidence
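
A minimal sketch of how these bands might be computed, assuming token overlap as the similarity metric (the actual scoring method is not specified here):

```javascript
// Sketch: band criterion-to-test-name similarity into the levels above.
function testConfidence(criterion, testName) {
  const tokens = s => new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
  const c = tokens(criterion);
  const t = tokens(testName);
  const overlap = [...c].filter(w => t.has(w)).length / (c.size || 1);
  if (overlap >= 0.95) return 0.95;   // matches criterion exactly
  if (overlap >= 0.4) return overlap; // partial coverage or similar name
  return 0.0;                         // no matching test
}
```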

---

## Quick Reference

### Common Commands

**Generate tests for a feature:**

```
"Generate tests for feat-001"
```

**Check coverage:**

```
"What's my test coverage for features.json?"
```

**Validate test quality:**

```
"Check test quality for login tests"
```

**Update features.json:**

```
"Update features.json with test results"
```

## Test Criteria Best Practices

| Good Criteria | Bad Criteria |
|---------------|--------------|
| User can log in with valid credentials | Login works |
| Invalid credentials show error message | Handle errors |
| Cart total updates when items added | Cart updates |
| Order status changes to 'shipped' | Process order |

**Key:** Specific, observable, testable.

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.