Install this specific skill from the multi-skill repository:

```shell
npx skills add xun404/dify-agent-skill-plugin --skill "testing-helper"
```
# Description
Generates unit tests, integration tests, and test strategies. Use for test creation, mocking, and coverage improvement.
# SKILL.md
```yaml
name: testing-helper
description: Generates unit tests, integration tests, and test strategies. Use for test creation, mocking, and coverage improvement.
triggers:
  - test
  - unit test
  - integration test
  - mock
  - coverage
  - pytest
  - jest
  - junit
  - testing
  - test case
  - assertion
priority: 8
category: testing
```
## Testing Helper Skill

A specialized skill for creating comprehensive and maintainable tests.

### Core Capabilities

#### 1. Unit Test Generation
When creating unit tests:
- Test one thing at a time (single responsibility)
- Follow Arrange-Act-Assert (AAA) pattern
- Use descriptive test names that explain intent
- Cover happy path and edge cases
- Test boundary conditions
- Include negative test cases
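As a minimal sketch of these guidelines, the hypothetical `clamp` function below is tested with the Arrange-Act-Assert pattern, boundary values, and a negative case. It uses only the standard library's `unittest` so it runs anywhere; with pytest, `pytest.raises` would replace `assertRaises`.

```python
import unittest

def clamp(value, low, high):
    """Hypothetical function under test: pin value into [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(high, value))

class TestClamp(unittest.TestCase):
    def test_returns_value_when_within_bounds(self):
        # Arrange
        value, low, high = 5, 0, 10
        # Act
        result = clamp(value, low, high)
        # Assert
        self.assertEqual(result, 5)

    def test_handles_boundary_values(self):
        self.assertEqual(clamp(0, 0, 10), 0)    # lower boundary stays
        self.assertEqual(clamp(10, 0, 10), 10)  # upper boundary stays
        self.assertEqual(clamp(-1, 0, 10), 0)   # below range is raised to low

    def test_rejects_inverted_bounds(self):
        # Negative case: invalid input should raise, not silently succeed
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)
```

Run with `python -m unittest` from the directory containing the test file.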
#### 2. Integration Test Creation
When creating integration tests:
- Test component interactions
- Set up and tear down test environments
- Handle external dependencies appropriately
- Test realistic scenarios
- Consider data consistency
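One sketch of the set-up/tear-down cycle: a hypothetical `UserRepository` exercised against a real (in-memory) SQLite database, stdlib only, so each test gets a fresh, isolated environment.

```python
import sqlite3
import unittest

class UserRepository:
    """Hypothetical repository under test; talks to a real database."""
    def __init__(self, conn):
        self.conn = conn

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

class TestUserRepositoryIntegration(unittest.TestCase):
    def setUp(self):
        # Set up: a fresh in-memory database per test keeps tests isolated
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        self.repo = UserRepository(self.conn)

    def tearDown(self):
        # Tear down: release the connection even if the test failed
        self.conn.close()

    def test_added_user_can_be_found(self):
        user_id = self.repo.add("Ada")
        self.assertEqual(self.repo.find(user_id), "Ada")

    def test_missing_user_returns_none(self):
        self.assertIsNone(self.repo.find(999))
```

Swapping the in-memory database for a throwaway test instance of the production database turns this into a heavier but more realistic integration test.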
#### 3. Mocking & Stubbing
When creating mocks:
- Mock external dependencies, not the SUT
- Use appropriate mocking library for the language
- Verify mock interactions when relevant
- Keep mocks simple and focused
- Document mock behavior
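For example, with the standard library's `unittest.mock`, a hypothetical `PaymentService` can be tested by mocking the gateway it depends on (not the service itself) and verifying the interaction:

```python
from unittest.mock import Mock

class PaymentService:
    """Hypothetical system under test; it orchestrates an external gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.submit(amount)

# Mock the external dependency (the gateway), not the SUT itself
gateway = Mock()
gateway.submit.return_value = {"status": "ok"}

service = PaymentService(gateway)
result = service.charge(25)

# Verify the interaction when it matters: right method, right args, called once
gateway.submit.assert_called_once_with(25)
assert result == {"status": "ok"}
```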
#### 4. Test Coverage Analysis
When improving coverage:
- Identify untested code paths
- Prioritize critical functionality
- Suggest meaningful tests, not just coverage numbers
- Consider mutation testing for quality assessment
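A small illustration of a commonly missed path: in this hypothetical `parse_port` function, the error branch is the kind of line a coverage report flags, and the second test is what closes the gap.

```python
def parse_port(text):
    """Hypothetical function under test."""
    port = int(text)            # happy path: usually exercised
    if not 0 < port < 65536:    # error branch: easy to leave untested
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_happy_path():
    assert parse_port("8080") == 8080

def test_parse_port_rejects_out_of_range():
    # This test closes the coverage gap by exercising the raise branch
    try:
        parse_port("70000")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range port")
```

With the pytest-cov plugin, `pytest --cov=your_module --cov-report=term-missing` lists the line numbers of branches like this that no test reaches.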
### Testing Patterns by Language

#### Python (pytest)
```python
import pytest
from unittest.mock import Mock

# Assumes UserService and ValidationError are importable from the code under test.

class TestUserService:
    """Tests for UserService class."""

    @pytest.fixture
    def user_service(self):
        """Create a UserService with mocked dependencies."""
        db = Mock()
        return UserService(db)

    def test_create_user_with_valid_data_returns_user(self, user_service):
        """Test that creating a user with valid data succeeds."""
        # Arrange
        user_data = {"name": "John", "email": "john@example.com"}
        # Act
        result = user_service.create_user(user_data)
        # Assert
        assert result.name == "John"
        assert result.email == "john@example.com"

    def test_create_user_with_invalid_email_raises_error(self, user_service):
        """Test that invalid email raises ValidationError."""
        # Arrange
        user_data = {"name": "John", "email": "invalid"}
        # Act & Assert
        with pytest.raises(ValidationError):
            user_service.create_user(user_data)

    @pytest.mark.parametrize("email", [
        "",
        "no-at-sign",
        "@no-local.com",
        "no-domain@",
    ])
    def test_create_user_rejects_invalid_emails(self, user_service, email):
        """Test various invalid email formats are rejected."""
        user_data = {"name": "John", "email": email}
        with pytest.raises(ValidationError):
            user_service.create_user(user_data)
```
#### JavaScript (Jest)
```javascript
describe('UserService', () => {
  let userService;
  let mockDb;

  beforeEach(() => {
    mockDb = {
      save: jest.fn(),
      find: jest.fn(),
    };
    userService = new UserService(mockDb);
  });

  describe('createUser', () => {
    it('should create user with valid data', async () => {
      // Arrange
      const userData = { name: 'John', email: 'john@example.com' };
      mockDb.save.mockResolvedValue({ id: 1, ...userData });
      // Act
      const result = await userService.createUser(userData);
      // Assert
      expect(result.name).toBe('John');
      expect(mockDb.save).toHaveBeenCalledWith(userData);
    });

    it('should throw error for invalid email', async () => {
      // Arrange
      const userData = { name: 'John', email: 'invalid' };
      // Act & Assert
      await expect(userService.createUser(userData))
        .rejects.toThrow('Invalid email');
    });
  });
});
```
### Test Naming Conventions

Use one of these patterns for test names:

- **Should/When pattern**
  - `should_return_user_when_valid_id_provided`
  - `should_throw_error_when_email_is_invalid`
- **Given/When/Then pattern**
  - `given_valid_user_when_save_then_returns_id`
  - `given_duplicate_email_when_create_then_throws`
- **Descriptive pattern**
  - `test_create_user_with_valid_data_succeeds`
  - `test_login_with_wrong_password_fails`
### Best Practices

- **Fast execution**: Unit tests should run in milliseconds
- **Isolation**: Tests should not depend on each other
- **Repeatability**: Tests should produce the same results on every run
- **Self-validating**: Tests should have a clear pass/fail outcome
- **Timely**: Write tests before or alongside the code
- **Meaningful names**: Test names should describe the scenario
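Repeatability often hinges on controlling hidden inputs such as randomness or time. One sketch, using a hypothetical function that accepts an explicit seed instead of relying on global random state:

```python
import random

def pick_discount_code(codes, seed=None):
    """Hypothetical function with randomness injected via an explicit seed."""
    rng = random.Random(seed)  # local RNG: no hidden global state to leak
    return rng.choice(codes)

def test_pick_discount_code_is_repeatable():
    codes = ["SAVE5", "SAVE10", "SAVE20"]
    # Same seed in, same choice out, on every run and every machine
    assert pick_discount_code(codes, seed=42) == pick_discount_code(codes, seed=42)
    assert pick_discount_code(codes, seed=42) in codes
```

The same injection idea applies to clocks and IDs: pass them in as parameters so tests can pin them to fixed values.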
### Output Format

When generating tests, provide:

```markdown
## Test Strategy
[Brief explanation of testing approach]

## Test Cases
[List of test scenarios to cover]

## Generated Tests
[Actual test code]

## Running Instructions
[How to run the tests]

## Coverage Notes
[What's covered and any gaps]
```
# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents. Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.