Use when selecting coder archetypes, validating evidence requirements, scoring issue quality with the curator rubric, or validating bash scripts.
npx skills add Nice-Wolf-Studio/wolf-skills-marketplace --skill "wolf-scripts-core"
Install specific skill from multi-skill repository
# Description
Core automation scripts for archetype selection, evidence validation, quality scoring, and safe bash execution
# SKILL.md
name: wolf-scripts-core
description: Core automation scripts for archetype selection, evidence validation, quality scoring, and safe bash execution
version: 1.1.0
category: automation
triggers:
- archetype selection
- evidence validation
- quality scoring
- curator rubric
- bash validation
dependencies:
- wolf-archetypes
- wolf-governance
size: medium
Wolf Scripts Core
Core automation patterns that power Wolf's behavioral adaptation system. These scripts represent battle-tested logic from 50+ phases of development.
Overview
This skill captures the essential automation patterns that run on nearly every Wolf operation:
- Archetype Selection - Automatically determine behavioral profile based on issue/PR characteristics
- Evidence Validation - Validate archetype-specific evidence requirements with conflict resolution
- Curator Rubric - Score issue quality using reproducible 1-10 scoring system
- Bash Validation - Safe bash execution with pattern checking and GitHub CLI validation
🎯 Archetype Selection Pattern
Purpose
Analyzes GitHub issues to determine the appropriate coder agent archetype based on labels, keywords, and patterns.
Available Archetypes
{
'product-implementer': {
keywords: ['feature', 'implement', 'add', 'create', 'build', 'develop', 'functionality'],
patterns: ['user story', 'as a user', 'acceptance criteria', 'business logic']
},
'reliability-fixer': {
keywords: ['bug', 'fix', 'error', 'crash', 'fail', 'broken', 'issue'],
patterns: ['steps to reproduce', 'error message', 'stack trace', 'regression']
},
'security-hardener': {
keywords: ['security', 'vulnerability', 'exploit', 'auth', 'permission', 'access'],
patterns: ['cve', 'security scan', 'penetration', 'authentication', 'authorization']
},
'perf-optimizer': {
keywords: ['performance', 'slow', 'optimize', 'speed', 'memory', 'cpu'],
patterns: ['benchmark', 'profiling', 'latency', 'throughput', 'bottleneck']
},
'research-prototyper': {
keywords: ['research', 'prototype', 'experiment', 'proof of concept', 'explore'],
patterns: ['investigation', 'feasibility', 'spike', 'technical debt', 'architecture']
}
}
Scoring Algorithm
- Keyword Matching: score += matches.length * 2 (keywords weighted highly)
- Pattern Matching: score += matches.length * 3 (patterns weighted even higher)
- Label Matching: Labels are included in the full-text search
- Threshold Check: Requires a minimum confidence score to select an archetype
- Fallback: If no archetype reaches the threshold, defaults to product-implementer
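The Usage Pattern below calls scoreArchetypes; here is a minimal sketch of that function, assuming the archetype definitions above are available as an ARCHETYPES object (the name is illustrative, not the script's actual export):
// Sketch only - mirrors the scoring rules above, not the exact script internals
function scoreArchetypes(title, body, labels) {
  const text = `${title} ${body} ${labels.join(' ')}`.toLowerCase();
  const scores = {};
  for (const [name, def] of Object.entries(ARCHETYPES)) { // ARCHETYPES: the object above
    const keywordMatches = def.keywords.filter((kw) => text.includes(kw));
    const patternMatches = def.patterns.filter((p) => text.includes(p));
    scores[name] = keywordMatches.length * 2 + patternMatches.length * 3;
  }
  return scores;
}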
Usage Pattern
// Score all archetypes
const scores = scoreArchetypes(title, body, labels);
// Find highest score
const winner = Object.entries(scores)
  .sort((a, b) => b[1] - a[1])[0];
// Confidence check (winner's share of the total score)
const totalScore = Object.values(scores).reduce((sum, s) => sum + s, 0);
const confidence = (scores[winner[0]] / totalScore) * 100;
if (confidence < CONFIDENCE_THRESHOLD) {
  // Use fallback or request human review
}
When to Use
- Starting new work items
- Issue triage and routing
- Determining agent behavioral profile
- Validating archetype assignments
Script Location: /agents/shared/scripts/select-archetype.mjs
📋 Evidence Validation Pattern
Purpose
Validates evidence requirements from multiple sources (archetypes, lenses) with priority-based conflict resolution.
Priority Levels
const PRIORITY_LEVELS = {
DEFAULT: 0, // Basic requirements
LENS: 1, // Overlay lens requirements
ARCHETYPE: 2, // Core archetype requirements
OVERRIDE: 3 // Explicit overrides
};
Conflict Resolution Strategies
- Priority-Based: Higher priority wins
- Merge Strategy: Same priority → union of requirements
- Conflict Tracking: All conflicts logged for audit
- Resolution Recording: Documents why each resolution was made
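A short illustration of how these strategies interact, using the priority levels above (the requirement values here are made up for the example; the class sketch appears further down):
const validator = new EvidenceValidator();
// A lens asks for 30 tests; the archetype asks for 50.
validator.addRequirements('performance-lens', { minTests: 30 }, PRIORITY_LEVELS.LENS);
validator.addRequirements('perf-optimizer', { minTests: 50 }, PRIORITY_LEVELS.ARCHETYPE);
// ARCHETYPE (2) outranks LENS (1), so minTests resolves to 50;
// the conflict and its resolution are both recorded for audit.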
Schema Versions (Backward Compatibility)
- V1 (1.0.0): Original format
- V2 (2.0.0): Added priority field
- V3 (3.0.0): Added conflict resolution
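A hedged sketch of how older records might be upgraded to the latest schema; the field names here are assumptions based on the version notes, not the script's actual schema:
// Sketch: normalize V1/V2 requirement records to the V3 shape
function normalizeRequirement(record) {
  const normalized = { ...record };
  // V2 (2.0.0) introduced the priority field; default older records to DEFAULT
  if (normalized.priority === undefined) {
    normalized.priority = PRIORITY_LEVELS.DEFAULT;
  }
  // V3 (3.0.0) introduced conflict-resolution metadata
  if (normalized.resolutions === undefined) {
    normalized.resolutions = [];
  }
  normalized.schemaVersion = '3.0.0';
  return normalized;
}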
Usage Pattern
class EvidenceValidator {
constructor() {
this.requirements = new Map();
this.conflicts = [];
this.resolutions = [];
}
// Add requirements from source
addRequirements(source, requirements, priority) {
// Detect conflicts
// Resolve based on priority or merge
// Track resolution decision
}
// Merge two requirement values
mergeRequirements(existing, incoming) {
// Arrays: union
// Objects: deep merge
// Numbers: max
// Booleans: logical OR
// Strings: concatenate with separator
}
// Validate against requirements
validate(evidence) {
// Check all requirements met
// Return validation report
}
}
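The merge rules in the skeleton can be made concrete. This is a sketch under the assumption that requirement values are plain JSON types; the real script may differ:
// Sketch of the merge rules listed above (applied on same-priority conflicts)
function mergeRequirements(existing, incoming) {
  if (Array.isArray(existing) && Array.isArray(incoming)) {
    return [...new Set([...existing, ...incoming])]; // arrays: union
  }
  if (isPlainObject(existing) && isPlainObject(incoming)) {
    const merged = { ...existing }; // objects: deep merge
    for (const [key, value] of Object.entries(incoming)) {
      merged[key] = key in merged ? mergeRequirements(merged[key], value) : value;
    }
    return merged;
  }
  if (typeof existing === 'number' && typeof incoming === 'number') {
    return Math.max(existing, incoming); // numbers: max (stricter requirement wins)
  }
  if (typeof existing === 'boolean' && typeof incoming === 'boolean') {
    return existing || incoming; // booleans: logical OR
  }
  return `${existing}; ${incoming}`; // strings (or mixed types): concatenate with separator
}

function isPlainObject(value) {
  return typeof value === 'object' && value !== null && !Array.isArray(value);
}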
When to Use
- Before creating PRs (ensure evidence collected)
- During PR review (validate evidence submitted)
- When combining archetype + lens requirements
- Resolving conflicts between requirement sources
Script Location: /agents/shared/scripts/evidence-validator.mjs
📊 Curator Rubric Pattern
Purpose
Reproducible 1-10 scoring system for issue quality using weighted rubric across 5 categories.
Rubric Categories (100 points total)
1. Problem Definition (25 points)
- Problem Stated (5pts): Clear problem statement exists
- User Impact (5pts): User/system impact described
- Root Cause (5pts): Root cause identified or investigated
- Constraints (5pts): Constraints and limitations noted
- Success Metrics (5pts): Success criteria measurable
2. Acceptance Criteria (25 points)
- Testable (8pts): AC is specific and testable
- Complete (7pts): Covers happy and edge cases
- Prioritized (5pts): Must/should/nice clearly separated
- Given/When/Then (5pts): Uses Given/When/Then format
3. Technical Completeness (20 points)
- Dependencies (5pts): Dependencies identified
- Risks (5pts): Technical risks assessed
- Architecture (5pts): Architecture approach defined
- Performance (5pts): Performance considerations noted
4. Documentation Quality (15 points)
- Context (5pts): Sufficient context provided
- References (5pts): Links to related issues/docs
- Examples (5pts): Examples or mockups included
5. Process Compliance (15 points)
- Labels (5pts): Appropriate labels applied
- Estimates (5pts): Effort estimated (S/M/L/XL)
- Priority (5pts): Priority clearly indicated
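The rubric above can be encoded as plain data for reproducible scoring; a sketch (structure and key names are illustrative):
// Sketch: rubric weights from the five categories above (sums to 100 points)
const RUBRIC_WEIGHTS = {
  problemDefinition: { problemStated: 5, userImpact: 5, rootCause: 5, constraints: 5, successMetrics: 5 },
  acceptanceCriteria: { testable: 8, complete: 7, prioritized: 5, givenWhenThen: 5 },
  technicalCompleteness: { dependencies: 5, risks: 5, architecture: 5, performance: 5 },
  documentationQuality: { context: 5, references: 5, examples: 5 },
  processCompliance: { labels: 5, estimates: 5, priority: 5 }
};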
Score Conversion (100 → 10 scale)
function convertTo10Scale(rawScore) {
return Math.round(rawScore / 10);
}
// Score ranges:
// 90-100 → 10 (Exceptional)
// 80-89 → 8-9 (Excellent)
// 70-79 → 7 (Good)
// 60-69 → 6 (Acceptable)
// 50-59 → 5 (Needs improvement)
// <50 → 1-4 (Poor)
Usage Pattern
const rubric = new CuratorRubric();
// Score an issue
const score = rubric.scoreIssue(issueNumber);
// Get detailed breakdown
const breakdown = rubric.getScoreBreakdown(issueNumber);
// Post score as comment
rubric.postScoreComment(issueNumber, score, breakdown);
// Track score history
rubric.saveScoreHistory();
When to Use
- During intake curation
- Before moving issues to pm-ready
- Quality gate enforcement
- Identifying patterns of good/poor curation
Script Location: /agents/shared/scripts/curator-rubric.mjs
🛡️ Bash Validation Pattern
Purpose
Safe bash execution with syntax checking, pattern validation, and GitHub CLI validation.
Validation Layers
- Shellcheck: Syntax and best practices validation
- Pattern Checking: Custom rules for common anti-patterns
- GitHub CLI: Validate gh command usage
- Dry Run: Test commands before execution
Configuration
const config = {
shellcheck: {
enabled: true,
format: 'json',
severity: ['error', 'warning', 'info']
},
patterns: {
enabled: true,
customRules: 'bash-patterns.json'
},
github: {
validateCLI: true,
dryRun: true
},
output: {
format: 'detailed', // or 'json', 'summary'
exitOnError: true
}
};
Command Line Options
# Validate single file
bash-validator.mjs --file script.sh
# Validate directory
bash-validator.mjs --directory ./scripts
# Validate staged files (pre-commit)
bash-validator.mjs --staged
# Validate workflows
bash-validator.mjs --workflows
# Comprehensive scan
bash-validator.mjs --comprehensive
# Update pattern rules
bash-validator.mjs --update-patterns
Severity Levels
- error: Critical issues that will cause failures
- warning: Potential issues, best practice violations
- info: Suggestions for improvement
Usage Pattern
class BashValidator {
constructor(options) {
this.options = { ...config, ...options };
this.results = {
files: [],
summary: { total: 0, passed: 0, failed: 0 }
};
}
// Validate file
validateFile(filepath) {
// Run shellcheck
// Check custom patterns
// Validate GitHub CLI usage
// Return validation report
}
// Aggregate results
getSummary() {
// Return summary statistics
}
}
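A minimal sketch of the shellcheck layer inside validateFile, assuming shellcheck is installed and on PATH (the JSON fields used are from shellcheck's standard --format=json output):
import { execFileSync } from 'node:child_process';

// Sketch only: run shellcheck and reduce its findings to a pass/fail report
function runShellcheck(filepath) {
  try {
    execFileSync('shellcheck', ['--format=json', filepath]);
    return { file: filepath, passed: true, findings: [] };
  } catch (err) {
    // shellcheck exits non-zero when it finds issues; findings arrive on stdout
    const findings = JSON.parse(err.stdout.toString());
    const hasErrors = findings.some((f) => f.level === 'error');
    return { file: filepath, passed: !hasErrors, findings };
  }
}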
Common Anti-Patterns Detected
- Unquoted variables
- Missing error handling
- Unsafe command substitution
- Race conditions in pipelines
- Missing shellcheck directives
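Custom rules for these anti-patterns could live in bash-patterns.json; this shape is hypothetical (the real rule schema ships with the script):
{
  "rules": [
    {
      "id": "unsafe-rm",
      "pattern": "rm -rf \\$",
      "severity": "error",
      "message": "Unquoted variable in rm -rf can delete unintended paths"
    },
    {
      "id": "backtick-substitution",
      "pattern": "`[^`]*`",
      "severity": "warning",
      "message": "Prefer $(...) over backticks for command substitution"
    }
  ]
}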
When to Use
- Pre-commit hooks for bash scripts
- CI/CD validation of shell scripts
- Workflow validation
- Before executing user-provided bash
Script Location: /agents/shared/scripts/bash-validator.mjs
Integration Patterns
Combining Scripts in Workflows
Example 1: Issue Intake Pipeline
// 1. Score issue quality
const qualityScore = curatorRubric.scoreIssue(issueNumber);
// 2. If quality sufficient, select archetype
if (qualityScore >= 6) {
const archetype = selectArchetype(issueNumber);
// 3. Load archetype evidence requirements
const requirements = loadArchetypeRequirements(archetype);
// 4. Validate requirements
const validator = new EvidenceValidator();
validator.addRequirements(archetype, requirements, PRIORITY_LEVELS.ARCHETYPE);
}
Example 2: PR Validation Pipeline
// 1. Get archetype from PR labels
const archetype = extractArchetypeFromLabels(prLabels);
// 2. Load evidence requirements (archetype + lenses)
const validator = new EvidenceValidator();
validator.addRequirements(archetype, archetypeRequirements, PRIORITY_LEVELS.ARCHETYPE);
// Apply lenses if present
if (hasPerformanceLens) {
validator.addRequirements('performance-lens', perfRequirements, PRIORITY_LEVELS.LENS);
}
// 3. Validate evidence submitted
const validationReport = validator.validate(prEvidence);
// 4. Block merge if validation fails
if (!validationReport.passed) {
postComment(prNumber, validationReport.message);
setStatus('failure');
}
Example 3: Safe Script Execution
// 1. Validate bash script
const bashValidator = new BashValidator();
const validationResult = bashValidator.validateFile(scriptPath);
// 2. Only execute if validation passed
if (validationResult.passed) {
execSync(scriptPath);
} else {
console.error('Validation failed:', validationResult.errors);
process.exit(1);
}
Related Skills
- wolf-archetypes: Archetype definitions and registry
- wolf-lenses: Lens overlay requirements
- wolf-governance: Governance policies and quality gates
- wolf-scripts-agents: Agent coordination scripts (orchestration, execution)
File Locations
All core scripts are in /agents/shared/scripts/:
- select-archetype.mjs - Archetype selection logic
- evidence-validator.mjs - Evidence validation with conflict resolution
- curator-rubric.mjs - Issue quality scoring
- bash-validator.mjs - Safe bash execution validation
Best Practices
Archetype Selection
- ✅ Always check confidence score before auto-assignment
- ✅ Log archetype selection rationale for audit
- ✅ Fall back to human review if low confidence
- ❌ Don't skip label analysis
- ❌ Don't ignore pattern matching
Evidence Validation
- ✅ Always track conflict resolutions
- ✅ Document why conflicts were resolved specific ways
- ✅ Validate backward compatibility when updating schemas
- ❌ Don't silently drop conflicting requirements
- ❌ Don't ignore priority levels
Curator Rubric
- ✅ Provide detailed score breakdown with comments
- ✅ Track score history for trend analysis
- ✅ Use consistent scoring criteria
- ❌ Don't auto-approve low-scoring issues
- ❌ Don't skip any rubric categories
Bash Validation
- ✅ Always validate before execution
- ✅ Use dry-run mode for testing
- ✅ Check comprehensive mode for critical scripts
- ❌ Don't bypass validation for "simple" scripts
- ❌ Don't ignore warnings in production code
Red Flags - STOP
If you catch yourself thinking:
- β "Skipping automated checks to save time" - STOP. Automation exists because manual checks fail. Scripts catch what humans miss. Use the automation.
- β "Manual validation is good enough" - NO. Manual validation is inconsistent and error-prone. Scripts provide reproducible validation every time.
- β "Scripts are just helpers, not requirements" - Wrong. These scripts encode battle-tested logic from 50+ phases. They ARE requirements.
- β "I can select archetypes manually faster" - False. Manual selection misses patterns and lacks confidence scoring. Use
select-archetype.mjs. - β "Evidence validation can wait until PR review" - FORBIDDEN. Waiting until PR review wastes reviewer time. Validate BEFORE creating PR.
- β "Curator rubric scoring is optional" - NO. Quality gates depend on rubric scores. All issues must be scored before pm-ready.
STOP. Use the appropriate automation script BEFORE proceeding.
After Using This Skill
REQUIRED NEXT STEPS:
Integration with Wolf skill chain
- RECOMMENDED SKILL: Use wolf-archetypes to understand archetype definitions
  - Why: Scripts automate archetype selection. Understanding archetypes ensures correct interpretation of results.
  - When: After using select-archetype.mjs, to understand the selected archetype's requirements
  - Tool: Use Skill tool to load wolf-archetypes
- RECOMMENDED SKILL: Use wolf-governance to understand quality gates
  - Why: Scripts enforce governance. Understanding gates ensures compliance.
  - When: After using curator-rubric.mjs or evidence-validator.mjs
  - Tool: Use Skill tool to load wolf-governance
- DURING WORK: Scripts provide continuous automation
  - Scripts are called throughout the workflow (intake, validation, execution)
  - No single "next skill" - scripts integrate into existing chains
  - Use scripts at appropriate workflow stages
Verification Checklist
Before claiming script-based automation complete:
- [ ] Used appropriate automation script for task (archetype selection, evidence validation, rubric scoring, or bash validation)
- [ ] Validated confidence scores before proceeding (for archetype selection, require >70% confidence)
- [ ] Documented script execution and results in journal or PR description
- [ ] Evidence requirements tracked and validated (for evidence-validator usage)
- [ ] No validation warnings ignored (all errors and warnings addressed)
Can't check all boxes? Automation incomplete. Return to this skill.
Good/Bad Examples: Script Usage
Example 1: Archetype Selection
Issue #456: Add rate limiting to API endpoints
Labels: feature, performance
Script Execution:
$ node select-archetype.mjs --issue 456
Results:
product-implementer: 45% (keywords: add, feature)
perf-optimizer: 72% (keywords: performance, rate limiting; patterns: throughput)
Selected: perf-optimizer (confidence: 72%)
✅ Confidence above threshold (70%)
Agent Action:
✅ Accepted perf-optimizer archetype
✅ Loaded perf-optimizer evidence requirements (benchmarks, profiling, performance tests)
✅ Documented selection rationale in journal
✅ Proceeded with performance-focused implementation
Why this is correct:
- Used automation script instead of manual guess
- Validated confidence score before accepting
- Selected archetype matches work characteristics (performance focus)
- Evidence requirements automatically loaded
Issue #457: Fix login button
Labels: bug
Manual Selection: "It's obviously a bug fix, so reliability-fixer"
Problems:
❌ Skipped automation script
❌ No confidence scoring
❌ Missed that issue title/description mention "button doesn't display" (could be CSS issue = maintainability-refactorer)
❌ No evidence requirements loaded
❌ No documentation of selection rationale
What Should Have Been Done:
$ node select-archetype.mjs --issue 457
Results:
reliability-fixer: 38% (keywords: fix, bug)
maintainability-refactorer: 54% (patterns: display issue, CSS)
Selected: maintainability-refactorer (confidence: 54%)
⚠️ Low confidence - recommend human review
Outcome: Agent would have identified this as UI/styling issue, not logic bug.
Example 2: Evidence Validation Workflow
PR #789: Optimize database query performance
Archetype: perf-optimizer
Lenses: observability
Script Execution:
$ node evidence-validator.mjs --archetype perf-optimizer --lenses observability
Loading requirements...
✅ Archetype requirements (priority: 2): benchmarks, profiling, performance tests
✅ Observability lens (priority: 1): metrics, monitoring, alerting
Validating evidence...
✅ Benchmarks provided: before/after query times
✅ Profiling data: flame graph showing bottleneck
✅ Performance tests: 47 tests passing, 15% latency improvement
✅ Metrics: added query_duration_ms metric
✅ Monitoring: added query performance dashboard
✅ Alerting: added slow query alert (>500ms)
All requirements met ✅
Agent Action:
✅ Evidence validator ran before PR creation
✅ All requirements from archetype + lens validated
✅ PR included complete evidence package
✅ Reviewer approved without requesting additional evidence
Why this is correct:
- Automated validation caught all requirements
- Combined archetype + lens requirements properly
- Evidence complete before PR review
- No wasted reviewer time
PR #790: Add caching layer
Archetype: perf-optimizer
Lenses: security
Manual Check: "I added benchmarks, should be good"
Problems:
❌ Skipped evidence-validator script
❌ Didn't realize security lens adds requirements (threat model, security scan)
❌ Missing security evidence for caching layer
❌ Missing several perf-optimizer requirements (profiling, comprehensive tests)
PR Review:
❌ Reviewer requested: threat model for cache poisoning
❌ Reviewer requested: security scan for cache key vulnerabilities
❌ Reviewer requested: comprehensive performance tests
❌ Reviewer requested: cache eviction profiling
Outcome: 3 review cycles, 2 weeks delay, demoralized team
What Should Have Been Done:
$ node evidence-validator.mjs --archetype perf-optimizer --lenses security
❌ Validation failed:
Missing: Threat model (security lens requirement)
Missing: Security scan (security lens requirement)
Missing: Profiling data (archetype requirement)
Missing: Cache eviction tests (archetype requirement)
Provided: Basic benchmarks only
Evidence incomplete. Address missing requirements before PR creation.
If script had been used: All requirements identified upfront, evidence collected before PR, single review cycle.
Last Updated: 2025-11-14
Phase: Superpowers Skill-Chaining Enhancement v2.0.0
Maintainer: Wolf Automation Team
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.