# viktor-silakov/ai-ready

# Install this skill:

```shell
npx skills add viktor-silakov/ai-ready --skill "ai-ready"
```

Installs this specific skill from a multi-skill repository.

# Description

Analyzes repositories for AI agent development efficiency. Scores 8 aspects (documentation, architecture, testing, type safety, agent instructions, file structure, context optimization, security) with ASCII dashboards. Use when evaluating AI-readiness, preparing codebases for Claude Code, or improving repository structure for AI-assisted development.

# SKILL.md


---
name: ai-ready
description: Analyzes repositories for AI agent development efficiency. Scores 8 aspects (documentation, architecture, testing, type safety, agent instructions, file structure, context optimization, security) with ASCII dashboards. Use when evaluating AI-readiness, preparing codebases for Claude Code, or improving repository structure for AI-assisted development.
user-invocable: true
argument-hint: [path-to-repo]
---


# AI-Readiness Analysis

Evaluate repository readiness for AI-assisted development across 8 weighted aspects.

## Workflow Checklist

Copy and track progress:

AI-Readiness Analysis Progress:
- [ ] Step 1: Discover repository
- [ ] Step 2: Gather user context (Q1-Q4)
- [ ] Step 3: Analyze 8 aspects
- [ ] Step 4: Calculate scores and grade
- [ ] Step 5: Display ASCII dashboard
- [ ] Step 6: Present issues by severity
- [ ] Step 7: Priority survey (Q5-Q9)
- [ ] Step 8: Enter plan mode
- [ ] Step 9: Create phased roadmap
- [ ] Step 10: Generate templates
- [ ] Step 11: Save reports to .aiready/ (confirm HTML generation)
- [ ] Step 12: Ask to open HTML report

## Step 1: Repository Discovery

Target: `{argument}` if provided, otherwise the current working directory.

Discover:
1. Language/Framework: Check package.json, Cargo.toml, go.mod, pyproject.toml
2. History: Check .aiready/history/index.json for delta tracking
3. Agent files: CLAUDE.md, AGENTS.md, .cursorrules, copilot-instructions.md
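The checks above can be sketched as follows. This is an illustrative sketch, not part of the skill: the manifest-to-language mapping and the `discover` helper name are assumptions.

```python
from pathlib import Path

# Manifest files and the ecosystems they imply (illustrative mapping)
MANIFESTS = {
    "package.json": "JavaScript/TypeScript",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "pyproject.toml": "Python",
}

AGENT_FILES = ["CLAUDE.md", "AGENTS.md", ".cursorrules", "copilot-instructions.md"]

def discover(repo: Path) -> dict:
    """Collect the three discovery facts: language, history, agent files."""
    return {
        "language": next(
            (lang for f, lang in MANIFESTS.items() if (repo / f).exists()),
            "unknown",
        ),
        "has_history": (repo / ".aiready/history/index.json").exists(),
        "agent_files": [f for f in AGENT_FILES if (repo / f).exists()],
    }
```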


## Step 2: Context Gathering

Use AskUserQuestion with these 4 questions:

| Q | Question | Options |
|----|----------|---------|
| Q1 | Rework depth? | Quick Wins / Medium / Deep Refactor |
| Q2 | Timeline? | Urgent / Planned / Strategic / Continuous |
| Q3 | Team size? | Solo / Small (2-5) / Large (5+) / Open Source |
| Q4 | AI tools used? | Claude Code / Copilot / Cursor / Windsurf / Aider (multiselect) |

Store responses for Steps 6 and 11.


## Step 3: Analyze 8 Aspects

Score each criterion 0, 5, or 10. See criteria/aspects.md for the full rubrics.

| Aspect | Weight | Criteria |
|--------|--------|----------|
| Documentation | 15% | 19 |
| Architecture | 15% | 18 |
| Testing | 12% | 23 |
| Type Safety | 12% | 10 |
| Agent Instructions | 15% | 25 |
| File Structure | 10% | 13 |
| Context Optimization | 11% | 20 |
| Security | 10% | 12 |

## Step 4: Calculate Scores

```
Aspect Score = (Sum of criteria / Max points) × 100

Overall = (Doc × 0.15) + (Arch × 0.15) + (Test × 0.12) + (Type × 0.12)
        + (Agent × 0.15) + (File × 0.10) + (Context × 0.11) + (Security × 0.10)
```

| Grade | Range |
|-------|--------|
| A | 90-100 |
| B | 75-89 |
| C | 60-74 |
| D | 45-59 |
| F | 0-44 |
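The calculation above can be sketched in a few lines. The weights and grade cut-offs come from the tables above; the function names and the max-points convention (criteria count × 10, since each criterion scores 0, 5, or 10) are illustrative assumptions.

```python
# Weights from the aspect table; they must sum to 1.0
WEIGHTS = {
    "documentation": 0.15, "architecture": 0.15, "testing": 0.12,
    "type_safety": 0.12, "agent_instructions": 0.15, "file_structure": 0.10,
    "context_optimization": 0.11, "security": 0.10,
}

def aspect_score(criteria_points: list[int], max_points: int) -> float:
    """Normalize summed criterion scores (each 0/5/10) to 0-100."""
    return sum(criteria_points) / max_points * 100

def overall(scores: dict[str, float]) -> float:
    """Weighted sum of the eight aspect scores."""
    return sum(scores[aspect] * w for aspect, w in WEIGHTS.items())

def grade(score: float) -> str:
    """Map a 0-100 score to the letter grades in the table above."""
    for letter, floor in [("A", 90), ("B", 75), ("C", 60), ("D", 45)]:
        if score >= floor:
            return letter
    return "F"
```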

## Step 5: Display Dashboard

```
╔══════════════════════════════════════════════════════════════════════════════╗
║                          AI-READINESS REPORT                                 ║
║  Repository: {name} | Language: {lang} | Framework: {fw}                     ║
╠══════════════════════════════════════════════════════════════════════════════╣
║  OVERALL GRADE: {X}     SCORE: {XX}/100     {delta}                          ║
╠══════════════════════════════════════════════════════════════════════════════╣
║  1. Documentation       {bar} {score}/100 {delta}                            ║
║  2. Architecture        {bar} {score}/100 {delta}                            ║
║  3. Testing             {bar} {score}/100 {delta}                            ║
║  4. Type Safety         {bar} {score}/100 {delta}                            ║
║  5. Agent Instructions  {bar} {score}/100 {delta}                            ║
║  6. File Structure      {bar} {score}/100 {delta}                            ║
║  7. Context Optimization{bar} {score}/100 {delta}                            ║
║  8. Security            {bar} {score}/100 {delta}                            ║
╚══════════════════════════════════════════════════════════════════════════════╝
```

Progress bars: `████████░░` = 80/100 (█ filled, ░ empty, 10 chars total)

Deltas: ↑+N improvement | ↓-N decline | →0 unchanged | (new) first run

Issue Summary Block:

```
╔══════════════════════════════════════════════════════════════════════════════╗
║                          ISSUE SUMMARY                                       ║
╠══════════════════════════════════════════════════════════════════════════════╣
║   🔴 CRITICAL     {bar}  {N}                                                 ║
║   🟡 WARNING      {bar}  {N}                                                 ║
║   🔵 INFO         {bar}  {N}                                                 ║
║   Distribution by Aspect: (sorted by issue count)                            ║
╚══════════════════════════════════════════════════════════════════════════════╝
```

If history exists, show Progress Over Time chart with trend analysis.


## Step 6: Present Issues

Group by severity, then aspect. See reference/severity.md for classification.

```
🔴 CRITICAL ({N})
──────────────────────────────────────────────────────────────────────
[C1] {Aspect}: {Issue}
     Impact: {description}
     Effort: Low/Medium/High

🟡 WARNING ({N})
──────────────────────────────────────────────────────────────────────
[W1] {Aspect}: {Issue}
     Impact: {description}
```

## Step 7: Priority Survey

Use AskUserQuestion for prioritization:

| Q | Question | Purpose |
|----|----------|---------|
| Q5 | Priority areas (top 3)? | Focus recommendations |
| Q6 | Critical issue order? | Prioritize fixes |
| Q7 | Which warnings to fix? | Scope work |
| Q8 | Constraints? | Legacy code, compliance, CI/CD |
| Q9 | Success metrics? | Target grade, zero critical |

Filter by rework depth from Q1:
- Quick Wins → Phase 1 only
- Medium → Phases 1-2
- Deep → All phases


## Step 8: Enter Plan Mode

After survey, use EnterPlanMode tool.


## Step 9: Phased Roadmap

| Phase | Focus | Examples |
|-------|-------|----------|
| 1: Quick Wins | File creation, config | CLAUDE.md, .aiignore, llms.txt |
| 2: Foundation | Structural changes | ARCHITECTURE.md, file splitting, types |
| 3: Advanced | Deep improvements | Coverage >80%, ADRs, architecture enforcement |

## Step 10: Generate Templates

For the selected issues, generate files from the templates listed in the Quick Reference.


## Step 11: Save Reports

Before writing the HTML file, always ask the user:

```
AskUserQuestion:
  Question: "Generate HTML report now?"
  Options: ["Yes, generate HTML", "No, skip HTML"]
```

If "Yes", create the HTML report. If "No", skip HTML but still write Markdown/JSON.

Save to .aiready/history/reports/ with timestamp:

```
.aiready/
├── config.json              # User preferences
├── history/
│   ├── index.json           # Report index for delta tracking
│   └── reports/
│       ├── {YYYY-MM-DD}_{HHMMSS}.md
│       ├── {YYYY-MM-DD}_{HHMMSS}.html
│       └── {YYYY-MM-DD}_{HHMMSS}.json
```

- Markdown report: Scores, issues, recommendations, user context
- HTML dashboard: See templates/report.html
- JSON data: Raw scores for delta tracking

Update index.json with new report entry and trend analysis.

## Step 12: Open Report

If the HTML report was generated and saved, immediately ask:

```
AskUserQuestion:
  Question: "Open HTML report in browser?"
  Options: ["Yes, open report", "No, skip"]
```

If HTML was skipped, do not prompt to open. If yes, run:

```shell
open .aiready/history/reports/{timestamp}.html
```

(`open` is macOS-specific; on Linux, use `xdg-open` instead.)

## Validation Loop

After each major step, verify:

  1. After analysis: All 8 aspects scored?
  2. After issues: Severity correctly classified?
  3. After survey: User selections captured?
  4. After templates: Files properly generated?
  5. After save: Reports written to .aiready/?

If validation fails, return to the failed step.


## Quick Reference

| File | Content |
|------|---------|
| criteria/aspects.md | Full scoring rubrics for all 8 aspects |
| reference/severity.md | Issue severity classification |
| templates/CLAUDE.md.template | Agent instructions template |
| templates/ARCHITECTURE.md.template | Architecture doc template |
| templates/report.html | HTML dashboard template |
| examples/ | Example reports |

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
