# alistaircroll/verbose-deployment
# Install this skill

```shell
npx skills add alistaircroll/verbose-deployment --skill "verbose-deployment"
```

This installs the `verbose-deployment` skill from its multi-skill repository.

# Description

Full CI/CD pipeline — runs project inventory, dependency checks, tests, build, security, deployment, production verification, and generates an HTML report with history tracking. Use when deploying, running CI/CD, or verifying a full deployment chain.

# SKILL.md


---
name: verbose-deployment
description: "Full CI/CD pipeline — runs project inventory, dependency checks, tests, build, security, deployment, production verification, and generates an HTML report with history tracking. Use when deploying, running CI/CD, or verifying a full deployment chain."
---

## Verbose Deployment

Run the complete deployment pipeline from project inventory through production verification. Collects diagnostics, timestamps, file sizes, and error counts at each phase. Produces a multi-page HTML report with a history timeline and opens it in the browser.

Announce at start: "Running verbose deployment for {project name}." — derive the project name from package.json name, Cargo.toml [package] name, pyproject.toml [project] name, the repo directory name, or CLAUDE.md title. This name appears in the nav bar and title of the HTML report.
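The name lookup can be sketched as a portable shell function. The fallback order follows the list above; the `sed` extraction is illustrative, not part of the skill itself:

```shell
# Hypothetical sketch: derive the project name, trying each source in the
# order listed above and falling back to the repo directory name.
# (The CLAUDE.md title fallback is omitted for brevity.)
project_name() {
  if [ -f package.json ]; then
    # First "name" key in package.json (avoids requiring jq).
    sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' package.json | head -n 1
  elif [ -f Cargo.toml ]; then
    # First top-level name = "..." key, normally under [package].
    sed -n 's/^name[[:space:]]*=[[:space:]]*"\([^"]*\)".*/\1/p' Cargo.toml | head -n 1
  elif [ -f pyproject.toml ]; then
    # First top-level name = "..." key, normally under [project].
    sed -n 's/^name[[:space:]]*=[[:space:]]*"\([^"]*\)".*/\1/p' pyproject.toml | head -n 1
  else
    basename "$(pwd)"
  fi
}

echo "Running verbose deployment for $(project_name)."
```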

### Phases

Execute these skills in order. Each phase collects metrics. Stop on any CRITICAL failure and fix before proceeding.

| # | Skill | Purpose |
|---|-------|---------|
| 1 | project-inventory | Baseline: files, LOC, recency, git state |
| 2 | dependencies | Upgrade packages, fix vulnerabilities, apply migrations |
| 3 | unit-tests | Run unit test suite |
| 4 | build | Production build + type checking |
| 5 | e2e-tests | Integration/E2E tests against emulators or local services |
| 6 | security-review | Secrets scan, audit, pre-commit hook |
| 7 | push-to-remote | Git push to deployment remote |
| 8 | verify-deployment | Poll hosting platform until READY |
| 9 | production-verification | Browser automation against live production |
| 10 | deployment-report | Save JSON + generate HTML report with history |

Read each sub-skill for its detection logic, collection instructions, and stop conditions.
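The phase ordering and stop-on-CRITICAL rule amount to a small driver loop. A minimal sketch, where `run_skill` is a hypothetical stand-in for invoking a sub-skill (here it just echoes, so the sketch is runnable):

```shell
# Hypothetical sketch of the phase loop. The real skill invokes each
# sub-skill and collects metrics; run_skill below is a placeholder.
run_skill() {
  echo "running $1..."
}

PHASES="project-inventory dependencies unit-tests build e2e-tests \
security-review push-to-remote verify-deployment production-verification \
deployment-report"

failed=""
for phase in $PHASES; do
  if ! run_skill "$phase"; then
    failed="$phase"
    break   # stop on any CRITICAL failure; fix, then restart from Phase 1
  fi
done

[ -z "$failed" ] && echo "all phases passed"
```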

### Principles

  1. Fix before proceeding. If any phase uncovers a problem — failing test, security vulnerability with an available fix, broken UI — fix it and restart from Phase 1. The point of testing is to catch problems; skipping past them defeats the purpose.
  2. No retry loops. If a command fails twice, stop and diagnose the root cause. Don't run the same test 5 times hoping for a different outcome.
  3. Clean slate on restart. Kill zombie processes, verify ports are free. If code changed, restart from Phase 1. If only infrastructure failed (dev server crash, port conflict) with no code changes, kill processes and re-run just the failed phase.
  4. Evidence at every step. Collect metrics, timestamps, and error counts. The HTML report IS the evidence.
  5. Adapt to context. Not every project has Vercel, Firebase, or Playwright. Detect what's available and skip phases that don't apply, noting them as N/A in the report.
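Principle 2 can be enforced mechanically with a small wrapper; this is a sketch, and `try_twice` is a made-up helper name:

```shell
# Hypothetical sketch: run a command, retry at most once, then stop so the
# root cause can be diagnosed instead of looping on the same failure.
try_twice() {
  "$@" && return 0
  echo "attempt 1 failed, retrying once: $*" >&2
  "$@" && return 0
  echo "failed twice - stopping to diagnose: $*" >&2
  return 1
}
```

Used as `try_twice npm test`, the wrapper surfaces a persistent failure after exactly two attempts rather than masking flakiness with repeated runs.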

### Self-Improvement

This skill is a living document. Every pipeline run teaches something. After each run, evaluate whether any skill file needs updating:

When to update:
- A phase produced a false positive or false negative. Add validation to prevent recurrence.
- A shell command failed due to platform differences. Replace with a portable alternative and update shell-portability.md.
- A new test type, deployment target, or verification step was discovered. Add it to the relevant sub-skill.
- A bypass or mock masked a real failure. Add a validation step and document it in lessons-learned.md.
- The production smoke test didn't catch a bug a real user would encounter. Expand the production-verification skill.

How to update: Edit the relevant skill file directly. Add new steps, refine existing ones, fix commands. The goal is that each run is more thorough than the last.

See lessons-learned.md for cross-phase patterns and shell-portability.md for macOS/Linux command notes.

### The Verification Chain

The pipeline verifies the entire path from source files to validated human usability:

  • Unit tests verify logic in isolation.
  • The build verifies the code compiles and type-checks.
  • E2E tests verify full user flows against local services or emulators.
  • Security review verifies no secrets or vulnerabilities ship.
  • Production smoke tests verify the deployed app works as a real user would experience it.
  • Every mock, stub, emulator, or test shortcut is a place where the test can diverge from reality. Each bypass must be validated by a higher-level test that exercises the real thing. When you discover a bypass that allowed a bug to ship, add a test that covers the real path and update the relevant skill.
  • Use browser automation for production verification. Playwright, Cypress, or MCP browser tools should interact with the production deployment the way a human would: clicking buttons, filling forms, following links, scrolling, uploading and downloading files, completing multi-step workflows, and reading rendered output. If the app is a game, play it. If it's a dashboard, navigate all pages. If it renders icons or images, verify they render visually — don't just check that a CSS class or alt-text string is present in the DOM.

### Report Storage

Reports are saved to a project-appropriate location. The deployment-report skill handles file naming, JSON companion files, history loading, and retention pruning. See that skill for details.

Default location: e2e/results/, reports/, or test-results/ — whichever exists in the project. If none exist, create reports/.

Retention: Keep the 50 most recent JSON and HTML files. Prune automatically during report generation.
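The retention rule is simple enough to sketch. This assumes the default `reports/` location and filenames whose modification times reflect run order; JSON companions would be pruned the same way:

```shell
# Hypothetical retention sketch: keep only the 50 newest HTML reports.
prune_reports() {
  # ls -t sorts newest first; everything from line 51 onward is deleted.
  ls -1t reports/*.html 2>/dev/null | tail -n +51 | while read -r old; do
    rm -f "$old"
  done
}

mkdir -p reports
prune_reports
```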

### Iron Rules

  1. Fix issues the pipeline uncovers, then restart. Never label a fixable problem as "non-blocking."
  2. ALL phases must pass before declaring success. A partial pass is a fail.
  3. Never push broken code. Phases 3-6 are the local gate.
  4. Never skip the production build. It catches type errors that test runners miss.
  5. Never declare "deployed" without platform confirmation.
  6. Never retry more than twice. After two failures, diagnose root cause.
  7. Clean slate on every restart. Kill processes, verify ports, run from Phase 1.
  8. The report is the evidence. Every phase generates data. The HTML report captures it all.
  9. Name the project. The report title, nav bar, and browser tab must identify which project this ran for.
  10. Verify what real users experience. A test that checks string content but not rendered UI is incomplete.
  11. Improve these skills after every run. If a command broke or a bypass masked a bug, update the relevant skill file.
  12. The report must say what the pipeline changed. The Overview page must list dependency upgrades, migrations, security fixes, test fixes, and any code changes the pipeline made.
  13. Verify rendered output, not just HTTP status. A 200 response is necessary but not sufficient. Verify that fonts, icons, stylesheets, and images actually render.
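Part of rule 13 can be mechanized even without a browser: extract every `href`/`src` from a fetched page so each asset can be requested and checked individually, rather than trusting the page's own 200. A minimal sketch (the page content here is a made-up example standing in for a downloaded production page):

```shell
# Stand-in for a page saved from the production deployment.
cat > page.html <<'EOF'
<link rel="stylesheet" href="/app.css">
<script src="/app.js"></script>
<img src="/logo.png" alt="logo">
EOF

# List every URL the page references; each should then be fetched and
# verified (status and content type), not assumed to load.
extract_assets() {
  grep -Eo '(href|src)="[^"]+"' "$1" | cut -d'"' -f2
}

extract_assets page.html
```

This catches a missing stylesheet or broken image path that a bare status check on the page would never see; visual rendering still needs the browser-automation pass described above.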

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.