Install a specific skill from a multi-skill repository:

```shell
npx skills add tejasgadhia/tg-claude-skills --skill "tg-review"
```
# Description
Use when at major project milestones, before sharing/deploying, or when user says "review this", "check my work", "is this ready", or similar quality gate requests. Orchestrates specialized review skills and produces scored comprehensive report.
# SKILL.md

```yaml
---
name: tg-review
description: Use when at major project milestones, before sharing/deploying, or when user says "review this", "check my work", "is this ready", or similar quality gate requests. Orchestrates specialized review skills and produces scored comprehensive report.
---
```
# Milestone Quality Review - Smart Orchestrator

## Overview
Intelligent review orchestrator that analyzes project type, invokes relevant specialized skills automatically, aggregates outputs into scored report, and generates prioritized improvements. Designed for Tejas's workflow: catch issues before deployment, ensure production quality.
## The Iron Law

**ALWAYS use specialized skills, never manual review. Orchestrate automatically.**

- User says "quick review"? Specialized skills are faster and more thorough.
- User says "just check X"? Run the full review, prioritize X in the output.
- Only one aspect needs review? Still run the full orchestration, then filter the results.
## When to Use
```dot
digraph when_to_use {
  "At milestone/completion?" [shape=diamond];
  "Before public sharing?" [shape=diamond];
  "Use tg-review skill" [shape=box];
  "Not a quality gate" [shape=box];

  "At milestone/completion?" -> "Use tg-review skill" [label="yes"];
  "At milestone/completion?" -> "Before public sharing?" [label="no"];
  "Before public sharing?" -> "Use tg-review skill" [label="yes"];
  "Before public sharing?" -> "Not a quality gate" [label="no"];
}
```
Trigger phrases:
- "Review this project"
- "Is this ready to deploy?"
- "Check my work"
- "What needs improvement?"
- "Production-ready?"
- Before creating PR or sharing publicly
## The Workflow

### Step 1: Analyze Project Type
Examine project files to determine what specialized skills are relevant:
```
IF project has HTML/CSS/UI files:
  → INVOKE: ui-design:accessibility-expert
  → CHECK: Dark mode implementation, WCAG compliance, keyboard nav

IF project handles user input OR has security implications:
  → INVOKE: security-scanning:security-auditor
  → CHECK: XSS, injection vulnerabilities, secrets exposure

IF project has 5+ code files OR complex architecture:
  → INVOKE: comprehensive-review:architect-review
  → CHECK: Architecture quality, scalability, maintainability

IF project has substantial code (not just HTML):
  → INVOKE: comprehensive-review:code-reviewer
  → CHECK: Code quality, performance, error handling

ALWAYS:
  → REVIEW: Documentation (README, CLAUDE.md)
  → REVIEW: Tejas Standards compliance
  → REVIEW: Deployment readiness
```
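The branching above can be sketched as a simple scan of the project's file list. This is an illustrative approximation, not part of the skill itself: `detect_skills` is a hypothetical helper, and the "handles user input" condition is approximated here by the presence of any code files.

```python
from pathlib import Path

UI_EXTS = {".html", ".css"}
CODE_EXTS = {".js", ".ts", ".py", ".go", ".rb"}

def detect_skills(filenames):
    """Map a project's file list to the specialized skills to invoke."""
    exts = [Path(f).suffix.lower() for f in filenames]
    skills = set()
    if any(e in UI_EXTS for e in exts):
        skills.add("ui-design:accessibility-expert")
    code_exts = [e for e in exts if e in CODE_EXTS]
    if code_exts:
        # Approximation: any real code gets a quality pass and, since it
        # may handle user input, a security scan too.
        skills.add("comprehensive-review:code-reviewer")
        skills.add("security-scanning:security-auditor")
    if len(code_exts) >= 5:
        skills.add("comprehensive-review:architect-review")
    return skills
```

The manual reviews under ALWAYS run unconditionally, so they need no detection logic.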
### Step 2: Invoke Specialized Skills (Parallel)

Use the Task tool to invoke the relevant skills in parallel for speed:
#### Example Orchestration
For a flight tracker with index.html, styles.css, and app.js:

```
[Invoke in parallel]:
- Task(ui-design:accessibility-expert) → audit UI
- Task(security-scanning:security-auditor) → scan code
- Task(comprehensive-review:code-reviewer) → review code quality

[Wait for all to complete, then aggregate]
```
**Important:** Each skill produces detailed output. Capture ALL outputs for aggregation.
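The fan-out/aggregate pattern behind Step 2 is a parallel map. A minimal sketch in Python, where `run_skill` stands in for a single Task tool invocation (the Task tool itself is agent-specific and not modeled here):

```python
from concurrent.futures import ThreadPoolExecutor

def run_skill(skill_name):
    # Placeholder for one Task tool invocation; in the real skill this
    # blocks until the specialized reviewer returns its findings.
    return {"skill": skill_name, "findings": []}

def orchestrate(skill_names):
    """Invoke every relevant skill concurrently and capture ALL outputs."""
    with ThreadPoolExecutor() as pool:
        # map() preserves input order, keeping aggregation deterministic.
        return list(pool.map(run_skill, skill_names))

reports = orchestrate([
    "ui-design:accessibility-expert",
    "security-scanning:security-auditor",
    "comprehensive-review:code-reviewer",
])
```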
### Step 3: Aggregate Outputs into a Scored Report

Synthesize the specialized skill outputs into a unified report:

#### Report Structure
```markdown
# Project Review: [Project Name]
Generated: [Date]

## Overall Score: [0-100]/100 [✅ Excellent | ✓ Good | ⚠️ Needs Improvement | ❌ Critical Issues]

---

## Section Scores

### Documentation: [0-100]/100 [Status Icon]
**Sources**: Manual review of README.md, CLAUDE.md, inline docs
- ✅ [What's good]
- ⚠️ [What needs improvement]
- ❌ [Critical issues]
**Score Rationale**: [Why this score]

---

### Code Quality: [0-100]/100 [Status Icon]
**Sources**: comprehensive-review:code-reviewer output
- ✅ [Strengths from code-reviewer]
- ⚠️ [Issues identified]
- ❌ [Critical problems]
**Score Rationale**: [Based on code-reviewer findings]

---

### Design & UX: [0-100]/100 [Status Icon]
**Sources**: ui-design:accessibility-expert output + manual design review
- ✅ [Accessibility compliance]
- ⚠️ [Design issues - dark mode, info density, etc.]
- ❌ [Critical UX problems]
**Score Rationale**: [Based on a11y-expert + Tejas standards]

---

### Security: [0-100]/100 [Status Icon]
**Sources**: security-scanning:security-auditor output
- ✅ [Security strengths]
- ⚠️ [Vulnerabilities found]
- ❌ [Critical security issues]
**Score Rationale**: [Based on security-auditor findings]

---

### Functionality: [0-100]/100 [Status Icon]
**Sources**: Manual testing + code review insights
- ✅ [Core features working]
- ⚠️ [Edge cases, missing features]
- ❌ [Broken functionality]
**Score Rationale**: [Based on testing]

---

### Deployment: [0-100]/100 [Status Icon]
**Sources**: Manual check of GitHub Pages, git status, demo link
- ✅ [Deployment strengths]
- ⚠️ [Deployment issues]
- ❌ [Critical deployment problems]
**Score Rationale**: [Based on deployment check]

---

### Tejas Standards Compliance: [0-100]/100 [Status Icon]
**Sources**: Manual check against Tejas's non-negotiables
- ✅ 100% client-side: [Yes/No]
- ✅ Vanilla JS (or justified alternative): [Yes/No]
- ✅ No unnecessary build tools: [Yes/No]
- ✅ Privacy-first: [Yes/No]
- ✅ No emojis in code/docs: [Yes/No]
- ✅ Information density (no hidden content): [Yes/No]
- ✅ No vertical scrollbars on landing: [Yes/No]
- ✅ Dark mode toggle: [Yes/No]
- ✅ Accessibility (WCAG AA): [Yes/No]
**Score Rationale**: [Compliance assessment]

---

## Priority Improvements

### HIGH PRIORITY (Must Fix Before Sharing)
[Numbered list of critical issues with file:line references]

1. **[Issue Title]** (Section: [Code/Design/Security])
   - File: `file.ext` lines X-Y
   - Issue: [Specific problem]
   - Fix: [Specific solution]
   - Impact: [Why this is HIGH priority]

### MEDIUM PRIORITY (Should Fix Soon)
[Issues that improve quality but aren't blockers]

### LOW PRIORITY (Nice to Have)
[Polish and enhancements]

---

## Before Sharing Publicly
Checklist:
- [ ] Fix all HIGH priority issues
- [ ] Test in both light and dark modes
- [ ] Verify demo link works in incognito window
- [ ] Take screenshots for README
- [ ] Run accessibility audit again after fixes

---

## Next Session Prompt
Copy this for your next session:
> "Continue working on [project-name]. Milestone review completed - focus on HIGH priority improvements: [list top 3 issues with file:line refs]. Review report saved at REVIEW-REPORT-[date].md."
```
### Scoring Guidelines

- **90-100: Excellent** - Production ready, minor polish only
- **75-89: Good** - Solid quality, some improvements needed
- **60-74: Needs Improvement** - Multiple issues to address
- **0-59: Critical Issues** - Significant problems, not ready
Overall Score Calculation:
- Documentation: 10%
- Code Quality: 25%
- Design & UX: 20%
- Security: 20%
- Functionality: 15%
- Deployment: 5%
- Tejas Standards: 5%
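The weights and bands above combine into a small scoring helper. The weights and label names are copied from the lists above; the function names are illustrative. Truncating the weighted sum to an integer reproduces the 82/100 of the flight tracker example later in this document.

```python
# Section weights from the Overall Score Calculation above (sum to 1.0).
WEIGHTS = {
    "Documentation": 0.10,
    "Code Quality": 0.25,
    "Design & UX": 0.20,
    "Security": 0.20,
    "Functionality": 0.15,
    "Deployment": 0.05,
    "Tejas Standards": 0.05,
}

def overall_score(section_scores):
    """Weighted average of 0-100 section scores, truncated to an int."""
    return int(sum(section_scores[name] * w for name, w in WEIGHTS.items()))

def label(score):
    """Map a 0-100 score to the bands in the Scoring Guidelines."""
    if score >= 90:
        return "Excellent"
    if score >= 75:
        return "Good"
    if score >= 60:
        return "Needs Improvement"
    return "Critical Issues"
```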
### Step 4: Save Report & Generate Next-Session Prompt

Save the report as `REVIEW-REPORT-[YYYY-MM-DD].md` in the project directory.

Generate a next-session prompt that includes:
- Project name
- Top 3 HIGH priority issues
- File:line references for quick navigation
- Reference to review report location
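Step 4 is mostly string templating. A minimal sketch of both artifacts, assuming the filename and prompt formats shown above (`report_filename` and `next_session_prompt` are illustrative names, not part of any real API):

```python
from datetime import date

def report_filename(on=None):
    """REVIEW-REPORT-[YYYY-MM-DD].md, dated today by default."""
    return f"REVIEW-REPORT-{(on or date.today()).isoformat()}.md"

def next_session_prompt(project, top_issues, on=None):
    """Build the copy/paste prompt from the top 3 HIGH priority issues."""
    issues = "; ".join(top_issues[:3])
    return (
        f'"Continue working on {project}. Milestone review completed - '
        f"focus on HIGH priority improvements: {issues}. "
        f'Review report saved at {report_filename(on)}."'
    )
```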
## Common Rationalizations (STOP SIGNS)
| Excuse | Reality | What to Do |
|---|---|---|
| "User wants quick review" | Specialized skills ARE quick (run in parallel) | Invoke skills, don't manual review |
| "Only checking one thing" | Other issues might exist | Full review, filter output after |
| "Manual review is faster" | Manual review misses issues | Always use specialized skills |
| "Skills might be overkill" | Skills find issues humans miss | Invoke them anyway |
| "I'll just read the code" | Code reading != systematic review | Use code-reviewer skill |
| "Accessibility seems fine" | Visual check != a11y audit | Use accessibility-expert skill |
## Red Flags - STOP and Use Specialized Skills
If you're thinking ANY of these, STOP and orchestrate skills:
- "Let me read through the code..."
- "I'll check for security issues..."
- "Looks accessible to me..."
- "The design seems good..."
ALL of these mean: Use specialized skills instead of manual review.
## Success Criteria

- ✅ **Skills orchestrated** - Relevant specialized skills invoked automatically
- ✅ **Parallel execution** - Skills run concurrently when possible
- ✅ **Comprehensive aggregation** - All skill outputs synthesized into the report
- ✅ **Scored sections** - Each section has a 0-100 score with rationale
- ✅ **Prioritized improvements** - HIGH/MEDIUM/LOW with file:line refs
- ✅ **Report saved** - REVIEW-REPORT-[date].md in the project directory
- ✅ **Next-session prompt** - Ready to copy/paste for continuity
## Example: Good Orchestration

### User Request

> "I finished my flight tracker MVP. Review it before I share?"

### Orchestrator Response
Let me run a comprehensive quality review using specialized skills...
## Analyzing Project
I found:
- `index.html`, `styles.css`, `app.js` → UI component ✓
- User input handling (flight form) → Security check ✓
- 3 code files → Architecture review ✓
- Dark mode implementation → Accessibility audit ✓
## Invoking Specialized Skills (Parallel)
[Starting 4 concurrent reviews...]
1. ui-design:accessibility-expert → auditing WCAG compliance
2. security-scanning:security-auditor → scanning for vulnerabilities
3. comprehensive-review:code-reviewer → reviewing code quality
4. Manual checks → documentation, deployment, Tejas standards
[All reviews complete in 2 minutes...]
## Aggregating Results
[Synthesizes all outputs into scored report...]
# Project Review: Flight Tracker MVP
Generated: 2026-01-23
## Overall Score: 82/100 ✓ Good
---
## Section Scores
### Documentation: 85/100 ✓ Good
**Sources**: Manual review
- ✅ README.md exists with clear description
- ✅ Live demo link functional
- ⚠️ CLAUDE.md missing deployment commands
- ⚠️ No inline help tooltips in UI
**Score Rationale**: Good documentation foundation, minor gaps in developer docs and UX help.
---
### Code Quality: 80/100 ✓ Good
**Sources**: comprehensive-review:code-reviewer
- ✅ Clean single-file architecture appropriate for scope
- ✅ Good separation of concerns (data, rendering, UI)
- ⚠️ Missing error handling in CSV parser (app.js:145)
- ⚠️ No input validation on file upload
- ✅ Performance is good, no obvious bottlenecks
**Score Rationale**: Solid code structure, needs error handling improvements.
---
### Design & UX: 72/100 ⚠️ Needs Improvement
**Sources**: ui-design:accessibility-expert + manual review
- ⚠️ Dark mode color contrast fails WCAG AA (styles.css:45-60)
- ❌ Date range filter hidden in hamburger menu (violates info density)
- ✅ Classy aesthetic, timeless design
- ⚠️ Missing ARIA labels on 3 interactive controls
- ✅ Keyboard navigation works
- ⚠️ No focus indicators on buttons
**Score Rationale**: Good design foundation, but accessibility and information density issues need addressing.
---
### Security: 90/100 ✅ Excellent
**Sources**: security-scanning:security-auditor
- ✅ No XSS vulnerabilities detected
- ✅ Client-side only, no backend exposure
- ✅ No hardcoded secrets
- ⚠️ CSV parser could be more defensive (but low risk)
- ✅ Uses textContent over innerHTML appropriately
**Score Rationale**: Strong security posture, only minor hardening suggestions.
---
### Functionality: 88/100 ✓ Good
**Sources**: Manual testing
- ✅ All core features working correctly
- ✅ Flight form submission works
- ✅ Charts render properly
- ✅ Example data helpful
- ⚠️ No error message when invalid CSV uploaded
- ✅ Edge cases handled well
**Score Rationale**: Solid functionality, minor UX improvement on error states.
---
### Deployment: 85/100 ✓ Good
**Sources**: Manual check
- ✅ GitHub Pages configured and working
- ✅ Demo link functional
- ⚠️ Load time 2.3s (could optimize chart library loading)
- ✅ All files committed
- ✅ No uncommitted changes
**Score Rationale**: Deployment works well, minor performance optimization opportunity.
---
### Tejas Standards Compliance: 89/100 ✓ Good
**Sources**: Manual check
- ✅ 100% client-side ✓
- ✅ Vanilla JS, no build tools ✓
- ✅ Privacy-first, no tracking ✓
- ✅ No emojis ✓
- ⚠️ Information density issue (hamburger menu)
- ⚠️ Dark mode has contrast issues
- ⚠️ Accessibility gaps (ARIA labels)
**Score Rationale**: Meets most standards, needs fixes for info density and accessibility.
---
## Priority Improvements
### HIGH PRIORITY (Must Fix Before Sharing)
1. **Fix Dark Mode Contrast Failures** (Design & UX)
- File: `styles.css` lines 45-60
- Issue: Text color `#666` on dark background fails WCAG AA (contrast 3.2:1, needs 4.5:1)
- Fix: Change `#666` to `#999` for body text, `#CCC` for headings
- Impact: Accessibility compliance, readability for users with visual impairments
2. **Show Filters Upfront** (Design & UX)
- File: `index.html` line 89
- Issue: Date range filter hidden in hamburger menu, violates information density principle
- Fix: Display filters above chart by default, remove hamburger menu
- Impact: Users can access all controls immediately without clicking
3. **Add Error Handling to CSV Parser** (Code Quality)
- File: `app.js` line 145
- Issue: Malformed CSV crashes app with no user feedback
- Fix: Wrap parsing in try-catch, show user-friendly error toast
- Impact: Prevents crashes, improves UX when users upload bad data
### MEDIUM PRIORITY (Should Fix Soon)
4. Add ARIA labels to form controls (Accessibility)
5. Add focus indicators to buttons (Accessibility)
6. Optimize Chart.js lazy loading (Performance)
7. Add deployment commands to CLAUDE.md (Documentation)
### LOW PRIORITY (Nice to Have)
8. Add loading spinner during CSV processing (UX polish)
9. Add tooltips for flight status icons (UX help)
---
## Before Sharing Publicly
- [ ] Fix HIGH priority issues (dark mode, filters, error handling)
- [ ] Test in dark mode with contrast checker
- [ ] Verify all controls keyboard-accessible
- [ ] Take screenshots for README
- [ ] Test with large CSV file (500+ flights)
---
## Next Session Prompt
Copy this for your next session:
> "Continue working on flight-tracker. Milestone review completed - focus on HIGH priority improvements:
> 1. Dark mode contrast (styles.css:45-60, change #666 to #999)
> 2. Show filters upfront (index.html:89, remove hamburger menu)
> 3. CSV parser error handling (app.js:145, add try-catch with user message)
>
> Review report: REVIEW-REPORT-2026-01-23.md"
---
Report saved to: `~/Claude/flight-tracker/REVIEW-REPORT-2026-01-23.md`
**Result:** The user gets a comprehensive, actionable review with clear priorities and next steps.
## Real-World Impact

**Before this skill:** Manual reviews miss issues, review depth is inconsistent, there is no systematic approach, and the user is unsure what to fix first.

**After this skill:** Specialized skills catch the issues, the scored report shows the quality level, and prioritized improvements with file:line refs are ready to fix efficiently.

**Time saved:** 2-4 hours of manual review, plus the cost of discovering issues only after deployment.

**Quality improvement:** Systematic coverage ensures nothing is missed.