Install via the skills CLI:

```bash
npx skills add pingqLIN/skill-0
```

Or install the skill directly from the repository:

```bash
npx add-skill https://github.com/pingqLIN/skill-0
```
# Description

A general-purpose classification program for parsing the internal structure of Claude Skills and MCP Tools.
# SKILL.md

## Skill-0 Tool Portal

A complete guide to the skill decomposition workflow and toolchain.

## Quick Start

### Installation

```bash
git clone https://github.com/pingqLIN/skill-0.git
cd skill-0
pip install -r requirements.txt
```

### First Run

```bash
# Index existing skills
python -m src.vector_db.search --db db/skills.db --parsed-dir data/parsed index

# Search for skills
python -m src.vector_db.search search "document processing"

# Analyze patterns
python src/tools/analyzer.py -p data/parsed -o data/analysis/report.json
```
## Tool Suite Overview

```
skill-0/src/tools/
├── analyzer.py           # Statistical analysis
├── pattern_extractor.py  # Pattern discovery
├── evaluate.py           # Coverage evaluation
└── batch_parse.py        # Batch processing

skill-0/src/vector_db/
├── embedder.py           # Embedding generation
├── vector_store.py       # SQLite-vec storage
└── search.py             # Semantic search CLI
```
## 1. Analyzer Tool

### Purpose

Generate comprehensive statistics about parsed skills.

### Usage

```bash
# Basic analysis
python src/tools/analyzer.py

# Custom paths
python src/tools/analyzer.py -p data/parsed -o data/analysis/report.json

# With text report
python src/tools/analyzer.py -t
```
### Output Structure

```json
{
  "summary": {
    "total_skills": 32,
    "total_actions": 266,
    "total_rules": 84,
    "total_directives": 120
  },
  "action_types": {
    "io_read": 124,
    "io_write": 90,
    "transform": 28,
    ...
  },
  "directive_types": {
    "completion": 45,
    "knowledge": 30,
    "principle": 20,
    ...
  },
  "skills": [ /* per-skill breakdown */ ]
}
```
### Use Cases

- Project health monitoring
- Coverage verification
- Pattern identification
- Before/after comparisons (see the sketch below)
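
A minimal sketch of a before/after comparison, assuming two analyzer reports saved with `-o` (the file names here are hypothetical; the keys match the documented output structure above):

```python
import json

# Hypothetical report paths; produce each with:
#   python src/tools/analyzer.py -p data/parsed -o data/analysis/<name>.json
with open("data/analysis/before.json") as f:
    before = json.load(f)
with open("data/analysis/after.json") as f:
    after = json.load(f)

# Diff the documented "summary" counters between the two runs
for key in ("total_skills", "total_actions", "total_rules", "total_directives"):
    old, new = before["summary"][key], after["summary"][key]
    print(f"{key}: {old} -> {new} ({new - old:+d})")
```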
## 2. Pattern Extractor

### Purpose

Discover common patterns across skills for reuse and standardization.

### Usage

```bash
# Extract patterns
python src/tools/pattern_extractor.py

# Custom output
python src/tools/pattern_extractor.py -o data/analysis/patterns.json
```
### Pattern Types

#### Action Combinations

Frequently occurring action sequences:

```json
{
  "pattern_type": "action_combination",
  "actions": ["io_read", "transform", "io_write"],
  "frequency": 15,
  "example_skills": ["docx-skill", "pdf-skill", "xlsx-skill"]
}
```

#### Directive Usage

Common directive patterns:

```json
{
  "pattern_type": "directive_usage",
  "directive_types": ["completion", "constraint"],
  "usage_context": "Document processing",
  "frequency": 8
}
```

#### Structure Patterns

Element ratio patterns:

```json
{
  "pattern_type": "structure",
  "ratio": "3:1:2",
  "elements": "actions:rules:directives",
  "category": "Data processing"
}
```
### Use Cases

- Template creation
- Best practice identification
- Duplicate detection (see the sketch below)
- Framework evolution
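
A minimal sketch of flagging near-duplicate skills from the extracted patterns, assuming the output file is a flat list of pattern objects shaped like the examples above (that structure is an assumption, not confirmed by the tool's docs):

```python
import json

# Hypothetical path; produce it with:
#   python src/tools/pattern_extractor.py -o data/analysis/patterns.json
with open("data/analysis/patterns.json") as f:
    patterns = json.load(f)  # assumed: a flat list of pattern objects

# Skills that share a frequent action combination are duplicate candidates
for p in patterns:
    if p.get("pattern_type") == "action_combination" and p["frequency"] >= 10:
        print("shared sequence:", " -> ".join(p["actions"]))
        print("  candidate skills:", ", ".join(p["example_skills"]))
```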
## 3. Evaluation Tool

### Purpose

Assess framework coverage and identify gaps.

### Usage

```bash
# Evaluate coverage
python src/tools/evaluate.py -p data/parsed

# Detailed report
python src/tools/evaluate.py -p data/parsed -o data/analysis/evaluation.json
```
### Metrics

- **Action Type Coverage**: % of action types used (computed as sketched below)
- **Directive Type Coverage**: % of directive types used
- **Completeness Score**: overall decomposition quality
- **Pattern Diversity**: variety in skill structures
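
A minimal sketch of how a coverage percentage like those above can be computed; this mirrors the reported numbers, not necessarily the tool's internal code, and the type names are only the ones mentioned in this document (the schema in reference.md defines the full set):

```python
# Action types named in this document; the schema defines the authoritative set
documented = {
    "io_read", "io_write", "transform", "external_call",
    "state_check", "llm_inference", "await_input",
}

# e.g. collected by walking data/parsed/*.json and reading each action_type
used = {"io_read", "io_write", "transform"}

coverage = len(used & documented) / len(documented) * 100
print(f"Action type coverage: {coverage:.0f}%")
```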
### Output

```json
{
  "coverage": {
    "action_types": {
      "total": 8,
      "used": 8,
      "percentage": 100
    },
    "directive_types": {
      "total": 6,
      "used": 6,
      "percentage": 100
    }
  },
  "gaps": [],
  "recommendations": [
    "Add more constraint-type directives",
    "Increase rule diversity in condition types"
  ]
}
```
## 4. Batch Parser

### Purpose

Parse multiple skills efficiently with consistent formatting.

### Usage

```bash
# Parse directory
python src/tools/batch_parse.py -i input_skills/ -o data/parsed/

# With validation
python src/tools/batch_parse.py -i input_skills/ -o data/parsed/ --validate

# Dry run
python src/tools/batch_parse.py -i input_skills/ --dry-run
```
### Input Format

Accepted formats:

- Markdown skill definitions
- Pre-formatted JSON
- Plain text descriptions (requires an LLM)

### Features

- Schema validation (see the sketch below)
- Auto-incrementing IDs
- Duplicate detection
- Parallel processing
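
A minimal standalone sketch of the kind of check the `--validate` flag implies, using the schema path shown in the Troubleshooting section (the loop itself is an assumption, not the tool's code):

```python
import json
import pathlib

import jsonschema

with open("schema/skill-decomposition.schema.json") as f:
    schema = json.load(f)

# Validate every parsed skill file in the output directory
for path in pathlib.Path("data/parsed").glob("*.json"):
    try:
        with open(path) as f:
            jsonschema.validate(json.load(f), schema)
    except jsonschema.ValidationError as err:
        print(f"{path.name}: {err.message}")
```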
## 5. Vector Search System

### Purpose

Semantic search and clustering for skill discovery.

### Setup

```bash
# One-time indexing
python -m src.vector_db.search --db db/skills.db --parsed-dir data/parsed index
```
### Commands

#### Search by Query

```bash
python -m src.vector_db.search search "creative design tools"
```

Output:

```
Searching for: creative design tools
--------------------------------------------------
1. Canvas-Design Skill (53.36%)
2. Theme Factory (46.14%)
3. Pptx Skill (45.08%)
```

#### Find Similar Skills

```bash
python -m src.vector_db.search similar "Docx Skill"
```

Output:

```
Finding skills similar to: Docx Skill
--------------------------------------------------
1. Xlsx Skill (87.23%)
2. Pdf Skill (82.14%)
3. Txt File Skill (76.89%)
```

#### Cluster Analysis

```bash
python -m src.vector_db.search cluster -n 5
```

Output:

```
Clustering 32 skills into 5 groups...
--------------------------------------------------
Cluster 1: Development Tools (10 skills)
  - MCP Server, Testing Framework, ...
Cluster 2: Document Processing (5 skills)
  - PDF Skill, DOCX Skill, ...
```

#### Statistics

```bash
python -m src.vector_db.search stats
```

Output:

```
Skill Database Statistics
--------------------------------------------------
Total Skills: 32
Indexed Skills: 32
Embedding Dimension: 384
Database Size: 1.73 MB
Last Updated: 2026-01-28
```
### Python API

```python
from src.vector_db import SemanticSearch

# Initialize
search = SemanticSearch(db_path='db/skills.db')

# Search
results = search.search("PDF processing", limit=5)
for r in results:
    print(f"{r['name']}: {r['similarity']:.2%}")

# Find similar
similar = search.find_similar("Docx Skill", limit=5)

# Cluster
clusters = search.cluster_skills(n_clusters=5)
```
## Workflow Examples

### Adding a New Skill

**Step 1: Create JSON**

```bash
cp data/parsed/template.json data/parsed/my-skill.json
# Edit my-skill.json with your decomposition
```

**Step 2: Validate**

```bash
python src/tools/analyzer.py -p data/parsed/my-skill.json
```

**Step 3: Index**

```bash
python -m src.vector_db.search index
```

**Step 4: Verify**

```bash
python -m src.vector_db.search search "my skill description"
```
### Analyzing a Skill Category

**Step 1: Filter Skills**

```bash
python -m src.vector_db.search search "document processing" > doc_skills.txt
```

**Step 2: Extract Patterns**

```bash
python src/tools/pattern_extractor.py -p data/parsed/ -o patterns_doc.json
```

**Step 3: Compare**

```bash
python src/tools/analyzer.py -p data/parsed/ -t > comparison.txt
```
### Batch Migration

**Step 1: Prepare Source**

```bash
# Organize skills in input/
ls input/
# skill1.md skill2.md skill3.json
```

**Step 2: Batch Parse**

```bash
python src/tools/batch_parse.py -i input/ -o data/parsed/ --validate
```

**Step 3: Re-index**

```bash
python -m src.vector_db.search index
```

**Step 4: Evaluate**

```bash
python src/tools/evaluate.py -p data/parsed
```
## Performance Tips

### Large Datasets

- Use `--batch-size` for batch operations
- Enable parallel processing with the `-j` flag
- Pre-filter with `--filter` patterns

### Memory Optimization

- Index incrementally for more than 100 skills
- Use `--checkpoint` for long operations
- Clear the cache between major operations

### Search Optimization

- Cache frequent queries (see the sketch below)
- Use clustering for categorization
- Limit results with `--limit`
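
A minimal sketch of query caching with the standard library, building on the Python API shown earlier (`functools.lru_cache` is a suggested mechanism here, not something the project ships):

```python
from functools import lru_cache

from src.vector_db import SemanticSearch

search = SemanticSearch(db_path='db/skills.db')

@lru_cache(maxsize=128)
def cached_search(query: str, limit: int = 5):
    # lru_cache needs a hashable return value, so freeze the result list
    return tuple(
        (r['name'], r['similarity']) for r in search.search(query, limit=limit)
    )

# Repeated identical queries are now served from the in-process cache
print(cached_search("PDF processing"))
print(cached_search("PDF processing"))  # cache hit
```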
## Common Patterns

### Document Processing Skills

- Pattern: io_read → transform → io_write
- Elements: 3-5 actions, 1-2 rules, 2-3 directives
- Directives: completion, constraint

### API Integration Skills

- Pattern: external_call → state_check → transform
- Elements: 2-4 actions, 2-3 rules, 1-2 directives
- Directives: strategy, knowledge

### Creative Tools

- Pattern: await_input → llm_inference → io_write
- Elements: 4-6 actions, 1 rule, 3-4 directives
- Directives: preference, principle
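
A minimal sketch of checking a parsed skill against one of these templates, assuming the decomposition JSON shape shown in the README's Quick Example (the matching logic and the file name are assumptions):

```python
import json

DOC_PATTERN = ["io_read", "transform", "io_write"]

def contains_in_order(sequence, pattern):
    # True if pattern occurs in sequence as an ordered
    # (not necessarily contiguous) subsequence
    it = iter(sequence)
    return all(step in it for step in pattern)

with open("data/parsed/my-skill.json") as f:  # hypothetical file
    skill = json.load(f)

action_types = [a["action_type"] for a in skill["decomposition"]["actions"]]
if contains_in_order(action_types, DOC_PATTERN):
    print("matches the document-processing pattern")
```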
## Troubleshooting

### Issue: Schema Validation Fails

```bash
# Check schema version
grep schema_version data/parsed/your-skill.json

# Validate manually
python -c "
import json, jsonschema
schema = json.load(open('schema/skill-decomposition.schema.json'))
data = json.load(open('data/parsed/your-skill.json'))
jsonschema.validate(data, schema)
"
```

### Issue: Embeddings Out of Date

```bash
# Re-index everything
python -m src.vector_db.search index --force

# Check stats
python -m src.vector_db.search stats
```

### Issue: Pattern Extraction Is Slow

```bash
# Use sampling
python src/tools/pattern_extractor.py --sample-size 20

# Parallel processing
python src/tools/pattern_extractor.py -j 4
```
## Integration Examples

### With GitHub Actions

```yaml
name: Validate Skills
on: [push]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install deps
        run: pip install -r requirements.txt
      - name: Validate
        run: python src/tools/analyzer.py -p data/parsed
```

### With a Pre-commit Hook

```bash
#!/bin/bash
# .git/hooks/pre-commit
python src/tools/analyzer.py -p data/parsed || exit 1
python src/tools/evaluate.py -p data/parsed || exit 1
```

### With a CI/CD Pipeline

```bash
# In your CI script
python src/tools/batch_parse.py -i new_skills/ -o data/parsed/ --validate
python -m src.vector_db.search index
python src/tools/evaluate.py -p data/parsed > coverage_report.txt
```
## Resources

### Documentation

- CLAUDE.md - Claude-specific best practices
- reference.md - Complete schema reference
- examples.md - Example decompositions

### Tools

- analyzer.py - Source code
- pattern_extractor.py - Source code
- search.py - Source code

### Support

- Issues: https://github.com/pingqLIN/skill-0/issues
- Discussions: https://github.com/pingqLIN/skill-0/discussions

Last updated: 2026-01-28
# README.md

## Skill-0: Skill Decomposition Parser

A ternary classification system for parsing the internal structure of Claude Skills and MCP Tools.

## Overview

Skill-0 is a classification system that parses AI/chatbot skills (especially Claude Skills and MCP Tools) into structured components. It includes semantic search powered by vector embeddings for intelligent skill discovery.
## Ternary Classification System

The system organizes the behavior-defining parts of a skill (the parts whose modification changes its behavior) into three categories:

| Category | Definition | Characteristics |
|---|---|---|
| Action | Atomic operation: an indivisible basic operation | Deterministic result, no conditional branching, atomic |
| Rule | Atomic judgment: pure conditional evaluation/classification | Returns a boolean/classification result |
| Directive | Descriptive statement: decomposable, but deliberately not decomposed at this level | Covers completion states, knowledge, principles, constraints, etc. |
### Directive Types

| Type | Description | Example |
|---|---|---|
| `completion` | Completion state description | "All tables extracted" |
| `knowledge` | Domain knowledge | "PDF format specification" |
| `principle` | Guiding principle | "Optimize Context Window" |
| `constraint` | Constraint condition | "Max 25,000 tokens" |
| `preference` | Preference setting | "User prefers JSON format" |
| `strategy` | Strategy guideline | "Retry three times on error" |
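
For illustration, the six directive types map naturally onto an enum; a minimal sketch that mirrors the table above (this is not code shipped by the project):

```python
from enum import Enum

class DirectiveType(Enum):
    COMPLETION = "completion"  # completion state description
    KNOWLEDGE = "knowledge"    # domain knowledge
    PRINCIPLE = "principle"    # guiding principle
    CONSTRAINT = "constraint"  # constraint condition
    PREFERENCE = "preference"  # preference setting
    STRATEGY = "strategy"      # strategy guideline

# Parse a directive_type string from a decomposition JSON
print(DirectiveType("completion").name)  # COMPLETION
```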
### Directive Provenance (Optional)

Skills and tools may come from diverse sources where the original intent cannot be fully verified. To preserve the original spirit, a directive can optionally include provenance at one of two tiers:

- `basic`: minimal traceability plus a verbatim excerpt
- `full`: adds a location plus extraction/translation metadata (a backend can encode from this)

#### Basic

```json
"provenance": {
  "level": "basic",
  "source": { "kind": "mcp_tool", "ref": "example-tool" },
  "original_text": "Prefer concise output"
}
```

#### Full

```json
"provenance": {
  "level": "full",
  "source": { "kind": "claude_skill", "ref": "converted-skills/docx/SKILL.md", "version": "v1" },
  "original_text": "Keep changes minimal",
  "location": { "locator": "SKILL.md#L120" },
  "extraction": { "method": "llm", "inferred": true, "confidence": 0.7 }
}
```
### ID Format

| Element | Pattern | Example |
|---|---|---|
| Action | `a_XXX` | `a_001`, `a_002` |
| Rule | `r_XXX` | `r_001`, `r_002` |
| Directive | `d_XXX` | `d_001`, `d_002` |
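
A minimal sketch of validating these ID patterns (the regex is an assumption derived from the table; the schema is authoritative):

```python
import re

ID_PATTERN = re.compile(r"^[ard]_\d{3}$")  # a_XXX, r_XXX, d_XXX

for element_id in ("a_001", "r_002", "d_120", "ca_001"):
    status = "ok" if ID_PATTERN.match(element_id) else "invalid"
    print(f"{element_id}: {status}")  # ca_001 is the pre-v2.0.0 form and fails
```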
## Project Structure

The tool layout is shown in the Tool Suite Overview in SKILL.md (src/tools/ and src/vector_db/); other directories referenced throughout this document include schema/, data/parsed/, db/, docs/, examples/, and tests/.
Installation
# Clone the repository
git clone https://github.com/pingqLIN/skill-0.git
cd skill-0
# Install dependencies
pip install -r requirements.txt
# Index skills (first time)
python -m src.vector_db.search --db db/skills.db --parsed-dir data/parsed index
## Testing

The project includes a comprehensive test suite for tool and code equivalence verification:

```bash
# Run all tests
python3 -m pytest tests/ -v

# Run specific test categories
python3 -m pytest tests/test_helper.py::TestSkillValidator -v
python3 -m pytest tests/test_helper.py::TestIntegrationWorkflows -v
```

Test coverage: 32 tests covering:

- Schema validation (tool equivalence)
- Format conversion (code equivalence)
- Execution path testing
- Template generation
- Error handling
- Integration workflows

See tests/README.md for detailed test documentation.
## Semantic Search

Skill-0 includes a semantic search engine powered by all-MiniLM-L6-v2 embeddings and SQLite-vec.
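
For intuition, this is the underlying idea as a standalone sketch using the sentence-transformers library directly (not the project's internal code; install with `pip install sentence-transformers`):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

skills = ["Canvas-Design Skill", "Pptx Skill", "Docx Skill"]
query = "creative design tools"

# Embed the query and candidate texts, then rank by cosine similarity
scores = util.cos_sim(model.encode(query), model.encode(skills))[0]
for name, score in sorted(zip(skills, scores.tolist()), key=lambda x: -x[1]):
    print(f"{name}: {score:.2%}")
```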
### CLI Commands

```bash
# Index all skills
python -m src.vector_db.search --db db/skills.db --parsed-dir data/parsed index

# Search by natural language
python -m src.vector_db.search --db db/skills.db search "PDF document processing"

# Find similar skills
python -m src.vector_db.search --db db/skills.db similar "Docx Skill"

# Cluster analysis (auto-grouping)
python -m src.vector_db.search --db db/skills.db cluster -n 5

# Show statistics
python -m src.vector_db.search --db db/skills.db stats
```
### Search Examples

```
$ python -m src.vector_db.search search "creative design visual art"

Searching for: creative design visual art
--------------------------------------------------
1. Canvas-Design Skill (53.36%)
2. Theme Factory (46.14%)
3. Anthropic Brand Styling (45.54%)
4. Slack GIF Creator (45.44%)
5. Pptx Skill (45.08%)

Search completed in 72.6ms
```
### Python API

```python
from src.vector_db import SemanticSearch

# Initialize search engine
search = SemanticSearch(db_path='db/skills.db')

# Semantic search
results = search.search("PDF processing", limit=5)
for r in results:
    print(f"{r['name']}: {r['similarity']:.2%}")

# Find similar skills
similar = search.find_similar("Docx Skill", limit=5)

# Cluster analysis
clusters = search.cluster_skills(n_clusters=5)
```
## Quick Example

```json
{
  "decomposition": {
    "actions": [
      {
        "id": "a_001",
        "name": "Read PDF",
        "action_type": "io_read",
        "deterministic": true
      }
    ],
    "rules": [
      {
        "id": "r_001",
        "name": "Check File Exists",
        "condition_type": "existence_check",
        "returns": "boolean"
      }
    ],
    "directives": [
      {
        "id": "d_001",
        "name": "PDF Processing Complete",
        "directive_type": "completion",
        "description": "All tables extracted and saved to Excel",
        "decomposable": true
      }
    ]
  }
}
```
## Statistics (32 Skills)

| Metric | Count |
|---|---|
| Skills | 32 |
| Actions | 266 |
| Rules | 84 |
| Directives | 120 |
| Action Type Coverage | 100% |
| Directive Type Coverage | 100% |
### Cluster Distribution

| Cluster | Skills | Description |
|---|---|---|
| 1 | 10 | Development Tools (MCP, Testing) |
| 2 | 5 | Document Processing (PDF, DOCX) |
| 3 | 7 | Creative Design (Canvas, Theme) |
| 4 | 2 | Data Analysis (Excel, Raffle) |
| 5 | 8 | Research Assistant (Leads, Resume) |
## Performance

| Metric | Value |
|---|---|
| Index Time | 0.88s (32 skills) |
| Search Latency | ~75ms |
| Embedding Dimension | 384 |
| Database | SQLite-vec |
## Documentation

Comprehensive documentation is available:

- CLAUDE.md - Best practices for Claude AI integration and skill decomposition
- SKILL.md - Complete tool portal and workflow guide
- reference.md - Schema reference and format specifications
- examples.md - 7 detailed skill examples across different domains
- AGENTS.md - Guidelines for AI agents working on this project
- scripts/helper.py - Helper utilities for validation, conversion, and testing
- vision-agent-alternatives.md - Guide to free and open-source vision-agent alternatives (in Chinese)
## Agent-Lightning Inspired Enhancements

Skill-0 now includes architectural patterns inspired by Microsoft's Agent-Lightning project:

- agent-lightning-comparison.md - Comprehensive technical comparison between the two projects
- agent-lightning-enhancements.md - Usage guide for the new distributed features
- Coordination Layer - Central hub for distributed task management (like LightningStore)
- Parser Abstraction - Unified interface for different parsing strategies (like the Algorithm abstraction)
- Worker Pool - Parallel execution of skill processing tasks (like Runners)
**Quick Example - Distributed Parsing:**

```python
import asyncio

from src.coordination import SkillStore, SkillWorker
from src.parsers import AdvancedSkillParser

async def main(skill_files):
    # Initialize coordination store
    store = SkillStore(db_path="db/coordination.db")

    # Enqueue tasks
    for skill_path in skill_files:
        await store.enqueue_parse_task(skill_path)

    # Create worker pool (4 parallel workers)
    parser = AdvancedSkillParser()
    workers = [SkillWorker(f"worker-{i}", store, parser) for i in range(4)]

    # Process in parallel - 4x speedup!
    await asyncio.gather(*(w.run() for w in workers))

asyncio.run(main(skill_files=[]))  # pass your skill file paths here
```

See examples/distributed_parsing.py for a complete working example.
## Quick Start Guide

```bash
# Generate a new skill template
python src/tools/helper.py template -o my-skill.json

# Convert markdown to skill JSON
python src/tools/helper.py convert skill.md my-skill.json

# Validate skill against schema
python src/tools/helper.py validate my-skill.json

# Test execution paths
python src/tools/helper.py test my-skill.json --analyze
```

See docs/helper-test-results.md for detailed test results and examples.
## Version

- Schema Version: 2.2.0
- Project Version: 2.5.0
- Created: 2026-01-23
- Updated: 2026-02-02
- Author: pingqLIN
## Changelog

### v2.5.0 (2026-02-02) - Agent-Lightning Inspired Enhancements

- New Feature: Distributed skill processing architecture inspired by Microsoft Agent-Lightning
  - Coordination Layer (`src/coordination/`): central SkillStore hub for task management
  - Parser Abstraction (`src/parsers/`): unified SkillParser interface for extensibility
  - Worker Pool: parallel execution with SkillWorker for a 4x speedup
- Technical Comparison: 17KB comprehensive analysis document comparing the Agent-Lightning and Skill-0 architectures
- Documentation: usage guide and working examples for distributed parsing
- Test Suite: 9 comprehensive tests validating all new components (100% passing)
- Performance: 4x faster parallel processing with 4 workers
- Scalability: foundation for distributed, horizontal scaling
### v2.4.0 (2026-01-30) - GitHub Skills Discovery & Resource Dependencies

- Schema Update: v2.1.0 → v2.2.0
  - Added `resource_dependency` definition type with 8 resource categories
  - Resources can be defined at the meta (global) and action levels
  - Support for database, API, filesystem, GPU, memory, credentials, network, and environment resources
  - Includes specification details, fallback strategies, and required flags
- GitHub Skills Search: discovered 75+ repositories aligning with skill-0 goals
  - Top 30 projects documented (MCP servers, Claude skills, AI frameworks)
  - MCP ecosystem: 4,509 repositories found
  - Top repository: awesome-mcp-servers (79,994 stars)
  - License analysis and compatibility verification
- New Documentation:
  - docs/github-skills-search-report.md - Comprehensive search report
  - docs/github-skills-search-results.json - Structured project data
  - examples/database-query-analyzer-with-resources.json - Resource example
  - tools/github_skill_search.py - GitHub search utility
### v2.3.0 (2026-01-28) - Testing & Quality Assurance

- New Feature: Comprehensive automated test suite
  - 32 tests covering all helper utilities
  - Tool equivalence verification (validator consistency)
  - Code equivalence verification (converter determinism)
  - Integration workflow testing
  - Error handling and edge case coverage
- Test fixtures and documentation in `tests/`
- pytest configuration in `pyproject.toml`
- CI/CD-ready test infrastructure
### v2.2.0 (2026-01-28) - Documentation & Tooling

- New Feature: Comprehensive documentation suite
  - CLAUDE.md - Claude best practices guide
  - SKILL.md - Complete tool portal and workflow
  - reference.md - Full schema reference
  - examples.md - 7 detailed skill examples
  - AGENTS.md - AI agent guidelines
- New Tool: `scripts/helper.py` - utility for validation, conversion, and testing
  - Template generation
  - Markdown to JSON conversion
  - Schema validation
  - Execution path testing
  - Complexity analysis
- Integration with the agents.md format standard
- Test results documentation in `docs/helper-test-results.md`
### v2.1.0 (2026-01-26) - Stage 2

- New Feature: Semantic search with vector embeddings
  - `vector_db` module with SQLite-vec integration
  - `all-MiniLM-L6-v2` embedding model (384 dimensions)
  - K-Means clustering for skill grouping
  - CLI tool: `python -m vector_db.search`
- Expanded to 32 skills (+21 from awesome-claude-skills)
- Performance: 0.88s indexing, ~75ms search
### v2.0.0 (2026-01-26)

- Breaking Change: Redefined the ternary classification
  - `core_action` → `action` (ID: `ca_XXX` → `a_XXX`)
  - `mission` → `directive` (ID: `m_XXX` → `d_XXX`)
- Added `directive_type` support: completion, knowledge, principle, constraint, preference, strategy
- Added `decomposable` and `decomposition_hint` fields
- Added `action_type`: `await_input`
- Schema structure optimization
- Added 19 new skills from ComposioHQ/awesome-claude-skills
### v1.1.0 (2026-01-23)

- Initial version
## Related Projects

### Hue-Sync: LG OLED TV Smart Lighting Sync Application

Note: The LG OLED TV Hue Sync project documentation has been moved to a separate repository for better organization.

Files Moved: 3 documentation files (~66.5KB) related to developing a Philips Hue Sync-like smart lighting application for LG OLED TVs.

Transfer Documentation: See docs/TRANSFER_TO_HUE_SYNC_REPO.md for:

- Complete file list and metadata
- Step-by-step transfer instructions (bilingual)
- Recommended repository structure
- Sample README content

Quick Reference: docs/QUICK_REFERENCE_HUE_SYNC_TRANSFER.md

Status: Documentation complete, ready for manual repository creation and file transfer.
## License

MIT
# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.