# Install this skill

Install this specific skill from the multi-skill repository:

```shell
npx skills add mjunaidca/mjs-agent-skills --skill "fetching-library-docs"
```

# SKILL.md


```yaml
---
name: fetching-library-docs
description: |
  Token-efficient library API documentation fetcher using Context7 MCP with 77% token savings.
  Fetches code examples, API references, and usage patterns for published libraries (React,
  Next.js, Prisma, etc.). Use when users ask "how do I use X library", need code examples,
  want API syntax, or are learning a framework's official API. Triggers: "Show me React hooks",
  "Prisma query syntax", "Next.js routing API". NOT for exploring repo internals/source code
  (use researching-with-deepwiki) or local files.
---
```

# Context7 Efficient Documentation Fetcher

Fetch library documentation with an automatic ~77% token reduction via a shell pipeline.

## Quick Start

Always use the token-efficient shell pipeline:

```shell
# Automatic library resolution + filtering
bash scripts/fetch-docs.sh --library <library-name> --topic <topic>

# Examples:
bash scripts/fetch-docs.sh --library react --topic useState
bash scripts/fetch-docs.sh --library nextjs --topic routing
bash scripts/fetch-docs.sh --library prisma --topic queries
```

Result: returns ~205 tokens instead of ~934 tokens (a 77% saving).

## Standard Workflow

For any documentation request, follow this workflow:

### 1. Identify the Library and Topic

Extract from the user query:
- Library: React, Next.js, Prisma, Express, etc.
- Topic: the specific feature (hooks, routing, queries, etc.)

### 2. Fetch with the Shell Pipeline

```shell
bash scripts/fetch-docs.sh --library <library> --topic <topic> --verbose
```

The `--verbose` flag shows token-savings statistics.

### 3. Use the Filtered Output

The script automatically:
- Fetches the full documentation (~934 tokens, which stays in the subprocess)
- Filters it down to code examples, API signatures, and key notes
- Returns only the essential content (~205 tokens to Claude)
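The filtering stage can be pictured as a small awk pass that keeps fenced code blocks and drops the surrounding prose. This is an illustrative stand-in, not the actual `extract-code-blocks.sh` shipped with the skill:

```shell
# Illustrative sketch of the code-block filter (the real extract-code-blocks.sh
# may differ). Keeps fenced code blocks from a markdown doc dump, drops prose.
extract_code_blocks() {
  awk '
    /^```/ { in_block = !in_block; print; next }  # fence line: toggle state
    in_block { print }                            # inside a block: keep
  '
}

# Example: prose lines are dropped, the fenced example survives
printf 'Intro prose.\n```js\nconst [n, setN] = useState(0);\n```\nMore prose.\n' \
  | extract_code_blocks
```

Because the raw response is piped through this filter inside the subprocess, the prose never reaches the model's context.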

## Parameters

### Basic Usage

```shell
bash scripts/fetch-docs.sh [OPTIONS]
```

Required (pick one):
- `--library <name>` - library name (e.g., "react", "nextjs")
- `--library-id <id>` - direct Context7 ID (faster; skips resolution)

Optional:
- `--topic <topic>` - specific feature to focus on
- `--mode <code|info>` - `code` for examples (default), `info` for concepts
- `--page <1-10>` - pagination for more results
- `--verbose` - show token-savings statistics
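For orientation, flag sets like this are usually handled with a `while`/`case` loop. A minimal sketch, assuming nothing about how the real `fetch-docs.sh` actually parses its arguments:

```shell
# Minimal sketch of parsing the flags above (illustrative; the actual
# fetch-docs.sh implementation may differ).
parse_args() {
  LIBRARY="" LIBRARY_ID="" TOPIC="" MODE="code" PAGE=1 VERBOSE=0
  while [ $# -gt 0 ]; do
    case $1 in
      --library)    LIBRARY=$2;    shift 2 ;;
      --library-id) LIBRARY_ID=$2; shift 2 ;;
      --topic)      TOPIC=$2;      shift 2 ;;
      --mode)       MODE=$2;       shift 2 ;;
      --page)       PAGE=$2;       shift 2 ;;
      --verbose)    VERBOSE=1;     shift ;;
      *) echo "unknown option: $1" >&2; return 1 ;;
    esac
  done
  # Enforce "pick one" of --library / --library-id
  [ -n "$LIBRARY" ] || [ -n "$LIBRARY_ID" ] || { echo "need --library or --library-id" >&2; return 1; }
}
```

Note how the defaults (`MODE=code`, `PAGE=1`) match the documented defaults above.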

## Mode Selection

Code mode (default) returns code examples and API signatures:

```shell
--mode code
```

Info mode returns conceptual explanations with fewer examples:

```shell
--mode info
```

## Common Library IDs

Use `--library-id` for a faster lookup (skips resolution):

```text
React:      /reactjs/react.dev
Next.js:    /vercel/next.js
Express:    /expressjs/express
Prisma:     /prisma/docs
MongoDB:    /mongodb/docs
Fastify:    /fastify/fastify
NestJS:     /nestjs/docs
Vue.js:     /vuejs/docs
Svelte:     /sveltejs/site
```
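If you script on top of this skill, the table above can become a lookup, e.g. a bash associative array. This is a hypothetical helper, not part of the skill, and it requires bash 4+:

```shell
# Hypothetical helper mapping common library names to Context7 IDs
# (mirrors the table above; requires bash 4+ for associative arrays).
declare -A LIBRARY_IDS=(
  [react]="/reactjs/react.dev"
  [nextjs]="/vercel/next.js"
  [express]="/expressjs/express"
  [prisma]="/prisma/docs"
  [mongodb]="/mongodb/docs"
)

resolve_id() {
  # Prints the known ID, or nothing so the caller can fall back to --library
  echo "${LIBRARY_IDS[$1]:-}"
}
```

`resolve_id react` prints `/reactjs/react.dev`; an unknown name prints nothing, so a wrapper can fall back to name-based resolution with `--library`.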

## Workflow Patterns

### Pattern 1: Quick Code Examples

User asks: "Show me React useState examples"

```shell
bash scripts/fetch-docs.sh --library react --topic useState --verbose
```

Returns: 5 code examples + API signatures + notes (~205 tokens)

### Pattern 2: Learning a New Library

User asks: "How do I get started with Prisma?"

```shell
# Step 1: Get an overview
bash scripts/fetch-docs.sh --library prisma --topic "getting started" --mode info

# Step 2: Get code examples
bash scripts/fetch-docs.sh --library prisma --topic queries --mode code
```

### Pattern 3: Specific Feature Lookup

User asks: "How does Next.js routing work?"

```shell
bash scripts/fetch-docs.sh --library-id /vercel/next.js --topic routing
```

Using `--library-id` is faster when you know the exact ID.

### Pattern 4: Deep Exploration

User needs comprehensive information:

```shell
# Page 1: Basic examples
bash scripts/fetch-docs.sh --library react --topic hooks --page 1

# Page 2: Advanced patterns
bash scripts/fetch-docs.sh --library react --topic hooks --page 2
```
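A wrapper can walk pages until one comes back empty. This sketch takes the fetch command as a parameter so the loop itself is testable; it assumes empty output means no more results, which you should verify against the real script:

```shell
# Sketch: collect successive pages until one is empty. Assumes an empty
# page signals the end of results (verify against the real script).
fetch_all_pages() {
  # $1: a command taking a page number and printing that page's results
  local fetch_cmd=$1 page out
  for page in 1 2 3; do
    out=$("$fetch_cmd" "$page") || break
    [ -z "$out" ] && break
    printf '%s\n' "$out"
  done
}

# Usage with the real script (untested):
# fetch_page() { bash scripts/fetch-docs.sh --library react --topic hooks --page "$1"; }
# fetch_all_pages fetch_page
```

The hard cap of three pages keeps a misbehaving endpoint from flooding the context; raise it only when you genuinely need deeper results.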

## Token Efficiency

How it works:

1. `fetch-docs.sh` calls `fetch-raw.sh` (which uses `mcp-client.py`)
2. The full response (~934 tokens) stays in subprocess memory
3. Shell filters (awk/grep/sed) extract the essentials, consuming no LLM tokens
4. The filtered output (~205 tokens) is returned to Claude

Savings:
- Direct MCP: ~934 tokens per query
- This approach: ~205 tokens per query
- A 77% reduction

Do NOT use `mcp-client.py` directly - it bypasses the filtering and wastes tokens.
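The quoted numbers can be spot-checked with the rough heuristic of ~4 characters per token (real tokenizer counts vary). These helpers are illustrative only, not part of the skill:

```shell
# Rough spot-check of the savings claim (~4 chars per token is a common
# heuristic; real tokenizer counts vary). Illustrative helpers only.
estimate_tokens() {
  # $1: a file; prints an approximate token count
  echo $(( $(wc -c < "$1") / 4 ))
}

savings_pct() {
  # $1: raw token count, $2: filtered token count; prints integer % saved
  echo $(( ($1 - $2) * 100 / $1 ))
}

savings_pct 934 205   # integer percentage saved for the quoted counts
```

Run `estimate_tokens` on a saved raw response and on the filtered output to see the reduction on your own queries.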

## Advanced: Library Resolution

If the library name fails to resolve, try variations:

```shell
# Try different formats
--library "next.js"    # with dot
--library "nextjs"     # without dot
--library "next"       # short form

# Or search manually
bash scripts/fetch-docs.sh --library "your-library" --verbose
# Check the output for suggested library IDs
```
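The try-each-variation advice can be automated. This sketch takes the resolver command as a parameter so the loop itself is testable; the commented usage with the real script is an untested assumption:

```shell
# Sketch: try candidate names in order until one resolves. Assumes the
# resolver prints nothing (or fails) when a name does not resolve.
try_variations() {
  # $1: resolver command; remaining args: candidate names
  local resolve=$1 name out
  shift
  for name in "$@"; do
    if out=$("$resolve" "$name" 2>/dev/null) && [ -n "$out" ]; then
      printf '%s\n' "$out"
      return 0
    fi
  done
  return 1
}

# Usage with the real script (untested):
# resolve() { bash scripts/fetch-docs.sh --library "$1" --topic overview; }
# try_variations resolve "next.js" "nextjs" "next"
```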

## Verification

Run:

```shell
python3 scripts/verify.py
```

Expected: `✓ fetch-docs.sh ready`

### If Verification Fails

1. Run the diagnostic: `ls -la scripts/fetch-docs.sh`
2. Check that the script exists and is executable
3. Fix: `chmod +x scripts/fetch-docs.sh`
4. If it still fails, stop and report - do not proceed with downstream steps
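The four recovery steps above can be rolled into one guard function. This is a hypothetical sketch, not one of the skill's scripts:

```shell
# Hypothetical guard combining the recovery steps above: verify the script
# exists, make it executable if needed, and fail loudly otherwise.
check_skill() {
  local script=$1
  if [ ! -f "$script" ]; then
    echo "missing: $script - stop and report" >&2
    return 1
  fi
  if [ ! -x "$script" ]; then
    chmod +x "$script" || { echo "cannot make executable: $script" >&2; return 1; }
  fi
  echo "ready: $script"
}

# check_skill scripts/fetch-docs.sh
```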

## Troubleshooting

| Issue | Solution |
|---|---|
| Library not found | Try name variations or a broader search term |
| No results | Use `--mode info` or a broader topic |
| Need more examples | Increase the page: `--page 2` |
| Want full context | Use `--mode info` for explanations |
| Permission denied | Run `chmod +x scripts/*.sh` |

## References

For detailed Context7 MCP tool documentation, see:
- `references/context7-tools.md` - complete tool reference

## Implementation Notes

Components (for reference only; always use `fetch-docs.sh`):
- `mcp-client.py` - universal MCP client (foundation)
- `fetch-raw.sh` - MCP wrapper
- `extract-code-blocks.sh` - code-example filter (awk)
- `extract-signatures.sh` - API-signature filter (awk)
- `extract-notes.sh` - important-notes filter (grep)
- `fetch-docs.sh` - main orchestrator (ALWAYS USE THIS)

Architecture: the shell pipeline processes documentation in a subprocess, keeping the full response out of Claude's context. Only the filtered essentials enter the LLM context, achieving the 77% token saving with full functionality preserved.

Based on Anthropic's "Code Execution with MCP" blog post.

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.