# timbenniks/sdk-readiness-audit

# Install this skill:
npx skills add timbenniks/timbenniks-agent-skills --skill "sdk-readiness-audit"

Installs this specific skill from the multi-skill repository.

# Description

Audit an API surface (OpenAPI 3.0/3.1, GraphQL schema, or REST docs) for SDK readiness and developer experience. Use when asked to evaluate whether an API is SDK friendly, produce a readiness scorecard, list concrete refactors, describe "if we shipped an SDK today" pain points, or suggest OpenAPI fixes and x-* extensions to improve client generation.

# SKILL.md


name: sdk-readiness-audit
description: Audit an API surface (OpenAPI 3.0/3.1, GraphQL schema, or REST docs) for SDK readiness and developer experience. Use when asked to evaluate whether an API is SDK friendly, produce a readiness scorecard, list concrete refactors, describe "if we shipped an SDK today" pain points, or suggest OpenAPI fixes and x-* extensions to improve client generation.


SDK readiness audit

Audit whether an API is actually SDK friendly. Do not generate an SDK; diagnose gaps and provide concrete, actionable fixes.

Scope

  • Accept OpenAPI (URL or local file), GraphQL schema (SDL or introspection), or REST docs (links or markdown)
  • Produce a readiness scorecard, prioritized refactors, SDK pain points, and spec fix suggestions
  • Avoid guessing; mark unknowns and request missing inputs

Mandatory intake questions (ask in one concise block)

  1. Source of truth
     • OpenAPI URL or local path, GraphQL SDL/introspection, or REST docs link/markdown
  2. Target SDK consumers
     • Primary languages or platforms (if any)
     • Primary use cases or top workflows
  3. Auth and environments
     • Auth methods, token types, and environments (prod/sandbox)
  4. Known pain points
     • Any current client friction or support issues

If the user already provided answers, restate and confirm.

Workflow

  1. Load inputs
     • For OpenAPI, parse: servers, security, tags, paths, components, schemas
     • For GraphQL, parse: types, inputs, enums, connections, directives, deprecations
     • For REST docs, build an endpoint inventory table before scoring
  2. Build a surface inventory
     • Endpoints/operations and their purpose
     • Request and response shapes
     • Auth, pagination, errors, versioning, rate limits
  3. Evaluate with the rubric
     • Score each category 0 to 5
     • Cite concrete evidence (endpoint names, schema fields, headers)
  4. Produce outputs
     • Scorecard
     • Refactors with priority
     • "If we shipped an SDK today" pain points
     • Suggested OpenAPI fixes and x-* extensions
  5. Write the audit file
     • Save the full output to sdk-readiness-audit.md
  6. Call out unknowns
     • List missing or ambiguous areas that block full confidence

Scoring rubric (0 to 5)

Score each category. Use "unknown" if evidence is missing.

0 = missing or harmful
1 = inconsistent or ad hoc
3 = workable but rough for SDKs
5 = strong and SDK friendly

Categories (weighted):

  • Auth and environments (weight 2)
  • Errors and error model (weight 2)
  • Pagination and collection design (weight 2)
  • Naming and resource model
  • Consistency and conventions
  • Data model quality (types, required/optional, enums, nullability)
  • Filtering, sorting, and search
  • Versioning and stability
  • Idempotency and safety semantics
  • Long running operations and async jobs
  • Rate limits and retries
  • Documentation and examples
  • SDK metadata readiness (operationId, tags, schema names)

Overall score (0 to 100):

  • Weighted average of the category scores * 20
  • If any critical category (auth, errors, pagination) is <= 2, cap overall at 59 and label "not ready"
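
As a worked illustration of the formula above, the sketch below uses made-up scores and assumes that categories without an explicit weight count as weight 1 (the category keys are shortened labels, not fixed identifiers):

```yaml
# Hypothetical worked example — scores are illustrative, not from a real audit.
# Assumption: categories without an explicit weight count as weight 1.
scores:
  auth_and_environments:            { score: 4, weight: 2 }
  errors_and_error_model:           { score: 3, weight: 2 }
  pagination_and_collection_design: { score: 2, weight: 2 }
  naming_and_resource_model:        { score: 4, weight: 1 }
  consistency_and_conventions:      { score: 3, weight: 1 }
  data_model_quality:               { score: 3, weight: 1 }
  filtering_sorting_search:         { score: 3, weight: 1 }
  versioning_and_stability:         { score: 3, weight: 1 }
  idempotency_and_safety:           { score: 2, weight: 1 }
  long_running_operations:          { score: 3, weight: 1 }
  rate_limits_and_retries:          { score: 3, weight: 1 }
  documentation_and_examples:       { score: 3, weight: 1 }
  sdk_metadata_readiness:           { score: 3, weight: 1 }
weighted_average: 3.0  # sum(score * weight) / sum(weight) = 48 / 16
raw_overall: 60        # 3.0 * 20
overall: 59            # pagination scored <= 2, so the result is capped at 59
verdict: not ready     # any critical category <= 2 forces "not ready"
```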

Output format (required)

Write the full output to sdk-readiness-audit.md. In chat, provide a brief summary and point to the file.

  1. Readiness verdict
     • Ready / Borderline / Not ready
     • Overall score
  2. SDK readiness scorecard
     • Table with category, score, evidence, and brief notes
  3. Concrete refactors needed
     • Prioritized list with P0/P1/P2
     • Each item includes: current issue, why it hurts SDKs, proposed fix
  4. If we shipped an SDK today, here is what would hurt
     • Short bullet list focused on developer friction
  5. Suggested OpenAPI fixes and x-* extensions
     • Provide specific fixes and optional vendor extensions
     • Use small YAML snippets when helpful
  6. Unknowns and requested follow ups
     • Only if needed

OpenAPI fixes and x-* extensions (guidance)

Suggest fixes that improve client generation and developer experience. Examples:

  • Normalize operationId or provide x-sdk-method-name
  • Group operations with tags or x-sdk-group
  • Define consistent error schema (Problem Details or equivalent)
  • Standardize pagination and document in x-pagination
  • Mark idempotent operations with x-idempotency
  • Mark retryable errors with x-retryable
  • Add examples and x-examples per operation
  • Clarify rate limit headers with x-rate-limit

Keep extensions minimal and consistent. Do not invent semantics that conflict with the spec.
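
For example, a handful of these extensions applied to a single operation might look like the sketch below; the extension names come from the list above, but the path, operation, and value shapes are illustrative assumptions rather than a fixed contract:

```yaml
# Illustrative sketch only — /projects, listProjects, and the value shapes
# of the x-* extensions are assumptions, not part of any real spec.
paths:
  /projects:
    get:
      operationId: listProjects          # normalized verb + resource
      tags: [Projects]
      x-sdk-method-name: projects.list   # explicit SDK method mapping
      x-sdk-group: projects
      x-pagination:
        style: cursor
        parameters: [cursor, limit]
        responseField: next_cursor
      x-rate-limit:
        headers: [X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset]
      responses:
        "200":
          description: A page of projects
        "429":
          description: Rate limited
          x-retryable: true
```

If extensions like these are proposed, document in the audit how SDK generators or templates are expected to consume them, so the semantics stay consistent across operations.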

GraphQL specific checks

  • Prefer consistent connection-based pagination for lists
  • Avoid unbounded lists without pagination args
  • Use input objects for mutations
  • Prefer enums over freeform strings
  • Provide clear deprecations
  • Document nullability and error behavior

REST docs specific checks

  • Build an explicit endpoint inventory first
  • Identify missing details (auth, error schema, pagination, versioning)
  • Propose a minimal OpenAPI skeleton to close gaps
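
A minimal OpenAPI skeleton of that kind might look like the sketch below, assuming a bearer-token API; the server URL, paths, and schema names are placeholders to be replaced with details from the docs:

```yaml
# Minimal OpenAPI 3.1 skeleton — the server URL, /widgets path, and Error
# schema are placeholders, not details taken from any real API.
openapi: 3.1.0
info:
  title: Example API (reconstructed from REST docs)
  version: 0.1.0
servers:
  - url: https://api.example.com/v1
security:
  - bearerAuth: []
paths:
  /widgets:
    get:
      operationId: listWidgets
      summary: List widgets
      responses:
        "200":
          description: A page of widgets
        default:
          description: Error response
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
  schemas:
    Error:
      type: object
      properties:
        code: { type: string }
        message: { type: string }
```

Even a skeleton this small makes the remaining gaps (pagination parameters, error fields, environment URLs) explicit items to confirm with the API owners.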

Acceptance criteria

Output is correct only if:

  • All intake questions were asked or confirmed
  • Evidence is cited for each score
  • Refactors are concrete and actionable
  • Pain points are clearly stated
  • OpenAPI fixes or x-* extensions are suggested where relevant
  • Unknowns are explicitly listed when information is missing
  • sdk-readiness-audit.md was written with the full audit

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.