# rapyuta-robotics / writing-plans

# Install this skill

```shell
npx skills add rapyuta-robotics/agent-ai --skill "writing-plans"
```

Installs this specific skill from the multi-skill repository.

# Description

Use when you have a spec or requirements for a multi-step task, before touching code

# SKILL.md


---
name: writing-plans
description: Use when you have a spec or requirements for a multi-step task, before touching code
---


# Writing Plans

## Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, relevant docs to check, and how to test it. Break the whole plan into bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and who is not well versed in good test design.

Announce at start: "I'm using the writing-plans skill to create the implementation plan."

Context: This skill should be run in a dedicated worktree (created by the brainstorming skill).

Save plans to: docs/design/{feature-name}/plan.md

Read first: docs/design/{feature-name}/agent.spec.md - the agent-facing technical specification with file references and code patterns.
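The artifact layout described above, sketched in a throwaway directory ("user-auth" is a purely illustrative feature name, not part of the skill):

```shell
tmp=$(mktemp -d); cd "$tmp"
# {feature-name} shown as "user-auth" for illustration only
mkdir -p docs/design/user-auth
touch docs/design/user-auth/agent.spec.md   # read first: the agent-facing spec
touch docs/design/user-auth/plan.md         # the output of this skill
ls docs/design/user-auth/
```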

## Before Writing the Plan

  1. Identify affected locations - Use search tools to find ALL code that will need changes
  2. Find relevant tests - Locate existing test files for affected areas
  3. Check existing test coverage - Run tests with coverage on affected areas to understand baseline
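The three discovery steps above might look like this in practice. This is a sketch in a sandbox: `process_order`, the `src/`/`tests/` layout, and the pytest-cov invocation are all illustrative assumptions, not part of the skill:

```shell
set -e
# Build a tiny sandbox so the discovery commands below can actually run.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p src tests
printf 'def process_order(order):\n    return order\n' > src/orders.py
printf 'def test_process_order():\n    pass\n' > tests/test_orders.py

# 1. Identify affected locations: every reference to the symbol being changed
grep -rn "process_order" src/ tests/
# 2. Find relevant tests for the affected area
find tests/ -name 'test_*order*.py'
# 3. Check baseline coverage (requires pytest-cov; commented out so the sketch runs standalone)
# pytest tests/ --cov=src --cov-report=term-missing
```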

## Bite-Sized Task Granularity

Each step is one action (2-5 minutes):
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
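The five steps above, run end-to-end on a toy example. `add()` and the file names are made up for illustration, and plain `python3` stands in for pytest so the sketch has no extra dependencies:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
# Step 1: write the failing test
cat > test_add.py <<'EOF'
from add import add
assert add(2, 3) == 5
print("PASS")
EOF
# Step 2: run it to make sure it fails (the module does not exist yet)
python3 test_add.py 2>/dev/null && echo "unexpected pass" || echo "FAIL (as expected)"
# Step 3: implement the minimal code to make the test pass
printf 'def add(a, b):\n    return a + b\n' > add.py
# Step 4: run the test again and make sure it passes (prints PASS)
python3 test_add.py
# Step 5: commit
git add -A && git commit -qm "feat: add add()"
```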

## Plan Document Header

Every plan MUST start with this header:

```markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
```

## Task Structure

````markdown
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
````

## Remember

- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
- Mark tasks as PARALLEL or SERIAL - identify which tasks have no dependencies and can run concurrently
- Only suggest subagents for truly parallel work - if tasks must be done sequentially, don't dispatch subagents

โš ๏ธ MANDATORY: Review Checkpoint

After writing the plan, you MUST prompt the user:

```
Implementation plan is ready for review.

Please review docs/design/{feature-name}/plan.md and either:
1. Accept - Reply "approved" or "lgtm" to proceed
2. Edit - Modify the file directly, then reply "updated" so I can re-evaluate

I will not proceed until you explicitly accept.
```

If the user says "updated" or indicates they edited the plan:
1. Re-read the modified plan
2. Summarize what changed
3. Ask for confirmation again

If the user says "approved" / "lgtm" / otherwise accepts:
1. Proceed to Execution Handoff (the user commits when ready)

## Execution Handoff

After plan is approved, offer execution choice:

```
Plan approved and committed.

Task Dependencies:
- PARALLEL tasks (can be done concurrently): [list tasks]
- SERIAL tasks (must be done in order): [list tasks]

Execution options:
1. Sequential execution (recommended for serial tasks) - Execute tasks one by one in this session
2. Subagent-Driven (only if parallel tasks exist) - Dispatch subagents for independent tasks

Which approach?
```

If Subagent-Driven chosen:
- Only use if tasks are truly independent - no shared state, no sequential dependencies
- REQUIRED SUB-SKILL: Use the subagent-driven-development skill
- Stay in this session
- Fresh subagent per task + code review

If Sequential chosen:
- Execute tasks in order directly
- No subagent overhead for dependent work

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.