# Install this skill:
npx skills add g6000/ultrafast-mvp-skills --skill "mvp"

Installs a specific skill from a multi-skill repository.

# Description

Ultra-fast MVP delivery playbook for coding agents (Next.js + Vercel + Supabase; optional Vercel Workflow jobs)

# SKILL.md


name: mvp
description: Ultra-fast MVP delivery playbook for coding agents (Next.js + Vercel + Supabase; optional Vercel Workflow jobs)


mvp

Instructions for the agent to follow when this skill is activated.

When to use

Use this skill when you want to ship an MVP as fast as possible with minimal operational burden, as an individual developer or a small startup. The typical stack is Next.js + Vercel + Supabase (Postgres/Auth), with an optional Vercel Workflow (WDK) + jobs table pattern when long-running jobs exist (or are likely to exist).

Instructions

skills/mvp/SKILL.md — Ultra-fast MVP Implementation Playbook (for Coding Agents)

This document summarizes “broadly agreeable” principles and an execution procedure for coding agents, aimed at individual developers and small startups that want to minimize ops, ship an MVP quickly, and run a learning loop.

Assumptions:
- Humans primarily do success-criteria definition, review, and change requests.
- Coding Agents produce deliverables by looping: questions → success criteria → tests first → implementation.
- A “plain” Postgres database is sufficient (assume Supabase).
- Deployment assumes Vercel.


1) Decide upfront which path to choose (Decision: A / B)

A. No “long-running jobs”

Choose A if all of the following are true:
- Almost all user actions complete within a request/response.
- No heavy file processing or long external API chains.
- Durability requirements (retry/resume) are not mandatory.

Fastest stack (A)
- Next.js + Vercel
- Supabase (Postgres + Auth)
- PR-review workflow assuming Preview Deployments
- Use Vercel Logs / Observability as the minimal operations foundation

B. Long-running jobs exist (or are likely soon)

Choose B if any of the following are true:
- Multi-step calls to external APIs (OpenAI, etc.) occur, where failures, rate limits, and retries are realistic concerns.
- File processing or integrations can take tens of seconds to minutes.
- You need “progress,” “resume,” “retry,” or “duplicate execution prevention.”
- You do not want to trap processing inside a single HTTP request.

Fastest stack (B) = A + additions
- Use Vercel Workflow (WDK) to separate execution from the request path
- Add a jobs table in Postgres for state management (idempotency / re-run / failure reason / progress)

Rule: If you are unsure, choose B.


2) Core principles for ultra-fast deployment (broadly agreeable)

  1. Don’t build v1; ship v0: prioritize “the smallest shape that lets you learn from users” over completeness.
  2. Maximize learning rate: the most important loop under uncertainty is ship → measure → fix.
  3. Build only what you need now (no optionality): abstractions, extensibility, and generalization “for someday” often become debt in an MVP.
  4. Ops isn’t about eliminating ops; it’s about making it minimally sufficient: avoid becoming “impossible to debug” (ensure minimal logging/observability from day one).
  5. Fix a short iteration cycle: for example, clarify requirements → tests → implementation → preview → review → merge.
  6. Tests are not the enemy of speed; they are the foundation of sustained speed: test only the critical path, prevent regressions, and keep shipping fast.

3) 2026: Agent-first implementation flow (Declarative development + TDD)

3.1 Roles

  • Human (you): define goals/constraints/success criteria and decide direction via review.
  • Coding Agent: asks questions to eliminate ambiguity, writes tests, and implements until passing.

3.2 How to communicate goals (most important)

Do not give “how.” Give completion criteria (“what must be true”).

Bad:
- “Implement it this way using this library.”

Good:
- “When the user performs this action, this DB row is created, it appears in this list, and users without permission cannot see it.”

3.3 Mandatory loop (agents must always operate in this order)

1) Questions: confirm unknowns with the minimum number of questions (if answers are missing, propose a default and ask for confirmation).
2) Success criteria: enumerate in verifiable form (Acceptance Criteria / Definition of Done).
3) Tests first: write tests that fail first (minimal set for the critical path).
4) Implementation: the smallest implementation that makes tests pass (no extra features).
5) Verification: run tests / typecheck / lint and provide evidence.
6) Real behavior check in Preview: confirm via user flow.
7) PR review → merge.


4) Questions the agent should ask first (highest leverage order)

Rule: Fewer questions, deeper. If the answer is ambiguous, present options and a recommended default, then confirm.

4.1 Product / UX

  1. Who is the target user? (persona)
  2. What is the v0 happy path? (in 3–7 steps)
  3. What are the top 1–3 most important actions?
  4. How can “success” be observed? (created/sent/generated, etc.)
  5. How should failures be shown? (user message, retry path)

4.2 Data model

  1. What entities (tables) are needed? (e.g., Project/Document/Message/Job)
  2. What fields are required vs optional?
  3. What uniqueness constraints exist? (e.g., slug, (user_id, name))
  4. Do you need auditing? (created_at/updated_at, author, history)

4.3 Authentication & authorization (Supabase)

  1. Is authentication required for v0? (no / yes)
  2. If yes, which method? (email+password / magic link / OAuth)
  3. Is “default: user can access only their own data” acceptable? (usually yes)

4.4 Long-running jobs (for B)

  1. What starts a job? (UI action / API / Cron)
  2. What is the job output? (DB row / file / UI output)
  3. How should progress be represented? (0–100 / step name / log)
  4. Which failures are retryable? (temporary external API failures, etc.)
  5. How should duplicate execution be handled? (dedupe same input into same job, or allow separate jobs)
  6. Is cancel required? (no / yes)

4.5 Non-functional

  1. Expected scale? (private beta / public)
  2. Cost ceiling? (LLM spend, DB, bandwidth)
  3. Compliance requirements? (if none, say “none”)

5) Success criteria (Acceptance Criteria) template

Before implementation, the agent must always write these and request Human review.

5.1 Example success criteria

  • When the user submits Form X, exactly one row is created in Table Y and it appears in List Z.
  • Unauthenticated users cannot access /app and are redirected to /login.
  • Even with duplicate submissions, side effects (billing/generation/etc.) do not happen twice (idempotent).

5.2 Definition of Done (DoD)

  • [ ] The main user flow is protected by automated tests
  • [ ] tests / typecheck / lint all pass
  • [ ] The user flow is verified in a Preview Deployment
  • [ ] Failure logging is sufficient (root cause can be investigated)

6) Test strategy (MVP: minimal tests to preserve speed)

Principle: Do not test everything. Protect only the parts that would be fatal if they break.

Recommended minimal set:
1. Integration tests (API/DB/auth)
- Example: create → fetch → authorization, duplicate submission (idempotency)
2. E2E tests (1–3 total)
- Example: login → main action → show result (shortest path)
3. (For B) Job state transition tests
- queued → running → succeeded/failed
- retries must not increase side effects

Criteria for writing tests:
- User flows directly tied to user value
- Expensive (LLM/billing) or irreversible actions (delete/publish)
- Authorization/security boundaries
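
A minimal integration-test sketch in this spirit, assuming Vitest and a hypothetical /api/notes endpoint; the route, payload shape, and TEST_USER_TOKEN handling are placeholders to adapt to what the MVP actually exposes:

```ts
// tests/notes.int.test.ts — integration-test sketch (Vitest assumed).
// /api/notes, the payload shape, and TEST_USER_TOKEN are placeholders for this example.
import { describe, it, expect } from "vitest";

const BASE_URL = process.env.TEST_BASE_URL ?? "http://localhost:3000";
const AUTH = { Authorization: `Bearer ${process.env.TEST_USER_TOKEN}` };

describe("notes critical path", () => {
  it("creates a note and reads it back", async () => {
    const created = await fetch(`${BASE_URL}/api/notes`, {
      method: "POST",
      headers: { "Content-Type": "application/json", ...AUTH },
      body: JSON.stringify({ title: "hello" }),
    });
    expect(created.status).toBe(201);
    const { id } = await created.json();

    const list = await fetch(`${BASE_URL}/api/notes`, { headers: AUTH });
    const notes: Array<{ id: string }> = await list.json();
    expect(notes.some((n) => n.id === id)).toBe(true);
  });

  it("rejects unauthenticated access", async () => {
    const res = await fetch(`${BASE_URL}/api/notes`, { method: "POST" });
    expect(res.status).toBe(401);
  });
});
```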


7) Agent implementation rules (anti-LLM bloat)

  1. Minimum implementation: do not add features that were not requested.
  2. Do not increase external dependencies: add new libraries only if they clearly reduce cost/complexity.
  3. Surgical changes: no unrelated refactors (you may propose them, but do not perform them).
  4. Make assumptions explicit: if uncertain, state assumptions and ask for confirmation.
  5. Verification defines completion: provide evidence (test output/logs/steps).

8) Stack A (no long-running jobs) implementation guide

8.1 Routing / implementation basics

  • UI (App Router) → Route Handler / Server Action → supabase-js (server-side)
  • Secrets (Service Role, etc.) must be server-side only
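
A minimal sketch of that shape, assuming supabase-js on the server and an illustrative notes table; when the Service Role key is used, RLS is bypassed, so the handler must enforce authentication/authorization itself:

```ts
// app/api/notes/route.ts — UI → Route Handler → supabase-js (server-side) sketch.
// SUPABASE_SERVICE_ROLE_KEY must only exist in server-side env vars; it bypasses RLS,
// so authenticate/authorize the request here before touching the database.
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);

export async function POST(req: Request) {
  const { title } = await req.json();

  const { data, error } = await supabase
    .from("notes")            // illustrative table name
    .insert({ title })
    .select()
    .single();

  if (error) {
    console.error({ route: "POST /api/notes", error });
    return NextResponse.json({ message: "Could not create note" }, { status: 500 });
  }
  return NextResponse.json(data, { status: 201 });
}
```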

8.2 Preview Deployments + PR workflow

  • Make all changes via PR, verify in Preview, then merge
  • PR description must include: “Success criteria,” “DoD,” and “How to verify”

8.3 Minimal ops (observability)

  • Attach a requestId to failure logs
  • Return safe messages to users; put details into logs
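
A sketch of that pattern, assuming a Node or Edge runtime where crypto.randomUUID() is available:

```ts
// Failure-logging sketch: correlate user-facing errors with server logs via a requestId.
export async function POST(req: Request) {
  const requestId = crypto.randomUUID();
  try {
    // ... the actual work ...
    return Response.json({ ok: true });
  } catch (err) {
    // Full detail goes to the logs (visible in Vercel Logs / Observability).
    console.error({ requestId, route: "POST /api/example", err });
    // The user only sees a safe message plus the correlation id.
    return Response.json({ message: "Something went wrong", requestId }, { status: 500 });
  }
}
```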

9) Stack B (long-running jobs) implementation guide (WDK + jobs table)

9.1 Baseline design (required)

  • User actions must create a job and return immediately (return jobId)
  • The Workflow receives jobId and processes step-by-step
  • Save progress/results/failure reason in jobs; UI shows it
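
A sketch of the request-path side under these rules; startSummarizeWorkflow is a stub standing in for however the Vercel Workflow (or any background runner) is actually triggered, and the jobs columns follow the table sketched in 9.2 below:

```ts
// app/api/jobs/route.ts — create a job row, kick off the background work, return jobId.
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);

// Placeholder: wire this to the Vercel Workflow (WDK) trigger or another background runner.
async function startSummarizeWorkflow(_args: { jobId: string }): Promise<void> {}

export async function POST(req: Request) {
  const { userId, input } = await req.json(); // in practice, derive userId from the session

  const { data: job, error } = await supabase
    .from("jobs")
    // idempotency_key is omitted here; see 9.3 for how to derive and use it.
    .insert({ user_id: userId, type: "summarize", status: "queued", progress: 0, input })
    .select("id")
    .single();

  if (error) {
    return NextResponse.json({ message: "Could not create job" }, { status: 500 });
  }

  await startSummarizeWorkflow({ jobId: job.id }); // long-running work happens outside this request
  return NextResponse.json({ jobId: job.id }, { status: 202 });
}
```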

9.2 jobs table (minimal columns example)

  • id (uuid)
  • user_id (uuid)
  • type (text) e.g., "summarize", "ingest_file"
  • status (text) enum-like: queued | running | succeeded | failed | canceled
  • progress (int) 0–100 (or step name)
  • input (jsonb) main inputs (can be hashed or a reference)
  • result (jsonb) output (if large, store elsewhere or reference storage)
  • error_code (text), error_message (text)
  • idempotency_key (text, unique) critical: prevents duplicate execution
  • created_at, updated_at, started_at, finished_at

State transitions (example)
- queued → running → succeeded
- queued → running → failed (store error)
- queued/running → canceled
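
A TypeScript mirror of this shape (a hand-written sketch, not Supabase-generated types):

```ts
// Shape of a jobs row as used by the handlers and UI in this playbook (sketch).
type JobStatus = "queued" | "running" | "succeeded" | "failed" | "canceled";

interface JobRow {
  id: string;                             // uuid
  user_id: string;                        // uuid
  type: string;                           // e.g. "summarize", "ingest_file"
  status: JobStatus;
  progress: number;                       // 0–100 (or store a step name separately)
  input: Record<string, unknown>;         // main inputs (hash or reference if large)
  result: Record<string, unknown> | null; // store elsewhere / reference storage if large
  error_code: string | null;
  error_message: string | null;
  idempotency_key: string;                // unique; prevents duplicate execution
  created_at: string;
  updated_at: string;
  started_at: string | null;
  finished_at: string | null;
}
```

Keeping the type hand-written at first avoids a codegen step; switch to Supabase-generated types once the schema stabilizes.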

9.3 Idempotency (required)

  • Duplicate UI submissions / retries / Workflow retries must not increase side effects
  • Define idempotency_key from the input (user_id + params); same key maps to the same job
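
One way to derive the key, assuming SHA-256 over user_id + type + params (note that JSON.stringify is not key-order stable, so normalize params into a canonical form before hashing in real use):

```ts
// Idempotency-key sketch: same user + type + params → same key → same job.
import { createHash } from "node:crypto";

export function idempotencyKey(userId: string, type: string, params: unknown): string {
  // Caveat: JSON.stringify is not key-order stable; normalize params before hashing.
  return createHash("sha256")
    .update(`${userId}:${type}:${JSON.stringify(params)}`)
    .digest("hex");
}
```

With a unique constraint on idempotency_key, an insert that hits the constraint can be treated as “this job already exists”: look up the existing row by key and return its jobId instead of creating a new one.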

9.4 Retry strategy

  • Retryable: external API 5xx / network / rate limit, etc.
  • Not retryable: invalid input, invalid permissions, specification failures
  • Always store failures in error_* and logs
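
A classification helper in that spirit; the status and error codes below are common conventions, not guarantees about any particular API:

```ts
// Retry-classification sketch: retry transient failures, fail fast on everything else.
export function isRetryable(err: { status?: number; code?: string }): boolean {
  if (err.status !== undefined && err.status >= 500) return true;          // external API 5xx
  if (err.status === 429) return true;                                      // rate limited
  if (err.code === "ECONNRESET" || err.code === "ETIMEDOUT") return true;  // network hiccups
  return false; // invalid input / permissions / spec violations: record error_* and fail the job
}
```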

9.5 UI

  • Poll jobs by jobId (this is sufficient initially)
  • Provide UI states for success/failure/canceled
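
A polling sketch on the client, assuming React and a hypothetical /api/jobs/[id] endpoint that returns the job row:

```ts
// useJob.ts — poll a job by id until it reaches a terminal status (client-side sketch).
"use client";
import { useEffect, useState } from "react";

type JobView = { status: string; progress: number } | null;

export function useJob(jobId: string, intervalMs = 2000): JobView {
  const [job, setJob] = useState<JobView>(null);

  useEffect(() => {
    let stopped = false;
    const timer = setInterval(async () => {
      const res = await fetch(`/api/jobs/${jobId}`); // placeholder endpoint
      if (!res.ok || stopped) return;
      const next = await res.json();
      setJob(next);
      if (["succeeded", "failed", "canceled"].includes(next.status)) clearInterval(timer);
    }, intervalMs);
    return () => {
      stopped = true;
      clearInterval(timer);
    };
  }, [jobId, intervalMs]);

  return job;
}
```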

10) Deploy & operations (Vercel-centered minimal process)

Make PR the unit of completion:
1. Create PR
2. Verify in Preview
3. Provide evidence that key tests pass
4. Review → merge

Minimum ops (do not skip)
- Always log critical processing (requestId / jobId)
- Put rate limits/caps on expensive APIs (LLM, etc.)
- Do not allow “it breaks with no clear cause” states
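
One way to cap spend on expensive calls, sketched against a hypothetical usage_events table (one row inserted per expensive call); the limit and table name are placeholders:

```ts
// Daily-cap sketch for expensive calls (LLM etc.): count today's usage before doing the work.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);

const DAILY_LIMIT = 50; // placeholder cap

export async function underDailyCap(userId: string): Promise<boolean> {
  const since = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString();
  const { count, error } = await supabase
    .from("usage_events") // hypothetical table: insert one row per expensive call
    .select("*", { count: "exact", head: true })
    .eq("user_id", userId)
    .gte("created_at", since);

  if (error) return false; // fail closed: if we cannot count, do not spend
  return (count ?? 0) < DAILY_LIMIT;
}
```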


11) Prisma (remove it if you can: default NO)

In the fastest configuration of this playbook, Prisma is not required.

Why to avoid Prisma for fastest MVP

  • Adds dependency and operational surface area (schema generation/migrations)
  • Supabase-centric approach (RLS/type generation/REST) is often enough

Criteria to adopt Prisma (if adopting, make the purpose explicit)

  • Server-side DB operations are complex (transactions, complex relations, analytics)
  • You want consistent app-layer authorization over RLS

Rules if you adopt it
- Do not replace everything at once
- Adopt incrementally from the server layer
- Migrate only when tests already exist


12) “Copy-paste instructions” template for agents

Paste the following at the start of a task.

You are a Coding Agent implementing an MVP. Before writing code:
1) Use this SKILL.md “Question list” to confirm unknowns with the minimum number of questions.
2) Write verifiable Acceptance Criteria and DoD.
3) Write failing tests first.
Then implement only the minimum required to make tests pass—no extra features, no over-abstraction.
Keep changes surgical. Make assumptions explicit and ask for confirmation when needed.
Finally, provide test/typecheck/lint outputs and the verification steps in Preview.

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.