6.1 Ask for a plan before code

Asking for a plan before any code gives you a reviewable artifact you can steer, instead of a finished diff you can only accept or reject. This section covers when to ask for a plan, what a good one looks like, and how to execute it without losing control.

Why “plan first” works

LLMs are eager. If you ask for code immediately, the model has to commit to an approach while requirements are still fuzzy. That’s how you get:

  • the wrong architecture,
  • hidden assumptions,
  • extra features you didn’t want,
  • big rewrites that are hard to review.

Asking for a plan first flips the sequence:

  1. Surface assumptions before they become code.
  2. Choose boundaries before files get entangled.
  3. Define verification (tests/checks) before implementation drifts.
  4. Control scope before it explodes.

Plan-first is a control mechanism

You are not asking for a plan because the model “needs help thinking.” You’re asking because you need a reviewable artifact before you approve changes.

When you should (and shouldn’t) ask for a plan

Ask for a plan when:

  • the task touches multiple files,
  • you’re refactoring,
  • you’re adding a feature with edge cases,
  • you’re debugging a complex failure,
  • you care about keeping the diff small.

You can skip the plan when:

  • it’s a one-line fix,
  • you’re doing a mechanical change (rename, formatting),
  • the constraints and acceptance criteria are already extremely clear.

A simple heuristic

If you can’t confidently predict what files will change, ask for a plan.

What a good plan looks like for code

A useful plan is specific enough to review, but not so detailed it becomes fake certainty. A good plan usually includes:

  • Goal recap: one sentence confirming the intended outcome.
  • Assumptions / questions: what it needs to know or is assuming.
  • Steps (3–8): ordered, small, each step produces a checkable intermediate state.
  • Files in scope: which files will be edited/added.
  • Verification: which tests/commands prove the step worked.
  • Risks: what could break and how you’ll detect it.

Red flag: “we’ll just rewrite it”

If the plan proposes large rewrites for a small change, stop. Ask for a smaller plan with incremental diffs.

A plan-first prompt template

Use this template when you want a plan and nothing else (no code yet):

I want to make a change to a codebase.

Task:
[Describe the change in 1–3 sentences.]

Constraints:
- Language/runtime: [...]
- Dependencies: [...]
- Style: [...]
- Scope: only touch these files: [...]
- Non-scope: do not change [...]

Acceptance criteria:
- [...]

Process rules:
1) First, ask clarifying questions if needed.
2) Then propose a plan with 3–8 steps.
3) For each step: list files touched and how I verify it (tests/commands).
4) Stop after the plan and wait for confirmation. Do not write any code yet.

Why the “stop” instruction matters

Models often “helpfully” jump into implementation. The stop instruction forces a planning checkpoint so you can steer before code is produced.
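
To make the template concrete, here is how it might be filled in for the worked example at the end of this section (the file paths and runtime version are hypothetical placeholders):

I want to make a change to a codebase.

Task:
Add a --repl mode to my CLI calculator without changing existing behavior.

Constraints:
- Language/runtime: Python 3.12
- Dependencies: standard library only
- Style: match the existing code
- Scope: only touch these files: calc/cli.py, tests/test_cli.py
- Non-scope: do not change the parser or evaluation logic

Acceptance criteria:
- Existing argument mode and exit codes are unchanged.
- --repl starts an interactive loop that exits on “exit” or “quit”.
- Invalid expressions print an error and the loop continues.

Process rules:
1) First, ask clarifying questions if needed.
2) Then propose a plan with 3–8 steps.
3) For each step: list files touched and how I verify it (tests/commands).
4) Stop after the plan and wait for confirmation. Do not write any code yet.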

How to evaluate a plan (fast)

Use this quick review checklist:

  • Scope: does it stay inside the requested change?
  • Boundaries: does it preserve module responsibilities (no random coupling)?
  • Dependencies: does it avoid introducing new libraries unnecessarily?
  • Verification: does each step include a way to prove it worked?
  • Reversibility: are changes incremental (easy to roll back) or monolithic?
  • Risk awareness: does it identify likely failure points?

The best plans mention tests early

If the plan doesn’t mention how to verify correctness until the end, ask it to restructure: “add tests first” or “verify after each step.”

How to execute a plan without losing control

Once you approve a plan, execute it in small loops:

  1. Ask for the diff for step 1 only.
  2. Apply it and run the verification for step 1.
  3. Only then proceed to step 2.

This keeps context small and prevents “accidental rewrites.”

A repeatable execution prompt

“Implement step 1 only. Output diff-only changes. Don’t touch anything else. After the diff, list what I should run to verify.”
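
The verification half of that loop should be just as concrete. For a Python project, the step-1 check might be nothing more than running the existing test suite before applying the diff and again after (assuming pytest; substitute your project’s test runner):

python -m pytest -q

If the suite passes before the diff and after it, the step preserved existing behavior; if it fails only after, you know exactly which step to revisit.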

Common failure modes (and fixes)

Failure: the plan is vague

Fix: ask for more specificity: “List exact files touched and exact verification commands per step.”

Failure: the plan is too big

Fix: reduce scope or split the plan: “Rewrite this plan to produce a working intermediate after step 1.”

Failure: the model ignores the plan-first instruction and writes code

Fix: restate the rule and ask again: “Stop. I only want a plan and questions. No code.”

Failure: the plan is wrong (bad architecture)

Fix: treat it like a design review: point out the constraint it violates and ask for an alternative plan with tradeoffs.

Worked example: a small feature plan

Example task: “Add a --repl mode to my CLI calculator without changing existing behavior.”

A good plan might look like:

  1. Clarify CLI contract: confirm existing argument mode + exit codes. (Verify: run current tests.)
  2. Make CLI testable: refactor main to accept injected streams if needed. (Verify: new unit test for existing behavior.)
  3. Add REPL loop: implement read/eval/print loop with exit/quit. (Verify: unit test with io.StringIO.)
  4. Handle errors in REPL: invalid expressions print error and continue. (Verify: test invalid input doesn’t exit early.)
  5. Update README: document REPL usage and examples. (Verify: manual run.)

Notice what makes it good: it’s incremental, verification is built in, and it doesn’t touch the parser unless necessary.
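
To see how the plan maps to code, here is a minimal sketch of what steps 2–4 might produce, assuming the calculator exposes an evaluate() function and that the REPL takes injected streams (both names are hypothetical; your module will differ):

import io
import operator
import sys

def evaluate(expression: str) -> float:
    # Hypothetical stand-in for the existing calculator; a real change would
    # reuse the project’s own parser. Raises ValueError on bad input.
    try:
        a, op, b = expression.split()
        a, b = float(a), float(b)
    except ValueError:
        raise ValueError(f"cannot parse {expression!r}")
    ops = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    if op not in ops:
        raise ValueError(f"unknown operator {op!r}")
    return ops[op](a, b)

def repl(stdin=sys.stdin, stdout=sys.stdout) -> None:
    # Step 2: streams are injected so the loop is testable without a TTY.
    for line in stdin:
        line = line.strip()
        if line in ("exit", "quit"):   # step 3: explicit exit words
            break
        if not line:
            continue
        try:
            print(evaluate(line), file=stdout)
        except (ValueError, ZeroDivisionError) as exc:
            # Step 4: report the error and keep the loop alive.
            print(f"error: {exc}", file=stdout)

def test_repl_recovers_from_invalid_input():
    # Step 3/4 verification: drive the loop with io.StringIO, as the plan says.
    stdin = io.StringIO("2 * 3\nnot math\nquit\n")
    stdout = io.StringIO()
    repl(stdin, stdout)
    out = stdout.getvalue()
    assert "6" in out        # valid expression evaluated
    assert "error" in out    # invalid line reported, loop continued

Each piece maps to a named plan step, which is what makes the eventual diffs easy to review one step at a time.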
