17.2 Architecture sketching with the model
On this page
- Goal: get options and tradeoffs, not “one true architecture”
- Inputs to provide (so architecture is grounded)
- Ask for 2–3 options with tradeoffs
- Define boundaries early (where code lives)
- Architecture + verification (tests and ship points)
- Copy-paste prompts
- Common anti-patterns (and fixes)
- Where to go next
Goal: get options and tradeoffs, not “one true architecture”
The model is great at generating plausible architectures. The risk is accepting the first plausible architecture as “the right one.”
Your goal is:
- generate a few viable designs,
- understand tradeoffs,
- choose one based on your constraints,
- keep it simple enough to implement quickly.
The model can propose; you choose. Choosing is how you keep architectural control.
Inputs to provide (so architecture is grounded)
Architecture prompts fail when inputs are vague. Provide:
- your one-page spec (goal, constraints, acceptance criteria),
- expected scale (solo tool vs public app),
- deployment assumptions (local only vs cloud),
- integration needs (databases, APIs, auth),
- non-goals (what you’re not doing yet).
Without these, the model will assume “production SaaS” and overbuild.
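As a concrete illustration, the grounding inputs can be as short as a block like this pasted alongside the spec (every value here is hypothetical):

```text
Scale: solo internal tool, a handful of users, low traffic
Deployment: runs locally from the CLI for v1; no cloud hosting yet
Integrations: SQLite for storage; one LLM provider behind a single adapter module
Non-goals for v1: auth, multi-tenancy, background jobs
```

A few lines like these are often enough to steer the model away from the "production SaaS" default.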
Ask for 2–3 options with tradeoffs
Always demand multiple options. For each option, ask for:
- what it optimizes for,
- what it sacrifices,
- what complexity it adds,
- what ship point it enables.
Also ask for a recommendation with rationale.
When you force multiple options, the model surfaces tradeoffs. Tradeoffs are where good architecture decisions live.
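For a sense of the shape you want back (the options below are purely illustrative, not recommendations):

```text
Option A: single script. Optimizes for the fastest first run; sacrifices structure and tests; walking skeleton today.
Option B: CLI entrypoint + domain module + LLM adapter. Optimizes for testability; adds one module of setup; tested core soon after.
Option C: hosted web app with a job queue. Optimizes for many users; adds deployment, cost, and operational risk; slowest path to a first ship.
```

Once the options sit side by side, the recommendation is something you can check rather than accept on faith.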
Define boundaries early (where code lives)
For AI apps, boundaries matter more than frameworks. A durable set of boundaries:
- entrypoint (CLI/HTTP),
- domain logic (pure functions),
- LLM adapter boundary (one module),
- schemas/prompts as versioned artifacts,
- tests.
If you lock these boundaries early, the rest can evolve.
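Here is a minimal Python sketch of that boundary set, using a hypothetical ticket-summarizer as the example; the names (`build_prompt`, `summarize_ticket`, `LLMClient`) are illustrative, and in a real project each labeled section would be its own file:

```python
from dataclasses import dataclass
from typing import Protocol


# --- domain.py: pure logic. No network calls, no provider SDK imports. ---
@dataclass(frozen=True)
class Summary:
    title: str
    body: str


def build_prompt(ticket_text: str) -> str:
    """Pure function: deterministic, trivially testable without an LLM."""
    return f"Summarize this support ticket in two sentences:\n\n{ticket_text}"


# --- llm_adapter.py: the one module allowed to talk to a provider. ---
# (As a separate file it would start with: from domain import Summary, build_prompt)
class LLMClient(Protocol):
    def complete(self, prompt: str) -> str: ...


def summarize_ticket(client: LLMClient, ticket_text: str) -> Summary:
    """Thin adapter: build the prompt, call the client, shape the result."""
    raw = client.complete(build_prompt(ticket_text))
    return Summary(title="Ticket summary", body=raw.strip())
```

Because the provider only appears behind `LLMClient`, swapping SDKs or models later touches the adapter module and nothing else; the CLI or HTTP entrypoint just wires a concrete client into `summarize_ticket`.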
Architecture + verification (tests and ship points)
Architecture must include a verification plan:
- what tests exist at SP2 and SP3,
- how to run the app and tests,
- where logging/error handling lives,
- how to keep diffs small (diff-only pattern).
If architecture has no verification story, it’s a demo, not an engineering plan.
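Continuing the hypothetical sketch above (and assuming its two sections live in `domain.py` and `llm_adapter.py`), an SP2-style test can exercise both the pure logic and the adapter against a fake client, with no API key and no network:

```python
# test_summarize.py: run with `pytest`.
from domain import build_prompt
from llm_adapter import summarize_ticket


class FakeClient:
    """Stands in for the real provider behind the adapter boundary."""

    def complete(self, prompt: str) -> str:
        assert "support ticket" in prompt  # the prompt contract still holds
        return "User cannot log in; the password reset email never arrives."


def test_build_prompt_includes_ticket_text():
    assert "printer is on fire" in build_prompt("printer is on fire")


def test_summarize_ticket_wraps_model_output():
    summary = summarize_ticket(FakeClient(), "printer is on fire")
    assert summary.body.startswith("User cannot log in")
```

Because nothing here touches a real provider, the suite runs in milliseconds, which makes it cheap to run after every small diff.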
Copy-paste prompts
Prompt A: architecture options
Here is my one-page spec:
(paste)
Task:
Propose 3 architecture options (A/B/C) for v1.
For each option include:
- Modules/layers (file map)
- Data flow diagram (text is fine)
- Tradeoffs (speed, complexity, cost, risk)
- What ship point it enables first
Then recommend one option and explain why.
No code yet.
Prompt B: convert chosen option into an implementation plan
We choose Option B.
Task:
Propose a 5–10 step implementation plan.
For each step:
- files created/changed
- verification (tests/commands)
Stop after the plan.
Architecture prompts should end in a plan you approve, not in a giant code dump.
Common anti-patterns (and fixes)
Anti-pattern: overbuilding the architecture
Fix: restate your constraints ("v1, minimal, single developer, ship the walking skeleton first") and ask for a simpler option.
Anti-pattern: framework selection dominates the conversation
Fix: focus on boundaries and contracts first. Frameworks are replaceable; boundaries are not.
Anti-pattern: no tradeoffs presented
Fix: demand 2–3 options with pros/cons. If you can’t compare, you can’t choose deliberately.