13.1 Turning a prototype into a repo

Goal: convert a “good run” into a real repo

AI Studio is great at producing “a working answer.” Your job is to turn that answer into a project you can maintain.

By the end of this page, you should have:

  • a folder on disk with a real file tree,
  • one command that runs the project,
  • one command that verifies behavior (tests or a repeatable check),
  • a commit that captures the first working state.
Why this step is non-negotiable

If you can’t run it locally, you can’t reliably iterate. Exporting is what makes the vibe loop reality-based.

What to export (the full artifact set)

When people “export,” they often copy code and forget everything else. That creates irreproducible systems. Export the full set:

1) Code

  • all source files,
  • tests (even minimal),
  • scripts/entrypoints.

2) Prompt artifacts

  • the prompt(s) that generated the working output,
  • any schemas you used (JSON schema, output formats),
  • house rules (diff-only, no new deps, etc.).

3) Settings snapshot (a template is sketched below)

  • which model you used,
  • key generation settings (temperature, max output tokens),
  • safety settings (at least note them),
  • any tool-calling settings if relevant.

4) A runbook (README)

  • how to run,
  • how to test,
  • what inputs are supported,
  • where to configure env vars.
Export should be reproducible by a teammate

A good export is “someone else can clone and run it” without needing your chat history.
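
A settings snapshot can be a short text file stored next to your prompts, for example prompts/settings.md. A minimal sketch; the filename and fields are illustrative, not a required format:

Model: [model name]
Temperature: [value you used]
Max output tokens: [value you used]
Safety settings: [defaults, or anything you changed]
Tools: [none, or list them]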

Freeze the prototype (capture what mattered)

Before you touch your editor, freeze the prototype state so you can reproduce it later:

  • copy the final prompt into a file (even temporary),
  • record model + settings,
  • record the acceptance criteria you used to judge success,
  • record any known limitations (“doesn’t handle X yet”).
Don’t rely on memory

“I think I used that other prompt” is how you lose reproducibility. Save the prompt now.
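
One way to make this concrete is a single freeze file at the repo root. A minimal sketch, assuming a file named PROTOTYPE.md (the name is arbitrary):

Prompt: prompts/summarize_v1.md, copied verbatim from the working run
Model + settings: see prompts/settings.md
Acceptance criteria: [what you checked to call the run “good”]
Known limitations: [e.g. “doesn’t handle empty input yet”]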

Create the repo skeleton

Create a new folder and initialize git early. The repo is your safety net.

Minimum repo contents

  • README.md: run/test commands and purpose.
  • src/ (or app/): your code.
  • tests/: even a single test to start.
  • .gitignore: keep secrets and build artifacts out of git.
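
A starting .gitignore can be short. A sketch for the Python tree below; adjust the patterns to your language:

.env
__pycache__/
*.pyc
.venv/
dist/
build/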

Example file tree templates

Pick a structure appropriate to your language. Keep it boring.

Example: Python CLI / small app

my-app/
  README.md
  .gitignore
  .env.example
  src/
    my_app/
      __init__.py
      __main__.py
      cli.py
      core.py
      llm_client.py
      prompts/
        system.md
        summarize_v1.md
  tests/
    test_smoke.py

Example: Node.js CLI / small server

my-app/
  README.md
  .gitignore
  .env.example
  package.json
  src/
    index.ts
    llm/
      client.ts
      prompts/
        system.md
        summarize_v1.md
  tests/
    smoke.test.ts
What matters most

You don’t need the perfect structure. You need a structure that makes boundaries obvious: entrypoint, core logic, model-call adapter, prompts, tests.

Paste generated output into files (without drift)

This is where people accidentally break exports by “improving” while pasting. Don’t.

Rules for the paste step

  • Paste literally: create files exactly as generated.
  • Don’t refactor yet: first make it run.
  • Don’t add deps yet: if it needs deps you didn’t expect, treat that as a generation bug to fix deliberately later.
  • Keep a file map: note what each file does as you paste (1 line per file).

Integrity check (60 seconds)

  • search for eval/exec on user input,
  • search for secrets accidentally pasted,
  • confirm imports match file paths,
  • confirm the entrypoint exists.
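
The first two checks are easy to script. A minimal sketch in Python, assuming your code lives under src/; the patterns are illustrative, not exhaustive:

```python
# integrity_check.py: quick scan for risky patterns before the first commit.
import re
import sys
from pathlib import Path

RISKY = re.compile(r"\b(eval|exec)\s*\(")  # eval/exec anywhere is worth a look
SECRET = re.compile(r"(api[_-]?key|secret|token)\s*[=:]\s*['\"][^'\"]{8,}", re.IGNORECASE)

findings = []
for path in Path("src").rglob("*.py"):
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if RISKY.search(line):
            findings.append(f"{path}:{lineno}: possible eval/exec on input")
        if SECRET.search(line):
            findings.append(f"{path}:{lineno}: possible hardcoded secret")

# Print everything found and exit non-zero so this can gate a commit hook.
print("\n".join(findings) or "No obvious issues found.")
sys.exit(1 if findings else 0)
```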

Make it runnable (SP1)

SP1 is not “the code exists.” SP1 is “I can run it on demand.”

Define the run command

Your README should contain exactly one primary run command. Examples:

  • python -m my_app ...
  • npm run dev or node dist/index.js (compile first if you use the TypeScript tree above)
  • uvicorn my_app.api:app (if you’re building a web API)
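
With the Python layout above, python -m my_app works once __main__.py delegates to the CLI. A minimal sketch; main() is an assumption about what cli.py exposes, and with a src/ layout you install the package first (for example pip install -e .) so the import resolves:

```python
# src/my_app/__main__.py: makes `python -m my_app` the single entrypoint.
import sys

from my_app.cli import main  # assumes cli.py defines main() -> int

if __name__ == "__main__":
    sys.exit(main())
```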

Smoke test (manual)

Run the smallest input that should work. If it fails, treat that as your first debugging exercise:

  • capture the exact command and output,
  • ask the model for hypotheses + tests,
  • apply a minimal diff-only fix.
SP1 ends in a commit

As soon as it runs, commit. This is your “known-good” base for all future iterations.

Add minimal tests or checks (SP2)

SP2 is “we can detect regressions.” You have two options early:

  • Automated tests: preferred, even if it’s just one smoke test.
  • Repeatable manual checks: acceptable early, but convert to tests soon.
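
If you go the manual route, write each check down so it stays repeatable. A sketch of a one-check record; the fields are illustrative:

Check: [name, e.g. “summarize a short file”]
Command: [the exact command you ran]
Input: [exact input, or path to a fixture]
Expected: [what a passing result looks like]
Last verified: [commit hash or date]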

Minimum test set (practical)

For almost any app, start with:

  • 1–3 happy-path cases,
  • 1 invalid-input case,
  • 1 “must not crash” case,
  • (if using schemas) 1 “invalid output” case handled gracefully.

The goal is not comprehensive coverage; it’s creating a truth source you can iterate against.
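
A minimal sketch of tests/test_smoke.py for the Python layout above. summarize and my_app.core are assumptions; substitute your real function and adjust the expected failure behavior to your actual contract:

```python
# tests/test_smoke.py: the smallest useful regression net.
import pytest

from my_app.core import summarize  # hypothetical pure function


def test_happy_path():
    # Happy path: a normal input yields a non-empty result.
    assert summarize("The quick brown fox jumps over the lazy dog.")


def test_invalid_input():
    # Invalid input: failure should be explicit, not accidental.
    with pytest.raises(ValueError):
        summarize(None)


def test_must_not_crash_on_empty():
    # Must-not-crash: empty input is handled without an unhandled exception.
    summarize("")
```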

Don’t skip tests because “the model wrote it”

AI output is faster to generate than it is to evaluate. Tests are what keep speed from turning into chaos.

Ship points and commits (how to stay safe)

Use commits as guardrails. A good ship-point commit is:

  • small enough to review,
  • green (tests pass),
  • described by a clear message (“SP1: runnable skeleton”).

A practical commit sequence

  1. SP0: repo skeleton (README, gitignore, empty src/tests).
  2. SP1: runnable skeleton (one command runs).
  3. SP2: minimal tests/checks (one command verifies).

This gives you stable checkpoints for refactors and feature work.

Quick review checklist (before you go on)

  • Can I run it from a clean terminal session?
  • Can I run verification (tests/checks) in one command?
  • Are secrets excluded from git (.env, credentials files)?
  • Is there a clear entrypoint and clear boundaries (core vs LLM adapter)?
  • Do I have the prompt(s) and settings saved as files?

Common pitfalls (and how to avoid them)

Pitfall: “It works in AI Studio but not locally”

Fix: treat environment as part of the spec. Pin runtime versions, capture env vars, and add a minimal “smoke test” you can run locally.
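
Pinning can be lightweight. A sketch with placeholder versions; pin what you actually tested with:

requirements.txt (or package.json for Node), exact versions only:
  some-dependency==1.2.3

.python-version (or the “engines” field in package.json):
  3.12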

Pitfall: drift while pasting

Fix: paste literally, commit, then refactor. Don’t improvise during export.

Pitfall: secrets in code or prompts

Fix: env vars + secret management. Never paste keys into AI Studio conversations.
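
In Python, reading the key from the environment takes a few lines. A minimal sketch; the variable name is an assumption, so match it to your .env.example (set it in your shell, or load .env with the tool of your choice):

```python
# Fail fast if the key is missing, with a message that points at the fix.
import os

API_KEY = os.environ.get("MODEL_API_KEY")  # illustrative name
if not API_KEY:
    raise RuntimeError("MODEL_API_KEY is not set; copy .env.example to .env and fill it in.")
```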

Pitfall: model calls scattered across the codebase

Fix: add a single wrapper module for model calls (see 13.3). This boundary is what keeps your app maintainable.
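
The wrapper can start very small. A minimal sketch of llm_client.py, deliberately SDK-agnostic; _call_api is a placeholder for the one spot where your provider’s real SDK call goes:

```python
# src/my_app/llm_client.py: the ONLY place that talks to the model API.
from pathlib import Path

PROMPT_DIR = Path(__file__).parent / "prompts"


def load_prompt(name: str) -> str:
    # Prompts live as versioned files, not inline strings.
    return (PROMPT_DIR / f"{name}.md").read_text()


def generate(prompt_name: str, user_input: str, *, temperature: float = 0.2) -> str:
    # Single entry point for model calls; core logic never imports an SDK.
    prompt = load_prompt(prompt_name).replace("{input}", user_input)
    return _call_api(prompt, temperature=temperature)


def _call_api(prompt: str, *, temperature: float) -> str:
    # Wire in your provider's SDK here. Keeping it private means the rest of
    # the codebase never depends on a specific vendor.
    raise NotImplementedError("replace with your model SDK call")
```

Because everything routes through generate(), swapping providers or adding retries later touches exactly one file.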

Copy-paste templates

Template: README “Run + Test” section

## Run

1) Create a virtual environment / install deps (if any)
2) Set required environment variables (see `.env.example`)
3) Run:

```sh
[one command to run]
```

## Test

```sh
[one command to run tests]
```

Template: 1-line-per-file map

File map:
- src/.../cli.py: parses args and prints output
- src/.../core.py: pure business logic
- src/.../llm_client.py: the ONLY place that calls the model API
- src/.../prompts/: prompt files + versions
- tests/: regression + smoke tests
