0.5 A note on UI changes & product naming
Why this matters
AI products evolve quickly. Buttons move, labels change, features get renamed, and “the right place” to do something shifts over time. If you treat the UI as the truth, you’ll constantly feel lost. If you treat the concepts as the truth, UI changes become minor navigation problems.
Learn the underlying concepts (models, prompts, context, structured output, tools, exports). Then use the UI as one possible way to access those concepts.
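Concretely, each of those concepts has a code-level counterpart that outlives any particular screen. Here is a minimal sketch using the google-generativeai Python SDK; the SDK name, model name, and parameter names are point-in-time assumptions and may themselves have been renamed by the time you read this:

```python
# Minimal sketch: the concepts behind the UI, expressed in code.
# SDK, model name, and parameter names are point-in-time assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # concept: credentials, not a UI login

model = genai.GenerativeModel(
    "gemini-1.5-flash",                  # concept: model selection
    generation_config=genai.GenerationConfig(
        temperature=0.2,                 # concept: sampling control
    ),
)

response = model.generate_content("Summarize this in one sentence: ...")
print(response.text)                     # concept: output you can verify
```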
What will change (and what won’t)
Things that commonly change
- Navigation: where settings live, what menus are called, and which screen is “home.”
- Defaults: which model is preselected, safety toggles, temperature/top-p controls.
- Terminology: “tools” vs “function calling,” “structured output” vs “JSON mode,” etc.
- Export paths: how you move from a playground prompt to code you can run.
Things that are usually stable
- The workflow: prompt → output → run → verify → refine.
- The primitives: messages/instructions, context windows, sampling controls, and tool execution.
- The engineering constraints: tests, error handling, secrets hygiene, and observability.
If a screenshot or label doesn’t match, search the UI for the concept name (e.g. “temperature”, “structured output”, “tools”). The concept is the target; the UI is just a route.
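A concrete case of label drift: “JSON mode” and “structured output” usually name the same concept, a constraint on the response format. In code, that concept is a single config field, whatever the UI calls it this quarter. The field and model names below are point-in-time assumptions:

```python
# Whether the UI calls it "JSON mode" or "structured output", the
# concept is the same: constrain the response format.
import google.generativeai as genai

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",  # the concept: JSON-constrained output
    ),
)

response = model.generate_content("List three fruits as a JSON array of strings.")
print(response.text)  # e.g. ["apple", "banana", "cherry"]
```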
Product naming: a practical map
Google’s ecosystem has multiple surfaces that can sound similar. Here’s a simple, practical way to think about it:
- Google AI Studio: a fast place to prototype prompts, structured outputs, tool calls, and multimodal inputs—optimized for iteration.
- Google Cloud / Vertex AI: the production side of the house, with projects, IAM, quotas, deployments, monitoring, and enterprise workflows.
When something is renamed, the most important question is: “Does this change the capabilities or just the UI wording?” Most of the time it’s wording.
How to adapt when the UI doesn’t match
When a page in this guide refers to a control or screen you can’t find, do this:
- Search by intent: look for the concept (model selector, safety, temperature, JSON/structured output, tools).
- Check the mode: you might be in the wrong workflow (chat vs structured output vs tool calling).
- Look for “advanced” toggles: many controls are hidden behind an expand/collapse.
- Confirm you’re signed into the right account: org accounts can restrict features.
- Try a minimal reproduction: start a fresh session and see if the feature appears there.
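For the minimal-reproduction step, the code version is one fresh call with defaults and no extras. If the feature works here, the problem lives in your session, account, or configuration rather than in the feature itself. SDK and model names are point-in-time assumptions:

```python
# A minimal reproduction: one fresh call, no history, no tools, no extras.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-1.5-flash")  # defaults only
response = model.generate_content("Say OK.")       # smallest possible prompt
print(response.text)
```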
When you notice a mismatch, write a 1–2 line note for yourself: what the guide called it vs what the UI calls it now. You’ll save time in your next session.
Is it a bug, a mismatch, or a permissions issue?
If something is missing or behaving strangely, classify it before you burn time:
- Mismatch: the UI moved/renamed. You can still do the thing somewhere else.
- Permissions/quota: you see the feature sometimes, or it fails intermittently, or it’s disabled for your account/project.
- Bug: the UI shows the control but it doesn’t take effect, or the behavior contradicts the visible settings.
Quick triage checklist
- Try in a new session with a minimal prompt.
- Try a different model (if available) to rule out model-specific limits.
- Check for quota/limit errors and retry behavior.
- Remove “extras” (tools, images, huge context) to isolate the failure.
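For the quota-and-retry step, a small wrapper makes intermittent limit errors visible instead of mysterious. This is a generic sketch: call_model is a hypothetical stand-in for whatever SDK call you are debugging, and the string matching on error messages is a heuristic, not an official error contract.

```python
# Triage sketch: retry with backoff to separate quota/limit errors from real bugs.
import time

def call_with_retry(call_model, attempts=3, base_delay=2.0):
    for attempt in range(attempts):
        try:
            return call_model()
        except Exception as err:  # narrow this to your SDK's actual error types
            message = str(err).lower()
            transient = "quota" in message or "rate" in message or "429" in message
            if not transient or attempt == attempts - 1:
                raise  # not a limit issue (or out of retries): treat as a real failure
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
```

If retries with backoff eventually succeed, you are looking at quota, not a bug.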
Habits that make you UI-change-proof
- Write prompts like specs: the UI can change, but your spec stays stable.
- Keep small, runnable artifacts: export to code early; treat the playground as temporary.
- Version what matters: prompts, schemas, and eval cases (so changes are measurable).
- Prefer verification over trust: if a control is unclear, test the behavior.
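Here is one concrete shape for those habits: the prompt, the schema, and the check live in versioned files, and a tiny eval verifies behavior instead of trusting a UI label. The file names and the get_model_output callable are hypothetical stand-ins for your own artifacts:

```python
# Habits sketch: version the prompt and the output contract, then verify.
import json

from jsonschema import validate  # pip install jsonschema

def eval_case(get_model_output):
    with open("prompt.txt") as f:       # versioned prompt-as-spec
        prompt = f.read()
    with open("schema.json") as f:      # versioned output contract
        schema = json.load(f)
    output = json.loads(get_model_output(prompt))
    validate(instance=output, schema=schema)  # fails loudly if the contract breaks
```

Run it whenever the UI (or the model default) changes; a failing eval tells you whether the drift was cosmetic or behavioral.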
Assume the UI will drift. Build a workflow that survives that drift: small steps, exported code, and verification.