22.4 Handling contradictions and multiple versions
Goal: detect and handle conflicts safely
Real document sets contain contradictions:
- old docs vs new docs,
- policy vs implementation,
- different teams documenting the same behavior differently.
In long-context systems, contradictions cause the model to “average” sources and produce a confident but wrong synthesis. Your job is to surface conflicts explicitly.
Conflict-aware behavior is a feature
“I found conflicting sources” is the correct output in many cases. Don’t force a single answer when the sources disagree.
Why contradictions happen
Common causes:
- Version drift: behavior changed but docs didn’t.
- Different scopes: policy describes global behavior; implementation has environment exceptions.
- Ambiguous language: “should” vs “must” vs “usually.”
- Partial updates: one doc updated, another not.
A conflict-handling workflow
- Detect: require the model to list conflicting statements when they exist.
- Localize: show which chunks contain the conflict (with quotes).
- Classify: is it a version conflict, scope conflict, or ambiguity?
- Decide: apply a policy (see below) or ask a clarifying question.
- Record: add a note to a “contradiction ledger” so the same conflict is handled the same way next time (see the sketch below).
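The ledger can be as simple as an append-only JSONL file keyed by topic. A minimal sketch in Python; the field names and file path are illustrative, not part of any API:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative record shape; mirror whatever your answer prompt returns.
@dataclass
class ConflictRecord:
    topic: str           # short label, e.g. "retry limit"
    chunk_a: str         # chunk id of the first source
    chunk_b: str         # chunk id of the second source
    conflict_type: str   # "version" | "scope" | "ambiguity" | "unknown"
    resolution: str      # the decision applied, or "escalated"
    recorded_at: str = ""

    def __post_init__(self):
        if not self.recorded_at:
            self.recorded_at = datetime.now(timezone.utc).isoformat()

LEDGER_PATH = "contradiction_ledger.jsonl"  # hypothetical location

def record_conflict(record: ConflictRecord) -> None:
    """Append the conflict to the append-only JSONL ledger."""
    with open(LEDGER_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

def known_resolution(topic: str) -> str | None:
    """Return a previously recorded resolution for this topic, if any."""
    try:
        with open(LEDGER_PATH, encoding="utf-8") as f:
            for line in f:
                entry = json.loads(line)
                if entry["topic"] == topic:
                    return entry["resolution"]
    except FileNotFoundError:
        pass
    return None
```

Checking `known_resolution()` before answering is what makes the “handled consistently” step real: if the conflict has already been decided, reuse that decision instead of re-deriving it.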
Decision policies (what to do when sources disagree)
Pick a policy and encode it in prompts:
- Recency policy: prefer newer versions when version metadata is available.
- Authority policy: prefer sources marked “canonical” (e.g., official spec) over informal docs.
- Implementation policy: prefer what the code/tests do (if the codebase is the source of truth).
- Escalation policy: when conflicts are high-impact, require human decision.
The important thing is consistency: the same conflict should not produce different answers across runs.
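One way to get that consistency is to resolve the conflict in code rather than leaving the choice to the model. A minimal sketch, assuming your chunks carry version and authority metadata (the field names here are illustrative):

```python
from typing import Optional, TypedDict

class SourceMeta(TypedDict):
    chunk_id: str
    version: Optional[str]  # e.g. "2024-06-01"; None if unknown
    canonical: bool         # True for sources marked authoritative

def resolve(a: SourceMeta, b: SourceMeta) -> str:
    """Apply policies in a fixed order: authority, then recency, else escalate.
    Returning the same winner for the same inputs keeps answers consistent."""
    # Authority policy: a canonical source beats an informal one.
    if a["canonical"] != b["canonical"]:
        return a["chunk_id"] if a["canonical"] else b["chunk_id"]
    # Recency policy: prefer the newer version when both are dated.
    # (String comparison works for ISO dates; use a real parser for semver.)
    if a["version"] and b["version"] and a["version"] != b["version"]:
        return a["chunk_id"] if a["version"] > b["version"] else b["chunk_id"]
    # Escalation policy: no safe tie-breaker, hand the decision to a human.
    return "escalate"
```

The fixed ordering of checks is the point: the model reports the conflict, and this function (not sampling temperature) decides which source wins.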
Copy-paste prompts
Prompt: answer with conflict detection
Answer the question using ONLY the provided chunks.
Rules:
- If sources conflict, do not guess. Report the conflict.
- Quote the conflicting statements and include chunk ids.
- If needed, ask 1 clarifying question to resolve the conflict.
Return JSON:
{
  "answer": string|null,
  "conflicts": [{
    "topic": string,
    "source_a": { "chunk_id": string, "quote": string },
    "source_b": { "chunk_id": string, "quote": string },
    "type": "version" | "scope" | "ambiguity" | "unknown",
    "suggested_resolution": string
  }],
  "follow_up_question": string|null
}