42.2 When fine-tuning hurts (rapidly changing knowledge)

Fine-tuning is the wrong tool for facts that change. This section explains why, using knowledge injection and hallucination as the two main failure modes.

Knowledge Injection

Do not fine-tune to teach the model facts.

If you fine-tune on "Our CEO is Alice" and Bob later becomes CEO, you have to retrain the model ($$$). With RAG, you just update the document in the database ($0).
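A minimal sketch of why the RAG update is cheap, using hypothetical names (`Document`, `DocumentStore`) and a toy keyword matcher in place of a real vector database: changing the fact is a one-line data write, not a training run.

```python
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


class DocumentStore:
    """Toy in-memory store standing in for a real vector database."""

    def __init__(self) -> None:
        self._docs: dict[str, Document] = {}

    def upsert(self, doc: Document) -> None:
        # Overwriting the entry is the entire "knowledge update".
        self._docs[doc.doc_id] = doc

    def retrieve(self, query: str) -> list[Document]:
        # Naive keyword match; a real system would use embedding similarity.
        terms = query.lower().split()
        return [d for d in self._docs.values()
                if any(t in d.text.lower() for t in terms)]


store = DocumentStore()
store.upsert(Document("leadership", "Our CEO is Alice."))

# Bob becomes CEO: overwrite one document, no retraining required.
store.upsert(Document("leadership", "Our CEO is Bob."))

context = store.retrieve("Who is the CEO?")
print([d.text for d in context])  # ['Our CEO is Bob.']
```

The retrieved text is then passed to the model at query time, so the answer tracks whatever is currently in the store.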

The Hallucination Trap

Fine-tuned models are confident liars. Train one on a medical dataset and it will invent diseases that sound real, because it learns the "sound" of the data, not the truth.
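One way RAG sidesteps this trap is by grounding the prompt in retrieved text and explicitly allowing "I don't know." A minimal sketch, with a hypothetical helper name and prompt wording (not any particular library's API):

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt that restricts the model to retrieved context."""
    if not passages:
        # Nothing retrieved: surface that directly instead of letting the
        # model free-associate a plausible-sounding answer.
        return (
            "Answer exactly: 'I could not find this in the knowledge base.'\n"
            f"Question: {question}"
        )
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you do not know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )


print(build_grounded_prompt("Who is the CEO?", ["Our CEO is Bob."]))
```

The fine-tuned model has no such escape hatch: whatever it says comes out in the same confident register, right or wrong.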

Where to go next