
Fine-tuning

Fine-tuning updates a pretrained model's weights with a smaller, targeted dataset. It can make output more consistent in a specific tone of voice or with domain terminology, but it requires high-quality training data and can cost more than strong prompt engineering.

Alternatives such as instruction prompts, few-shot examples, and retrieval (fetching relevant documents before generating a response) often reduce the need for fine-tuning. Related terms: LLM, AI model, and prompt engineering.
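As a sketch of one such alternative, the snippet below builds a few-shot prompt: labeled input/output pairs are prepended to the query so the model imitates them at inference time, with no weight updates. The ticket-routing examples and the `Input:`/`Output:` template are illustrative assumptions, not a fixed API.

```python
# Hypothetical sketch: few-shot prompting as an alternative to fine-tuning.
# The template and example pairs below are assumptions for illustration.

def build_few_shot_prompt(examples, query):
    """Prepend labeled input/output pairs, then leave the output blank
    for the model to complete in the same style."""
    parts = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("refund request", "Billing"),
    ("password reset", "Account"),
]
prompt = build_few_shot_prompt(examples, "charged twice this month")
print(prompt)
```

Because the examples live in the prompt rather than in the weights, they can be swapped per request, which is often cheaper to iterate on than retraining.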


Key characteristics

  • Changes model weights with domain-specific training data instead of only changing prompts.
  • Fits use cases needing stable tone, formatting, or specialized answers across many requests.
  • Is costlier and riskier than good prompting when data is small, biased, or poorly curated.
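To make "changes model weights" concrete, here is a toy sketch: a one-parameter linear model whose pretrained weight is nudged toward a small domain dataset by plain gradient descent. The data, learning rate, and step count are illustrative assumptions, not a recipe for tuning a real LLM.

```python
# Toy illustration of weight updates: fine-tune y = w * x on domain data.
# All numbers here are assumptions chosen to keep the example readable.

def fine_tune(w, data, lr=0.01, steps=200):
    """Nudge the pretrained weight w toward the domain data
    by gradient descent on mean squared error."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrained_w = 1.0                       # weight learned in "pretraining"
domain_data = [(1.0, 3.0), (2.0, 6.0)]   # target behavior: y ≈ 3x
tuned_w = fine_tune(pretrained_w, domain_data)
print(round(tuned_w, 2))  # converges from 1.0 toward 3.0
```

The same trade-off from the list above shows up even here: with only two data points, the tuned weight reflects that tiny dataset exactly, for better or worse, which is why data quality matters more than data volume.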