Prompting vs. Fine-Tuning
Prompt engineering and fine-tuning represent fundamentally different approaches to adapting LLMs for specific tasks.
Fine-tuning modifies model parameters through additional training on task-specific data. It requires computational resources, labeled datasets, and ML expertise. The resulting model is specialized but its internal changes are opaque.
Prompt engineering operates on the model’s existing knowledge without parameter modification. It requires only natural language instructions. The model remains general-purpose; behavior changes are achieved entirely through input design.
The key tradeoff: fine-tuning can achieve higher task-specific performance but demands significant resources and yields a specialized artifact. Prompting is accessible and flexible but operates within the constraints of what the model already knows and what fits in its context window.
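The contrast can be made concrete with a minimal sketch (the task and data below are hypothetical, and no real model API is invoked): prompting packs the task specification into the input text, while fine-tuning expresses it as labeled training pairs destined to update the model's weights.

```python
# Prompting: the task is specified entirely in the input;
# model parameters are never touched.
prompt = (
    "Classify the sentiment of the review as 'positive' or 'negative'.\n"
    "Review: The battery lasts all day and the screen is gorgeous.\n"
    "Sentiment:"
)

# Fine-tuning: the same task is specified as labeled examples
# that additional training will bake into the model's parameters.
finetune_dataset = [
    {"input": "The battery lasts all day.", "label": "positive"},
    {"input": "It broke after two days.", "label": "negative"},
]
```

The artifacts differ accordingly: the prompt is a reusable string sent with every request, while the dataset produces a new, specialized model checkpoint after training.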
Related: 05-atom—prompt-engineering-definition, 05-atom—task-specificity-of-prompting