Prompts Modify Behavior, Not Knowledge

Prompt engineering improves a model's outputs without modifying any of its parameters. Prompts elicit desired behaviors from knowledge that already exists in the model.

This is fundamentally different from training or fine-tuning. Fine-tuning updates the weights, shifting which patterns the model prioritizes. Prompting leaves the weights untouched, activating or steering capabilities the model already has.

The distinction matters for understanding what’s possible: you can’t prompt a model into knowing something it doesn’t know, but you can prompt it into using what it knows differently.
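
A minimal sketch of the distinction, assuming the Hugging Face `transformers` library and the small public "gpt2" checkpoint (both arbitrary stand-ins, not part of the original note): two prompts steer the same frozen weights toward different outputs, and a checksum over the parameters confirms nothing in the model changed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is just a small public stand-in; any causal LM behaves the same way here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def weight_checksum(m):
    # Crude fingerprint of the parameters: changes only if the weights change.
    return sum(p.sum().item() for p in m.parameters())

before = weight_checksum(model)

# Two prompts elicit different behaviors from the same frozen weights.
prompts = [
    "Explain photosynthesis to a five-year-old:",
    "Explain photosynthesis in the style of a chemistry textbook:",
]
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))

after = weight_checksum(model)
assert before == after  # Prompting changed the output, not a single parameter.
```

Fine-tuning the same model would change that checksum; that is the line between modifying behavior and modifying knowledge.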

Related: [None yet]