In-Context Learning (ICL)

The capability of large language models to perform new tasks based solely on examples and instructions provided within the prompt, without retraining or fine-tuning the model’s weights.

First identified in Brown et al. (2020), ICL remains one of the most powerful yet least understood emergent capabilities of large language models. The model learns the task specification from the prompt context itself; no gradient updates occur during inference.

ICL encompasses several prompting approaches: zero-shot (instructions only), few-shot (input–output examples provided), and hybrid techniques that combine examples with reasoning demonstrations, such as few-shot chain-of-thought.
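The distinction between these approaches lives entirely in how the prompt string is assembled. A minimal sketch, using a hypothetical sentiment-labeling task and invented example data (no model call is made; the function names and prompt format are illustrative assumptions, not a standard API):

```python
# Sketch: assembling zero-shot vs. few-shot prompts for a hypothetical
# sentiment-labeling task. ICL happens entirely in the text the model sees,
# so the only difference between the two settings is the prompt itself.

def zero_shot_prompt(instruction: str, query: str) -> str:
    """Instructions only: the model must infer the output format from text."""
    return f"{instruction}\n\nInput: {query}\nLabel:"

def few_shot_prompt(instruction: str,
                    examples: list[tuple[str, str]],
                    query: str) -> str:
    """Instructions plus input-output exemplars the model can imitate."""
    demos = "\n\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{instruction}\n\n{demos}\n\nInput: {query}\nLabel:"

instruction = "Classify the sentiment of each input as Positive or Negative."
examples = [("I loved this film.", "Positive"),
            ("The plot was a mess.", "Negative")]

print(zero_shot_prompt(instruction, "A delightful surprise."))
print(few_shot_prompt(instruction, examples, "A delightful surprise."))
```

Either prompt would be sent to the model as-is; in the few-shot case the exemplars serve as the "training data" the model conditions on at inference time.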

Related: 05-atom—few-shot-cot-superiority, 05-atom—prompt-component-taxonomy, 05-molecule—exemplar-design-principles