LLMs Follow Patterns Better Than Instructions

LLMs tend to perform better when shown patterns to follow than when given explicit instructions to execute, and this effect correlates with prompt length.

For knowledge extraction tasks, prompts that include a schematic representation of the target structure plus a concrete example of correct extraction outperform prompts that just explain what to extract. The pattern serves as an implicit specification that the model can match, rather than rules it must interpret and apply.
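As a minimal sketch of this idea, a pattern-bearing extraction prompt can pair a schematic target structure with one worked example. The schema, example, and `build_prompt` helper below are illustrative assumptions, not a specific tested prompt:

```python
# A hypothetical pattern-based prompt for triple extraction: it shows the
# target structure (schema) and one correct example, rather than only
# describing what to extract.
EXTRACTION_PROMPT = """Extract (subject, relation, object) triples from the text.

Schema:
(subject: str, relation: str, object: str)

Example:
Text: "Marie Curie discovered polonium in 1898."
Triples:
("Marie Curie", "discovered", "polonium")

Text: "{text}"
Triples:
"""


def build_prompt(text: str) -> str:
    """Fill the pattern-bearing template with the input text."""
    return EXTRACTION_PROMPT.format(text=text)


print(build_prompt("Ada Lovelace wrote the first algorithm."))
```

The instructions are still present, but the schema plus worked example give the model a concrete pattern to match instead of rules to interpret.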

This has practical implications for prompt design: show, don’t just tell. Include examples of the desired output format alongside any instructions.

Related: 05-atom—few-shot-cot-superiority, 05-atom—context-window-limitations, 05-atom—llm-approximate-knowledge-base