The Prompt Report: A Systematic Survey of Prompt Engineering Techniques
Citation: Schulhoff, S., Ilie, M., Balepur, N., et al. (2024). The Prompt Report: A Systematic Survey of Prompt Engineering Techniques. arXiv:2406.06608.
Institution: University of Maryland (lead), with collaborators from OpenAI, Microsoft, Stanford, Princeton, Vanderbilt, and others.
Summary
A comprehensive 80+ page survey and the most extensive systematic review of prompt engineering to date. The authors analyzed more than 1,500 academic papers to build a structured taxonomy of prompting techniques and a standardized vocabulary for the field.
Key Contributions
- 33 vocabulary terms defining the components and concepts of prompting
- 58 text-based prompting techniques organized into 6 categories
- 40 multimodal prompting techniques (image, audio, video, 3D)
- Meta-analysis of natural language prefix-prompting literature
- Benchmarking of prompting techniques with ChatGPT on the MMLU benchmark (a rough sketch of such an evaluation loop follows this list)
- Case study comparing human vs. automated prompt optimization
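The paper's own harness is not reproduced here, but the sketch below illustrates the general shape of such an evaluation: format an MMLU-style multiple-choice question under a given prompting technique, query a chat model, and score the extracted answer letter. The `query_model` callable, the field names, and the prompt wording are hypothetical placeholders, not the paper's actual setup.

```python
# Illustrative sketch only: scoring one prompting technique on MMLU-style
# multiple-choice items. `query_model` stands in for whatever chat-completion
# client is used; the prompt template is not the paper's actual one.
import re

def format_question(item: dict) -> str:
    """Render one MMLU-style item (question plus four options) as a prompt body."""
    options = "\n".join(f"{letter}. {text}"
                        for letter, text in zip("ABCD", item["choices"]))
    return f"{item['question']}\n{options}\nAnswer with a single letter."

def extract_answer(completion: str) -> str | None:
    """Pull the first standalone A-D letter out of the model's reply."""
    match = re.search(r"\b([ABCD])\b", completion)
    return match.group(1) if match else None

def evaluate(items: list[dict], technique_prefix: str, query_model) -> float:
    """Accuracy of one prompting technique, e.g. a zero-shot CoT prefix."""
    correct = 0
    for item in items:
        prompt = technique_prefix + "\n\n" + format_question(item)
        prediction = extract_answer(query_model(prompt))
        correct += prediction == item["answer"]  # gold answer as a letter "A".."D"
    return correct / len(items)
```

Comparing techniques then amounts to calling `evaluate` once per `technique_prefix` (zero-shot, zero-shot CoT, few-shot CoT, and so on) over the same item set.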
Core Framing
The paper addresses a maturity gap: prompt engineering became widespread practice before the field developed shared terminology or a systematic understanding. The authors argue this creates “conflicting terminology and a fragmented ontological understanding of what constitutes an effective prompt.”
Extracted Content
Atoms
- 05-atom—in-context-learning-definition
- 05-atom—prompt-component-taxonomy
- 05-atom—six-factors-of-exemplar-effectiveness
- 05-atom—ai-vs-human-prompt-optimization
- 05-atom—few-shot-cot-superiority
- 05-atom—self-consistency-underwhelms
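Two of these atoms concern the MMLU results for few-shot chain-of-thought and self-consistency. As a reminder of what self-consistency denotes, here is a minimal illustrative sketch: sample several chain-of-thought completions at nonzero temperature and majority-vote over the extracted final answers. The `sample_completion` and `extract_final_answer` helpers are hypothetical, not code from the survey.

```python
# Illustrative sketch of self-consistency (Wang et al.): sample several
# chain-of-thought completions and majority-vote over the final answers.
# `sample_completion` and `extract_final_answer` are hypothetical helpers.
from collections import Counter

def self_consistency(question: str, sample_completion, extract_final_answer,
                     n_samples: int = 5) -> str:
    """Majority vote over final answers from several sampled reasoning chains."""
    answers = []
    for _ in range(n_samples):
        # Each call is assumed to sample at temperature > 0, so different
        # reasoning chains (and possibly different answers) are produced.
        chain = sample_completion(f"{question}\nLet's think step by step.")
        answer = extract_final_answer(chain)
        if answer is not None:
            answers.append(answer)
    # Majority vote; ties resolve to the answer encountered first.
    return Counter(answers).most_common(1)[0][0] if answers else ""
```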
Molecules
Notes
This paper exemplifies valuable taxonomic work in an emerging field. The framing itself contains a transferable insight: when practice outpaces systematization, ontological work creates disproportionate value by enabling shared vocabulary and productive comparison.