Vatsal & Dubey 2024
A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks
Citation
Vatsal, S., & Dubey, H. (2024). A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks. arXiv preprint arXiv:2407.12994.
Core Framing
The survey organizes prompt engineering knowledge by NLP task rather than by technique family or application domain. This task-based categorization reveals that no single prompting method dominates across all tasks; optimal technique selection depends on task characteristics.
Scope: 44 research papers, 39 prompting techniques, 29 NLP tasks.
Key Contribution
Unlike prior surveys that group work into broad application categories (which conflate multiple NLP tasks) or cover only narrow subsets of techniques, this work provides a granular, task-specific analysis showing which techniques work best for which tasks.
Extracted Content
Atoms
- 05-atom—prompt-engineering-definition
- 05-atom—task-specificity-of-prompting
- 05-atom—prompting-vs-finetuning
- 05-atom—llm-decomposition-vs-computation (see the PAL-style sketch after this list)
- 05-atom—shortform-accuracy-advantage
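
The decomposition-vs-computation atom reflects the PAL finding (Gao et al. 2022, listed under Related Sources) that LLMs decompose word problems reliably but compute unreliably, so arithmetic is delegated to an interpreter. A minimal sketch of that delegation pattern, assuming a hypothetical `llm_complete` stub that returns a canned completion in place of a real model call:

```python
# PAL-style delegation sketch: the model decomposes the problem into
# Python; the interpreter performs the actual computation.

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns a canned
    completion so the sketch runs end to end."""
    return "tickets = 3 * 12\nsnacks = 2 * 5\nanswer = tickets + snacks"

PAL_PROMPT = """Solve the problem by writing Python code, step by step.
Store the final result in a variable named `answer`.

Problem: {question}

# solution in Python:
"""

def solve_with_pal(question: str):
    code = llm_complete(PAL_PROMPT.format(question=question))
    namespace: dict = {}
    exec(code, namespace)  # a real deployment should sandbox this
    return namespace.get("answer")  # interpreter did the arithmetic

print(solve_with_pal("3 tickets at $12 and 2 snacks at $5: total cost?"))
# -> 46
```

With the canned completion this runs end to end; in the real pattern the code comes from the model, which only has to decompose the problem, never to compute.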
Molecules
- 05-molecule—chain-of-thought-prompting
- 05-molecule—tool-delegation-pattern
- 05-molecule—metacognitive-prompting
- 05-molecule—chain-of-verification (sketched after this list)
- 05-molecule—prompting-techniques-by-task-type
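
Of the molecules above, Chain-of-Verification (Dhuliawala et al. 2023) is the most pipeline-like: draft an answer, plan verification questions, answer them independently, then revise. A hedged sketch of that four-step loop, again assuming a hypothetical `llm_complete` stub standing in for a real LLM client:

```python
# Chain-of-Verification (CoVe) sketch:
# draft -> plan checks -> answer checks independently -> revise.

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    raise NotImplementedError("wire up an actual LLM client here")

def chain_of_verification(question: str) -> str:
    # 1. Draft an initial answer.
    draft = llm_complete(f"Answer concisely.\n\nQ: {question}\nA:")

    # 2. Plan verification questions probing the draft's factual claims.
    plan = llm_complete(
        "List short questions, one per line, that would verify the "
        f"facts in this answer.\n\nAnswer: {draft}"
    )
    checks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 3. Answer each check without showing the draft, so the model
    #    cannot simply restate its own mistakes.
    verdicts = [llm_complete(f"Q: {check}\nA:") for check in checks]

    # 4. Revise the draft against the verification results.
    evidence = "\n".join(f"{q} -> {a}" for q, a in zip(checks, verdicts))
    return llm_complete(
        f"Original question: {question}\n"
        f"Draft answer: {draft}\n"
        f"Verification results:\n{evidence}\n\n"
        "Rewrite the draft answer so it is consistent with the "
        "verification results."
    )
```

Answering the checks independently of the draft (step 3) is the load-bearing design choice in CoVe: it keeps the verification pass from rubber-stamping errors already present in the draft.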
Related Sources
- Wei et al. 2022 - Chain-of-Thought Prompting
- Gao et al. 2022 - Program-Aided Language Models
- Wang et al. 2023 - Metacognitive Prompting
- Dhuliawala et al. 2023 - Chain-of-Verification