A Systematic Survey of Prompt Engineering in Large Language Models

Citation

Sahoo, P., Singh, A.K., Saha, S., Jain, V., Mondal, S., & Chadha, A. (2024). A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications. arXiv:2402.07927.

Framing

The paper positions itself as addressing a gap: despite extensive literature on prompt engineering, there is no systematic organization by application purpose. The core contribution isn't cataloging techniques; it's the taxonomy itself, organized around the problems each technique solves.

This framing choice is itself instructive: when the field lacks structure, the most valuable work may be classification rather than innovation.

Key Insight

Prompt engineering has evolved from simple instructions to sophisticated reasoning structures. The evolution is a progression in structure: linear chains (CoT) → branching trees (ToT) → full graphs (GoT). Each richer structure enables a different class of reasoning; a minimal sketch of all three appears below.
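
The following sketch illustrates the chain → tree → graph progression in code. It assumes a hypothetical llm(prompt) text-completion helper and is not an implementation from the surveyed papers; a real ToT/GoT system would also score and prune intermediate thoughts.

```python
# Minimal sketch of the chain -> tree -> graph progression.
# llm() is a hypothetical stand-in for any text-completion call.
from typing import List

def llm(prompt: str) -> str:
    # Placeholder: wire this to an actual model call.
    return "..."

def chain_of_thought(question: str) -> str:
    """CoT: one linear reasoning pass."""
    return llm(f"{question}\nLet's think step by step.")

def tree_of_thoughts(question: str, branches: int = 3, depth: int = 2) -> List[str]:
    """ToT: branch into several candidate next steps at each level."""
    paths = [""]
    for _ in range(depth):
        paths = [
            path + "\n" + llm(f"{question}\nReasoning so far:{path}\nPropose one next step.")
            for path in paths
            for _ in range(branches)
        ]
    return paths  # a full ToT would also score paths and prune weak ones

def graph_of_thoughts(question: str, thoughts: List[str]) -> str:
    """GoT: merge several existing thoughts into a new node (paths can rejoin)."""
    joined = "\n- ".join(thoughts)
    return llm(f"{question}\nCombine these partial thoughts into one answer:\n- {joined}")
```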

Coverage

41+ distinct prompting techniques organized by application area:

  • New tasks without extensive training (zero/few-shot)
  • Reasoning and logic (CoT, ToT, GoT, self-consistency, etc.; see the self-consistency sketch after this list)
  • Reduce hallucination (RAG, ReAct, CoVe)
  • User interaction (Active-Prompt)
  • Fine-tuning and optimization (APE)
  • Knowledge-based reasoning (ART)
  • Consistency and coherence (CCoT)
  • Managing emotions and tone (Emotion Prompting)
  • Code generation (PoT, SCoT, CoC)
  • Optimization and efficiency (OPRO)
  • Understanding user intent (RaR)
  • Metacognition and self-reflection (Step-Back)
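
Of the reasoning-and-logic techniques above, self-consistency is the simplest to show concretely: sample several chain-of-thought completions at nonzero temperature and majority-vote the final answers. This is a minimal sketch under that assumption; llm() and extract_answer() are hypothetical helpers, not APIs defined by the survey.

```python
# Minimal self-consistency sketch: sample several CoT completions, majority-vote answers.
from collections import Counter

def llm(prompt: str, temperature: float = 0.7) -> str:
    # Placeholder: wire this to a sampling model call (temperature > 0 matters here).
    return "... therefore the answer is 42."

def extract_answer(completion: str) -> str:
    # Naive parse of "... the answer is X"; a real pipeline would be task-specific.
    return completion.rsplit("answer is", 1)[-1].strip(" .")

def self_consistency(question: str, samples: int = 5) -> str:
    cot_prompt = f"{question}\nLet's think step by step."
    answers = [extract_answer(llm(cot_prompt)) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]
```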

Extracted Content

Atoms

Molecules

Notes

The paper is a survey, not original research, so the value is in synthesis and organization. The taxonomy diagram (Fig. 2) is a useful reference artifact. Table 1 provides model/dataset/metric mappings for each technique.

Limitation: published February 2024 and revised March 2025, so some newer techniques may not be covered.