Prompt Engineering Taxonomy by Purpose
The Framework
Organize prompting techniques not by mechanism, but by the problem they solve. When the field is cluttered with techniques, classification by purpose reveals which tool fits which job.
Application Categories
| Category | Problem Addressed | Example Techniques |
|---|---|---|
| New Task Adaptation | Apply model to unseen tasks without training | Zero-shot, Few-shot |
| Reasoning Enhancement | Improve multi-step logical reasoning | CoT, ToT, GoT, Self-Consistency |
| Hallucination Reduction | Ground responses in external knowledge | RAG, ReAct, CoVe, CoN |
| User Intent Clarification | Handle ambiguous or poorly-framed queries | RaR (Rephrase and Respond) |
| Code Generation | Generate executable, correct code | PoT, SCoT, CoC, Scratchpad |
| Self-Improvement | Enable model to refine its own outputs | Self-Refine, Self-Consistency |
| Optimization | Improve prompts themselves algorithmically | APE, OPRO |
| Metacognition | Step back and abstract before answering | Step-Back Prompting |
Why This Matters
Most practitioners face a problem, not a mechanism preference. “My model hallucinates” leads to RAG or CoVe. “My model can’t do multi-step math” leads to CoT or PoT. Starting from the problem surfaces the right family of solutions.
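As a concrete illustration of the problem-first mapping, here is a minimal sketch of how the "multi-step math" failure mode translates into different prompt styles. The prompt wording is illustrative, not canonical phrasing from any paper:

```python
# Illustrative prompts for the "multi-step math" failure mode.
# Exact wording varies by model and task; this is a sketch only.

question = "A store sells pens at 3 for $2. How much do 12 pens cost?"

# Zero-shot: just ask. Often fails on multi-step arithmetic.
zero_shot = f"Q: {question}\nA:"

# Chain-of-Thought (CoT): elicit intermediate steps before the answer.
cot = f"Q: {question}\nA: Let's think step by step."

# Program-of-Thoughts (PoT): ask for code that computes the answer,
# offloading the arithmetic to an interpreter.
pot = f"Write Python code that computes the answer to: {question}"
```

Same question, three prompts: the choice follows from the failure mode, not from a preference for any one mechanism.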
How to Apply
- Identify the failure mode (hallucination? reasoning errors? wrong task framing?)
- Find the category that addresses that failure
- Select technique based on constraints (compute budget, latency, implementation complexity)
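The three steps above can be sketched as a simple lookup: the taxonomy table becomes data, and the constraint filter narrows the candidate techniques. The category and technique names mirror the table; the relative-cost figures are illustrative assumptions, not values from the source:

```python
# Sketch of problem-first technique selection.
# Failure modes, categories, and techniques mirror the table above.

TAXONOMY = {
    "hallucination": ("Hallucination Reduction", ["RAG", "ReAct", "CoVe", "CoN"]),
    "reasoning errors": ("Reasoning Enhancement", ["CoT", "ToT", "GoT", "Self-Consistency"]),
    "wrong task framing": ("User Intent Clarification", ["RaR"]),
    "incorrect code": ("Code Generation", ["PoT", "SCoT", "CoC", "Scratchpad"]),
}

# Illustrative relative inference cost (rough model calls per query);
# these numbers are assumptions for the sketch, not benchmarks.
COST = {"CoT": 1, "ToT": 10, "GoT": 10, "Self-Consistency": 5,
        "RAG": 2, "ReAct": 3, "CoVe": 4, "CoN": 2,
        "RaR": 2, "PoT": 1, "SCoT": 1, "CoC": 1, "Scratchpad": 1}

def select(failure_mode: str, max_cost: int) -> list[str]:
    """Steps 1-2: map failure mode to a category; step 3: filter by budget."""
    category, techniques = TAXONOMY[failure_mode]
    return [t for t in techniques if COST.get(t, 1) <= max_cost]

# A tight budget rules out search-heavy techniques like ToT/GoT.
print(select("reasoning errors", max_cost=1))  # ['CoT']
```

The point of the sketch is the shape of the procedure, not the numbers: start from the failure, land in a category, then let constraints pick within it.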
Limitations
Categories aren’t mutually exclusive; many techniques serve more than one purpose. RAG addresses hallucination and enables new tasks. CoT improves reasoning and reduces some errors. The taxonomy is a navigation aid, not a partition.
Related: 05-atom—prompts-as-behavior-not-knowledge, 05-atom—reasoning-structure-shapes-capability