Prompt Engineering Taxonomy by Purpose

The Framework

Organize prompting techniques not by mechanism, but by the problem they solve. When the field is cluttered with techniques, classification by purpose reveals which tool fits which job.

Application Categories

| Category | Problem Addressed | Example Techniques |
|---|---|---|
| New Task Adaptation | Apply model to unseen tasks without training | Zero-shot, Few-shot |
| Reasoning Enhancement | Improve multi-step logical reasoning | CoT, ToT, GoT, Self-Consistency |
| Hallucination Reduction | Ground responses in external knowledge | RAG, ReAct, CoVe, CoN |
| User Intent Clarification | Handle ambiguous or poorly-framed queries | RaR (Rephrase and Respond) |
| Code Generation | Generate executable, correct code | PoT, SCoT, CoC, Scratchpad |
| Self-Improvement | Enable model to refine its own outputs | Self-Refine, Self-Consistency |
| Optimization | Improve prompts themselves algorithmically | APE, OPRO |
| Metacognition | Step back and abstract before answering | Step-Back Prompting |
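To make the families concrete at the prompt level, here is a minimal sketch of three of them. The template wording is an assumption for illustration (only the structure matters), though the "Let's think step by step" trigger is the standard zero-shot CoT phrasing.

```python
# Illustrative prompt builders for three categories from the table.
# Exact template text is an assumption; only the shape is the point.

def zero_shot(task: str, query: str) -> str:
    # New Task Adaptation: describe the task, provide no examples.
    return f"{task}\n\nInput: {query}\nOutput:"

def few_shot(task: str, examples: list[tuple[str, str]], query: str) -> str:
    # New Task Adaptation: prepend worked input/output demonstrations.
    demos = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{task}\n\n{demos}\n\nInput: {query}\nOutput:"

def chain_of_thought(query: str) -> str:
    # Reasoning Enhancement: elicit intermediate reasoning steps.
    return f"Q: {query}\nA: Let's think step by step."
```

Each builder returns a string ready to send to any completion-style model; swapping builders changes the category of intervention without touching the model.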

Why This Matters

Most practitioners face a problem, not a mechanism preference. “My model hallucinates” leads to RAG or CoVe. “My model can’t do multi-step math” leads to CoT or PoT. Starting from the problem surfaces the right family of solutions.
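As one example of a reasoning-error fix, Self-Consistency samples several answers (e.g. CoT completions at nonzero temperature) and keeps the majority vote. A minimal sketch, where `sample_answer` is a hypothetical callable standing in for any model call:

```python
from collections import Counter

def self_consistency(sample_answer, query: str, n: int = 5) -> str:
    """Sample n answers to the same query and return the most
    common one. `sample_answer` is a placeholder for an LLM call
    that returns a final answer string."""
    answers = [sample_answer(query) for _ in range(n)]
    # Majority vote over the sampled final answers.
    return Counter(answers).most_common(1)[0][0]

# Usage with a deterministic stub in place of a real model:
votes = iter(["12", "13", "12", "12", "13"])
result = self_consistency(lambda q: next(votes), "3 * 4 = ?")
# result is "12", the majority answer across the five samples
```

The design choice is that correct reasoning paths tend to converge on the same answer while errors scatter, so voting filters noise without retraining.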

How to Apply

  1. Identify the failure mode (hallucination? reasoning errors? wrong task framing?)
  2. Find the category that addresses that failure
  3. Select technique based on constraints (compute budget, latency, implementation complexity)
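The three steps above can be sketched as a lookup plus a constraint filter. The failure-mode keys and the cost heuristic are illustrative assumptions; the technique names come from the table.

```python
# Hypothetical triage helper following the three steps above.
# Keys and list order are illustrative, not a ranked recommendation.
TAXONOMY = {
    "hallucination": ["RAG", "ReAct", "CoVe", "CoN"],
    "reasoning errors": ["CoT", "Self-Consistency", "ToT", "GoT"],
    "wrong task framing": ["RaR", "Step-Back Prompting"],
    "incorrect code": ["PoT", "SCoT", "CoC", "Scratchpad"],
}

def suggest_techniques(failure_mode: str, low_compute: bool = False) -> list[str]:
    # Step 1-2: map the failure mode to its category's techniques.
    candidates = TAXONOMY.get(failure_mode, [])
    if low_compute:
        # Step 3: search-based methods (ToT, GoT) and repeated
        # sampling (Self-Consistency) multiply inference cost,
        # so drop them under a tight compute budget.
        expensive = {"ToT", "GoT", "Self-Consistency"}
        candidates = [t for t in candidates if t not in expensive]
    return candidates

# suggest_techniques("reasoning errors", low_compute=True) leaves ["CoT"]
```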

Limitations

Categories aren’t mutually exclusive; many techniques serve multiple purposes. RAG addresses hallucination and enables new tasks. CoT improves reasoning and reduces some errors. The taxonomy is a navigation aid, not a partition.

Related: 05-atom—prompts-as-behavior-not-knowledge, 05-atom—reasoning-structure-shapes-capability