Two-Stage Modular Prompting

Context

You need an LLM to work with a large, complex structured input (an ontology, schema, codebase, or document set) where providing everything at once produces poor results.

Problem

LLMs struggle with sprawling inputs that exceed their effective processing capacity. Even within context window limits, performance degrades as relevant information gets buried in irrelevant context. Simply truncating or summarizing loses critical detail.

Solution

Split the task into two stages:

Stage 1: Module Selection
Present the LLM with a list of named, conceptually coherent modules and the task objective. Ask it to identify which modules are relevant.

Stage 2: Focused Execution
Provide only the selected modules and ask for the actual output. The narrower, conceptually coherent context enables better performance.
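The two stages can be sketched as a pair of LLM calls, where the first sees only module names and the second sees only the selected modules' content. This is a minimal illustration, not a definitive implementation: `call_llm` is a stand-in stub, and the module names and canned responses are hypothetical.

```python
# Sketch of the two-stage flow, assuming a generic call_llm(prompt) -> str
# interface. Module names and stub responses are illustrative only.

MODULES = {
    "auth": "Module auth: login, session, and token-refresh logic.",
    "billing": "Module billing: invoices, payment providers, tax rules.",
    "search": "Module search: index construction and query parsing.",
}

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned answers for the demo."""
    if "Which modules" in prompt:
        return "auth, billing"  # Stage 1: relevance judgment
    return "Mapped auth.session -> billing.customer_session"  # Stage 2 output

def two_stage(task: str) -> str:
    # Stage 1: show only module NAMES plus the task; ask for the relevant ones.
    selection_prompt = (
        f"Task: {task}\n"
        f"Available modules: {', '.join(MODULES)}\n"
        "Which modules are relevant? Answer with a comma-separated list."
    )
    selected = [m.strip() for m in call_llm(selection_prompt).split(",")]
    selected = [m for m in selected if m in MODULES]  # drop hallucinated names

    # Stage 2: provide only the selected modules' full content, then execute.
    context = "\n".join(MODULES[m] for m in selected)
    execution_prompt = f"Task: {task}\n\nRelevant modules:\n{context}"
    return call_llm(execution_prompt)

print(two_stage("Map session fields between the auth and billing schemas"))
```

Filtering the stage-1 answer against the known module names guards against the selection stage inventing modules that don't exist, one of the failure modes noted under Costs.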

Why It Works

The first stage leverages LLM capability at categorization and relevance assessment, tasks where broad context helps. The second stage provides focused context where conceptual coherence improves pattern matching and reduces distraction from irrelevant material.

The key is that modules aren’t arbitrary chunks. They’re conceptually bounded units that match how experts think about the domain. This alignment between human mental models and prompt structure appears to significantly improve LLM performance.

Consequences

Benefits:

  • Dramatic accuracy improvements on complex tasks (95% vs. “unusable” in benchmark testing)
  • Works within existing context window limits
  • Generalizable across different task types

Costs:

  • Requires pre-existing modular structure (or effort to create one)
  • Adds latency from multiple LLM calls
  • Module selection stage can introduce errors if modules are poorly defined

When to Apply

  • Complex ontology alignment or mapping
  • Large codebase analysis
  • Multi-document synthesis
  • Schema transformation tasks
  • Any structured task where full context degrades performance

Related: 06-atom—conceptual-module, 05-atom—context-window-limitations, 05-atom—llm-approximate-knowledge-base