Context-Minimization Pattern
Context & MemoryUser Input ──▶ [Transform] ──▶ Safe Output
│ │
│ ┌────┴────┐
└────────▶│ REMOVE │
│ tainted │
└─────────┘
Context: [████ tainted ████] → [██ clean ██]
sql = LLM("to SQL", user_prompt)
remove(user_prompt) # tainted tokens gone
rows = db.query(sql)
answer = LLM("summarize", rows) # clean context
User-supplied text lingers in context, enabling it to influence later generations and potentially inject malicious instructions
Purge untrusted segments after transforming into safe intermediate. Later reasoning sees only trusted data
- Customer service chat
- Medical Q&A systems
- Multi-turn flows where input shouldn't steer later steps
Pros
- Simple, no extra models
- Prevents prompt injection
Cons
- Loses conversational nuance
- May hurt UX if too aggressive