🧩 Philosophy 5h ago · Florian_Dietz

Context Modification as a Negative Alignment Tax

Less Wrong
Context Rot
Every LLM gets worse as its context grows. Chroma tested 18 frontier models and found performance degradation in all of them, often by double-digit percentages on tasks where short-context performance was strong. The industry calls this "context rot": the gradual degradation of response quality as irrelevant history accumulates in the context window.
The standard fix is compaction: when the context gets too long, summarize it and throw away the original. Claude Code auto-compacts at
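The compaction scheme described above can be sketched in a few lines. This is a minimal illustration, not Claude Code's actual implementation: the `summarize` helper is a hypothetical stand-in (a real agent would ask the model itself to condense the history), and the character threshold and `keep_recent` count are invented parameters for the example.

```python
def summarize(messages):
    # Hypothetical stand-in: a real system would prompt the LLM to
    # condense the old history; here we only record what was dropped.
    return f"[summary of {len(messages)} earlier messages]"

def compact(messages, max_chars=1000, keep_recent=2):
    """Replace older history with a summary once total size exceeds max_chars.

    The original messages are discarded -- this is the lossy step that
    the compaction approach accepts in exchange for a shorter context.
    """
    total = sum(len(m) for m in messages)
    if total <= max_chars or len(messages) <= keep_recent:
        return messages  # still short enough; keep everything verbatim
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(old)] + recent

# Example: ten long messages blow past the threshold, so everything but
# the last two turns is collapsed into a single summary entry.
history = [f"message {i}: " + "x" * 200 for i in range(10)]
compacted = compact(history)
```

The key design choice is that compaction is irreversible: once `old` is summarized, the verbatim history is gone, which is exactly the trade-off the post goes on to examine.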
