Learning Is Forgetting; LLM Training as Lossy Compression
Henry Conklin, Tom Hosking, Tan Yi-Chern, Jonathan D. Cohen, Sarah-Jane Leslie, Thomas L. Griffiths, Max Bartolo, Seraphina Goldfarb-Tarrant

Published: 26 Jan 2026, Last Modified: 08 Mar 2026 · ICLR 2026 Poster · CC BY 4.0

Keywords: Compression, Information Theory, Learning, Generalisation, LLMs, Interpretability

TL;DR: LLMs learn an optimal compression of the internet.

Abstract: Despite the increasing prevalence of large language models (LLMs), we still have a limited understanding of how their representational spaces are structured. This limits our ability to interpret how and what they learn, or to relate their learning to learning in humans. We argue LLMs are best seen as an instance of lossy compression: over training they learn by retaining only the information in their training data that is relevant to their objective(s). We show that pre-training yields models that are optimally compressed for next-sequence prediction, approaching the Information Bottleneck bound on compression. Across an array of open-weights models, each compresses differently, likely due to differences in the data and training recipes used. However, even across different families of LLMs, the optimality of a model's compression, and the information present in it, can predict downstream performance across a wide array of benchmarks, letting us directly link representational structure to actionable insights about model performance. More generally, the work presented here offers a unified information-theoretic framing of how these models learn that is deployable at scale.

Primary Area: foundation or frontier models, including LLMs

Submission Number: 21536
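For context on the bound the abstract refers to: the Information Bottleneck is conventionally stated as a trade-off between compressing the input and preserving information about the prediction target. The paper's exact formulation is not reproduced on this page, so the following is the standard IB Lagrangian (Tishby et al.), given here only as background:

```latex
% Information Bottleneck: learn a stochastic encoder p(z|x) that
% compresses input X into representation Z while retaining the
% information in Z that is predictive of target Y.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

Here \(I(\cdot\,;\cdot)\) is mutual information and \(\beta > 0\) sets the trade-off between compression (small \(I(X;Z)\)) and predictive sufficiency (large \(I(Z;Y)\)). On this reading, a model "approaches the IB bound" when its representations cannot be compressed further without discarding information relevant to next-sequence prediction.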