2/10
This ICLR 2026 paper frames large language model training as lossy compression, showing that LLMs learn compressions of their training data, optimized for next-sequence prediction, that approach the theoretical bounds of the Information Bottleneck. It further shows that the quality and structure of these compressions predict downstream benchmark performance across model families, yielding an information-theoretic framework for understanding what LLMs learn and how their representational spaces are organized.
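For context, the Information Bottleneck bound the summary refers to is the standard IB objective (the textbook formulation, not quoted from the paper itself): a representation Z of input X is learned to stay predictive of a target Y while discarding as much of X as possible,

```latex
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
```

where I(·;·) denotes mutual information and β > 0 trades off compression of X against information preserved about Y. Under this framing, "approaching the IB bound" is a precise claim: the model's representations retain little about the training data beyond what is needed to predict the continuation.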
Tags: llm, information-theory, compression, interpretability, model-analysis, representational-structure, generalization, information-bottleneck
Venue: ICLR 2026
Authors: Henry Conklin, Tom Hosking, Tan Yi-Chern, Jonathan D. Cohen, Sarah-Jane Leslie, Thomas L. Griffiths, Max Bartolo, Seraphina Goldfarb-Tarrant
Source: OpenReview