This ICLR 2026 paper frames large language model training as lossy compression, showing that LLMs learn compressions of their training data for next-sequence prediction that approach the theoretical bounds given by Information Bottleneck theory. The work further shows that the quality and structure of these compressions predict downstream benchmark performance across different model families, providing an information-theoretic framework for understanding LLM learning and representational spaces.
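For background, the Information Bottleneck bound mentioned in the summary refers to the standard IB objective (this formula is general background, not taken from the paper itself): a representation Z of input X is optimized to stay predictive of a target Y while compressing away everything else, via

```latex
% Standard Information Bottleneck Lagrangian (Tishby et al.):
% minimize over the encoder p(z|x) the trade-off between
% compression I(X;Z) and predictive information I(Z;Y).
\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} = I(X;Z) - \beta \, I(Z;Y)
```

Here β controls the compression/prediction trade-off; in the LLM-as-compression framing, X is the training data, Y the next-sequence prediction target, and Z the model's learned representation.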
Tags: llm, information-theory, compression, interpretability, model-analysis, representational-structure, generalization, information-bottleneck
Venue: ICLR 2026
Authors: Henry Conklin, Tom Hosking, Tan Yi-Chern, Jonathan D. Cohen, Sarah-Jane Leslie, Thomas L. Griffiths, Max Bartolo, Seraphina Goldfarb-Tarrant
Source: OpenReview