bug-bounty (498)
xss (349)
exploit (267)
google (251)
rce (195)
facebook (191)
microsoft (178)
malware (157)
cve (132)
web3 (125)
writeup (117)
apple (98)
open-source (91)
csrf (83)
phishing (75)
browser (73)
sqli (72)
account-takeover (72)
dos (67)
ai-agents (63)
cloudflare (61)
supply-chain (61)
privilege-escalation (59)
pentest (55)
reverse-engineering (54)
ssrf (50)
auth-bypass (49)
ctf (47)
tool (46)
cloud (45)
privacy (44)
lfi (38)
aws (38)
llm (37)
race-condition (37)
oauth (36)
opinion (35)
idor (34)
automation (33)
machine-learning (32)
node (32)
code-generation (31)
infrastructure (31)
info-disclosure (30)
buffer-overflow (29)
clickjacking (28)
access-control (27)
react (27)
cors (26)
subdomain-takeover (25)
6/10
technical-writeup
A detailed account of troubleshooting open-source ML infrastructure while post-training the 1T-parameter Kimi-K2-Thinking model, exposing undocumented bugs and inefficiencies in HuggingFace Transformers and quantization libraries that can hide several layers deep in the dependency stack.
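The summary above concerns LoRA post-training (see the lora tag below). As background, a minimal pure-Python sketch of the LoRA idea: instead of updating a full weight matrix W, two low-rank factors B and A are trained and merged as W + (alpha / r) * (B @ A). Matrix shapes and values here are illustrative only; real training uses PyTorch and libraries such as PEFT.

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    """Merge a LoRA adapter into a frozen base weight: W + (alpha/r) * B @ A."""
    r = len(A)                 # rank = number of rows in A
    scale = alpha / r          # standard LoRA scaling factor
    delta = matmul(B, A)       # low-rank update, same shape as W
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight with a rank-1 adapter (hypothetical values)
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]             # d_out x r
A = [[0.5, 0.5]]               # r x d_in
print(lora_effective_weight(W, A, B, alpha=1.0))
# -> [[1.5, 0.5], [1.0, 2.0]]
```

The low-rank factorization is what makes fine-tuning a 1T-parameter model tractable: only B and A are trained, while W stays frozen (and, per the quantization tags, possibly stored in a compressed format).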
model-training
large-language-models
lora
quantization
huggingface
pytorch
debugging
infrastructure
open-source
mixture-of-experts
flash-attention
Kimi-K2-Thinking
HuggingFace
LLaMA-Factory
KTransformers
DeepSeek-V3
PyTorch
vLLM
compressed_tensors
TriviaQA
PEFT
Transformers