bug-bounty 506
xss 285
rce 141
bragging-post 119
account-takeover 104
google 96
open-source 93
exploit 88
csrf 85
authentication-bypass 80
stored-xss 74
facebook 74
privilege-escalation 70
microsoft 68
access-control 67
ai-agents 64
web-security 63
reflected-xss 63
cve 61
writeup 58
input-validation 52
ssrf 51
sql-injection 49
smart-contract 48
defi 48
cross-site-scripting 47
tool 46
ethereum 45
privacy 44
information-disclosure 44
malware 44
api-security 41
web-application 38
phishing 38
llm 37
burp-suite 36
opinion 36
lfi 35
automation 35
apple 34
smart-contract-vulnerability 33
infrastructure 33
web3 33
responsible-disclosure 33
html-injection 33
vulnerability-disclosure 33
machine-learning 32
code-generation 31
waf-bypass 31
idor 31
0
6/10
technical-writeup
A detailed account of troubleshooting open-source ML infrastructure while post-training the 1T-parameter Kimi-K2-Thinking model, exposing undocumented bugs and inefficiencies in HuggingFace Transformers and quantization libraries that can hide several layers deep in the dependency stack.
model-training
large-language-models
lora
quantization
huggingface
pytorch
debugging
infrastructure
open-source
mixture-of-experts
flash-attention
Kimi-K2-Thinking
HuggingFace
LLaMA-Factory
KTransformers
DeepSeek-V3
PyTorch
vLLM
compressed_tensors
TriviaQA
PEFT
Transformers