bug-bounty (570)
xss (395)
exploit (307)
google (281)
rce (232)
facebook (201)
microsoft (186)
writeup (143)
malware (142)
cve (141)
web3 (127)
apple (101)
account-takeover (92)
csrf (92)
open-source (91)
browser (89)
phishing (71)
sqli (71)
dos (66)
cloudflare (66)
ai-agents (63)
pentest (60)
ssrf (60)
privilege-escalation (60)
reverse-engineering (54)
supply-chain (53)
ctf (51)
auth-bypass (49)
oauth (47)
tool (46)
lfi (46)
cloud (45)
aws (44)
privacy (44)
node (40)
react (40)
idor (39)
race-condition (38)
llm (37)
opinion (35)
info-disclosure (34)
cors (33)
automation (33)
clickjacking (32)
machine-learning (32)
code-generation (31)
infrastructure (31)
access-control (27)
postmessage (27)
subdomain-takeover (26)
tutorial

A comprehensive guide to running Alibaba's Qwen3.5 LLM locally, covering multiple model sizes (0.8B to 397B parameters), quantization options, hardware requirements, and configuration settings for both thinking and non-thinking inference modes using llama.cpp and other backends.

Tags: llm, local-inference, quantization, gguf, llama-cpp, model-deployment, qwen, fine-tuning, vision-language-model, reasoning

Mentions: Qwen3.5, Alibaba, Unsloth, llama.cpp, LM Studio, Ollama, Hugging Face
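The workflow the summary describes (pulling a quantized GGUF and serving it with llama.cpp) can be sketched with a single `llama-cli` invocation. This is a hedged command fragment, not a confirmed recipe from the tutorial: the Hugging Face repo name `unsloth/Qwen3.5-GGUF` is a hypothetical placeholder, and the sampling settings are illustrative defaults rather than the guide's recommended values.

```shell
# Hypothetical sketch: run a GGUF quantization of Qwen3.5 with llama.cpp.
# -hf     downloads the model from a Hugging Face repo (placeholder name here)
# --jinja applies the chat template bundled with the GGUF
# -ngl 99 offloads as many layers as possible to the GPU, if one is available
# -c 8192 sets the context window; --temp sets sampling temperature
llama-cli -hf unsloth/Qwen3.5-GGUF --jinja -ngl 99 -c 8192 --temp 0.7
```

Memory-constrained setups would typically reduce `-ngl` (fewer GPU-offloaded layers) or pick a smaller quantization variant from the repo.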