bug-bounty (504)
xss (264)
rce (148)
bragging-post (119)
google (117)
account-takeover (110)
authentication-bypass (94)
privilege-escalation (93)
open-source (92)
facebook (91)
csrf (86)
malware (85)
microsoft (84)
exploit (79)
access-control (75)
stored-xss (75)
ai-agents (67)
cve (64)
web-security (64)
reflected-xss (63)
phishing (60)
input-validation (52)
information-disclosure (52)
sql-injection (51)
smart-contract (49)
cross-site-scripting (48)
defi (48)
privacy (47)
ssrf (46)
reverse-engineering (46)
tool (46)
ethereum (46)
api-security (44)
writeup (40)
vulnerability-disclosure (40)
ai-security (38)
web-application (38)
apple (37)
llm (37)
burp-suite (37)
opinion (37)
dos (37)
automation (35)
responsible-disclosure (35)
cloudflare (35)
web3 (34)
smart-contract-vulnerability (33)
supply-chain (33)
infrastructure (33)
race-condition (33)
0
Rating: 5/10
Type: tutorial
Summary: A mathematical optimization guide explaining how to avoid trigonometric functions in 3D graphics by leveraging dot and cross products directly, using rotation alignment as a concrete example to demonstrate more efficient and numerically stable approaches.
Tags: computer-graphics, vector-math, optimization, mathematical-algorithms, rotation-matrices, cross-product, dot-product, 3d-rendering, numerical-stability
Author: Inigo Quilez
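The summary above refers to a standard trig-free construction: the rotation aligning one unit vector with another can be assembled from their dot and cross products alone, with no sin/cos/acos calls. A minimal NumPy sketch of that idea (illustrative only, not code from the linked article; it assumes unit-length inputs and breaks down when the vectors are exactly opposite, where 1 + c is zero):

```python
import numpy as np

def rotation_align(a, b):
    """Rotation matrix taking unit vector a to unit vector b,
    built from dot and cross products only (no trig calls)."""
    v = np.cross(a, b)   # rotation axis scaled by sin(theta)
    c = np.dot(a, b)     # cos(theta)
    # Skew-symmetric cross-product matrix of v.
    K = np.array([[   0.0, -v[2],  v[1]],
                  [  v[2],   0.0, -v[0]],
                  [ -v[1],  v[0],   0.0]])
    # Rodrigues formula with the trig eliminated:
    # R = I + K + K^2 * (1 - c) / |v|^2, and since |v|^2 = 1 - c^2,
    # the factor simplifies to 1 / (1 + c).
    return np.eye(3) + K + K @ K / (1.0 + c)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
R = rotation_align(a, b)   # R @ a recovers b
```

Besides saving the transcendental calls, this form avoids the round trip through an angle (acos followed by sin/cos), which is where much of the numerical-stability benefit mentioned in the summary comes from.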
0
Rating: 8/10
Summary: A deep technical exploration of porting a Flash Attention kernel from GPU (Triton) to TPU using JAX, covering the fundamental differences in programming models, compiler behavior, and hardware architectures. The author details how JAX's functional, immutable paradigm and XLA compilation differ from explicit GPU kernel writing, and includes benchmarking and a custom systolic array emulator to understand TPU data flow.
Tags: flash-attention, jax, tpu, kernel-optimization, attention-mechanism, llm-internals, xla-compiler, systolic-array, triton, gpu, compiler-optimization, numerical-stability, online-softmax
Author: Archer Zhang
Technologies: JAX, XLA, Triton, TPU, Colab, Flash Attention
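The online-softmax trick tagged above is the core numerical device that makes Flash Attention work: the softmax is computed in one streaming pass by carrying a running maximum and a denominator that is rescaled whenever the maximum changes, so no full score vector ever needs to be materialized at once. A minimal NumPy sketch of the technique (illustrative only, not the author's kernel code):

```python
import numpy as np

def online_softmax(scores):
    """Streaming softmax over a sequence of scores in a single pass,
    tracking a running max (m) and a rescaled running sum (d)."""
    m = -np.inf   # running maximum seen so far
    d = 0.0       # running denominator: sum of exp(x - m)
    for x in scores:
        m_new = max(m, x)
        # Rescale the old partial sum to the new max, then add the new term.
        d = d * np.exp(m - m_new) + np.exp(x - m_new)
        m = m_new
    return np.exp(np.array(scores) - m) / d

probs = online_softmax([1.0, 3.0, 2.0])
```

In the actual Flash Attention kernels this running-rescale step is applied block by block to attention score tiles, which is why the tags pair it with numerical-stability: the max subtraction keeps every exponent non-positive, so nothing overflows regardless of score magnitude.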