bug-bounty (525)
xss (296)
rce (184)
google (175)
exploit (143)
microsoft (137)
facebook (137)
malware (135)
account-takeover (122)
bragging-post (117)
cve (113)
privilege-escalation (96)
open-source (88)
csrf (88)
authentication-bypass (83)
phishing (78)
stored-xss (75)
access-control (69)
ai-agents (66)
apple (64)
web-security (64)
reflected-xss (63)
writeup (63)
reverse-engineering (54)
input-validation (53)
sql-injection (51)
ssrf (51)
browser (50)
cross-site-scripting (49)
dos (48)
smart-contract (48)
defi (48)
api-security (47)
supply-chain (47)
lfi (45)
ethereum (45)
information-disclosure (44)
tool (43)
privacy (43)
cloudflare (41)
web-application (39)
race-condition (38)
ctf (38)
opinion (37)
ai-security (37)
burp-suite (37)
vulnerability-disclosure (37)
llm (37)
web3 (37)
oauth (36)
RunAnywhere released MetalRT, a Metal-optimized GPU inference engine for Apple Silicon that decodes LLM tokens 1.67x faster than llama.cpp and runs speech-to-text 4.6x faster than mlx-whisper, using custom GPU shaders and zero-allocation inference. They also open-sourced RCLI, a voice AI pipeline combining STT, LLM, and TTS with sub-600ms end-to-end latency, running entirely on-device.
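The STT -> LLM -> TTS chain described for RCLI can be sketched as a simple pipeline with per-stage latency accounting. All three stage functions below are hypothetical stubs (the post does not show RCLI's actual API); a real pipeline would invoke on-device models at each step.

```python
import time

# Hypothetical stand-ins for the three pipeline stages; each stub
# returns instantly so the example is self-contained and runnable.
def speech_to_text(audio: bytes) -> str:
    return "what is the capital of france"  # stub transcript

def llm_generate(prompt: str) -> str:
    return "The capital of France is Paris."  # stub completion

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")  # stub: a real TTS returns audio samples

def voice_pipeline(audio: bytes) -> tuple[bytes, dict]:
    """Chain STT -> LLM -> TTS, recording each stage's latency in ms."""
    timings = {}

    t0 = time.perf_counter()
    transcript = speech_to_text(audio)
    timings["stt_ms"] = (time.perf_counter() - t0) * 1000

    t0 = time.perf_counter()
    reply = llm_generate(transcript)
    timings["llm_ms"] = (time.perf_counter() - t0) * 1000

    t0 = time.perf_counter()
    audio_out = text_to_speech(reply)
    timings["tts_ms"] = (time.perf_counter() - t0) * 1000

    # End-to-end latency is the sum of the three stages; RCLI's claim
    # is that this total stays under 600 ms on-device.
    timings["total_ms"] = sum(timings.values())
    return audio_out, timings

audio_out, timings = voice_pipeline(b"\x00" * 16000)
print(audio_out.decode())  # the stub reply, round-tripped through "TTS"
```

Measuring each stage separately, rather than only the total, is what makes a latency budget like "sub-600ms end-to-end" actionable: it shows which stage to optimize first.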
ai-inference
apple-silicon
gpu-optimization
metal-shaders
llm-performance
speech-recognition
text-to-speech
on-device-ai
macos
open-source-tool
RunAnywhere
MetalRT
RCLI
YC W26
Sanchit
Shubham
llama.cpp
Apple MLX
Ollama
sherpa-onnx
mlx-whisper
Qwen3
LFM2.5