bug-bounty (249)
google (212)
facebook (172)
microsoft (169)
apple (126)
rce (97)
exploit (89)
web3 (52)
open-source (44)
smart-contract (42)
writeup (42)
defi (41)
sqli (39)
aws (38)
ethereum (38)
dos (36)
docker (36)
ai-agents (36)
access-control (35)
cloudflare (35)
malware (34)
cve (34)
ssrf (33)
react (32)
xss (31)
account-takeover (28)
subdomain-takeover (27)
supply-chain (26)
oauth (25)
idor (25)
bragging-post (24)
smart-contract-vulnerability (23)
cors (22)
wordpress (22)
node (22)
browser (22)
privilege-escalation (21)
race-condition (20)
automation (20)
auth-bypass (19)
cloud (19)
pentest (19)
tool (19)
authentication-bypass (18)
machine-learning (18)
denial-of-service (17)
llm (17)
vulnerability-disclosure (17)
ctf (17)
rust (16)
3/10
Nikita Lalwani and Sam Winter-Levy argue that although AI could in theory strengthen first-strike capabilities against nuclear deterrents (through submarine tracking, mobile-missile targeting, and cyberattacks on command-and-control networks), physics, countermeasures, and the impossibility of testing such systems make near-certain success unrealistic, so nuclear deterrence likely remains stable even in advanced-AI scenarios.
nuclear-deterrence
ai-security
geopolitics
strategic-stability
command-and-control
cybersecurity
first-strike-capability
nuclear-submarines
missile-defense
arms-race
strategic-vulnerability
Nikita Lalwani
Sam Winter-Levy
Carnegie Endowment for International Peace
White House National Security Council
Iron Dome
Dead Hand
80,000 Hours