bug-bounty (242)
google (206)
facebook (167)
microsoft (166)
apple (124)
rce (95)
exploit (84)
web (351)
open-source (44)
smart-contract (42)
defi (41)
writeup (40)
ethereum (38)
aws (37)
dos (36)
docker (36)
ai-agents (36)
sqli (36)
access-control (35)
cloudflare (35)
malware (34)
cve (34)
react (32)
ssrf (32)
xss (27)
supply-chain (26)
account-takeover (25)
bragging-post (24)
idor (24)
smart-contract-vulnerability (23)
subdomain-takeover (23)
browser (22)
node (22)
cors (21)
wordpress (21)
privilege-escalation (21)
oauth (21)
automation (20)
race-condition (20)
cloud (19)
tool (19)
machine-learning (18)
authentication-bypass (18)
pentest (18)
llm (17)
vulnerability-disclosure (17)
ctf (17)
denial-of-service (17)
buffer-overflow (16)
phishing (16)
Rating: 3/10
Nikita Lalwani and Sam Winter-Levy argue that although AI could in theory undermine nuclear deterrence by enabling first-strike capabilities (tracking submarines, targeting mobile missiles, and mounting cyberattacks on command-and-control networks), physics, countermeasures, and the impossibility of testing such systems make near-certain success unrealistic; nuclear deterrence therefore likely remains stable even in advanced-AI scenarios.
nuclear-deterrence
ai-security
geopolitics
strategic-stability
command-and-control
cybersecurity
first-strike-capability
nuclear-submarines
missile-defense
arms-race
strategic-vulnerability
Nikita Lalwani
Sam Winter-Levy
Carnegie Endowment for International Peace
White House National Security Council
Iron Dome
Dead Hand
80,000 Hours