bug-bounty (498)
google (355)
xss (301)
microsoft (298)
facebook (263)
rce (211)
exploit (200)
malware (171)
apple (164)
cve (136)
account-takeover (115)
bragging-post (102)
privilege-escalation (95)
csrf (90)
phishing (86)
browser (75)
writeup (74)
authentication-bypass (69)
supply-chain (68)
dos (66)
stored-xss (65)
reflected-xss (57)
ssrf (56)
reverse-engineering (55)
react (52)
access-control (51)
input-validation (49)
cross-site-scripting (48)
aws (47)
cloudflare (47)
docker (46)
web-security (46)
lfi (46)
sql-injection (45)
smart-contract (45)
ethereum (44)
web-application (44)
web3 (43)
defi (43)
ctf (43)
oauth (43)
node (43)
pentest (40)
race-condition (39)
idor (37)
open-source (37)
cloud (37)
burp-suite (36)
info-disclosure (36)
auth-bypass (35)
2/10
An introductory overview of artificial intelligence fundamentals, how ML models learn, and applications of AI in cybersecurity for both attackers and defenders, explained in accessible terms.
3/10
Nikita Lalwani and Sam Winter-Levy argue that although AI could in theory strengthen first-strike capabilities against nuclear deterrence through submarine tracking, mobile-missile targeting, and cyberattacks on command-and-control networks, physics, countermeasures, and the impossibility of testing such systems make near-certain success unrealistic. Nuclear deterrence therefore likely remains stable even in advanced AI scenarios.
nuclear-deterrence
ai-security
geopolitics
strategic-stability
command-and-control
cybersecurity
first-strike-capability
nuclear-submarines
missile-defense
arms-race
strategic-vulnerability
Nikita Lalwani
Sam Winter-Levy
Carnegie Endowment for International Peace
White House National Security Council
Iron Dome
Dead Hand
80,000 Hours