TIME reports on Foundation's Phantom MK-1 humanoid robot being developed and deployed for military applications, including testing in Ukraine and U.S. military contracts worth $24M, while highlighting debates over how much autonomy such weapons should have, ethical concerns, and international opposition to lethal autonomous weapon systems.
Nikita Lalwani and Sam Winter-Levy argue that although AI could in theory enhance first-strike capabilities against nuclear deterrence through submarine tracking, mobile-missile targeting, and cyberattacks on command-and-control networks, physics, countermeasures, and the impossibility of testing such systems make near-certain success unrealistic, meaning nuclear deterrence likely remains stable even in advanced AI scenarios.