A Pentagon official disclosed that the US military is deploying generative AI chatbots (ChatGPT, Claude, Grok) as a conversational layer atop traditional AI systems like Maven to accelerate target prioritization and analysis in military operations, with human verification required before strikes. The disclosure comes amid scrutiny over an airstrike on an Iranian school and ongoing tensions between the Pentagon and AI companies over acceptable use of their models.
Security researchers from Irregular found that LLM-generated passwords from Claude, ChatGPT, and Gemini are fundamentally weak due to predictable patterns, with entropy of roughly 20-27 bits instead of the 98-120 bits expected from truly random passwords. This allows such passwords to be brute-forced in hours rather than centuries, despite appearing strong to standard password-strength checkers.
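The gap between those entropy figures can be made concrete with a quick back-of-the-envelope calculation. The sketch below is illustrative only: the guess rate of 10^12 attempts per second is an assumed figure for a well-resourced offline attacker, not a number from the Irregular research.

```python
import math

def crack_time_seconds(entropy_bits: float, guesses_per_second: float = 1e12) -> float:
    """Expected brute-force time for a password with the given entropy.

    Assumes an attacker testing `guesses_per_second` candidates; 1e12/s is
    a hypothetical rate for an offline attack with dedicated hardware.
    On average, half the keyspace must be searched.
    """
    return (2 ** entropy_bits) / 2 / guesses_per_second

# Illustrative comparison using the entropy ranges reported above:
llm_low = crack_time_seconds(27)    # upper end of the LLM-password range
random_high = crack_time_seconds(98)  # lower end of the truly-random range

print(f"27-bit password: {llm_low:.6f} seconds")
print(f"98-bit password: {random_high / (3600 * 24 * 365.25):.2e} years")
```

Even at the top of the reported LLM range (27 bits), the search space is about 134 million candidates, trivially exhausted offline; a 98-bit random password pushes the expected search into billions of years at the same guess rate.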