AI-generated passwords aren't random, it just looks that way

theregister.com · pabs3 · 12 hours ago · view on HN · research
quality 7/10 · good
0 net
AI Summary

Security researchers from Irregular found that LLM-generated passwords from Claude, ChatGPT, and Gemini are fundamentally weak due to predictable patterns, with effective entropy of roughly 27 and 20 bits (by two estimation methods) instead of the 98 to 120 bits expected from truly random passwords. This allows the passwords to be brute-forced in hours rather than centuries, despite appearing strong to standard password checkers.

Entities
Irregular Claude ChatGPT Gemini OpenAI Google Anthropic Dario Amodei HackerOne 1Password Bitwarden GitHub
Your AI-generated password isn't random, it just looks that way

Seemingly complex strings are actually highly predictable, crackable within hours

Connor Jones · Wed 18 Feb 2026 // 14:06 UTC

Generative AI tools are surprisingly poor at suggesting strong passwords, experts say. AI security company Irregular looked at Claude, ChatGPT, and Gemini, and found all three GenAI tools put forward seemingly strong passwords that were, in fact, easily guessable.

Prompting each of them to generate 16-character passwords featuring special characters, numbers, and letters in different cases produced what appeared to be complex passphrases. When submitted to various online password strength checkers, they returned strong results; some said they would take centuries for a standard PC to crack. The checkers rated them as strong because they are unaware of the underlying patterns, so the real time needed to crack them is far shorter than it appears.

Irregular found that all three AI chatbots produced passwords with common patterns, and attackers who understood those patterns could use that knowledge to inform their brute-force strategies.

The researchers prompted Claude, running the Opus 4.6 model, 50 times, each in a separate conversation and window, to generate a password. Of the 50 returned, only 30 were unique (20 duplicates, 18 of which were the exact same string), and the vast majority started and ended with the same characters. Irregular also noted that none of the 50 passwords contained repeating characters, another indication they were not truly random. Tests involving OpenAI's GPT-5.2 and Google's Gemini 3 Flash also revealed consistencies among the returned passwords, especially at the beginning of the strings.
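As a rough illustration of the kind of analysis described above, duplicate counting and shared-prefix detection over a batch of generated passwords can be sketched in a few lines of Python. The sample strings here are hypothetical placeholders mimicking the reported pattern (same start, same end), not Irregular's actual data:

```python
from collections import Counter
import os

def analyze(passwords):
    """Count duplicates and find the longest prefix shared by every sample."""
    counts = Counter(passwords)
    duplicates = sum(c - 1 for c in counts.values() if c > 1)
    prefix = os.path.commonprefix(passwords)  # longest common leading string
    return {"unique": len(counts), "duplicates": duplicates, "common_prefix": prefix}

# Hypothetical samples: note the repeated string and the shared "Kj" prefix.
samples = ["Kj9#mPx2vQ!wR4z&", "Kj9#mPx2vQ!wR4z&", "Kj8#nLt5bW!qY7x&", "Kj7#rTq3cS!uE2m&"]
print(analyze(samples))
# → {'unique': 3, 'duplicates': 1, 'common_prefix': 'Kj'}
```

For truly random 16-character passwords, both the duplicate count and the common prefix should be empty for any realistic sample size; non-zero values are an immediate red flag.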
The same results were seen when prompting Google's Nano Banana Pro image-generation model: Irregular gave it the same prompt, but asked for a random password written on a Post-It note, and found the same Gemini password patterns in the results.

The Register repeated the tests using Gemini 3 Pro, which returns three options (high complexity, symbol-heavy, and randomized alphanumeric). The first two generally followed similar patterns, while the third appeared more random. Notably, Gemini 3 Pro returned its passwords with a security warning that they should not be used for sensitive accounts, given that they were requested in a chat interface. It also offered to generate passphrases instead, which it claimed are easier to remember but just as secure, and recommended users opt for a third-party password manager such as 1Password or Bitwarden, or the native iOS/Android managers on mobile devices.

Irregular estimated the entropy of the LLM-generated passwords using the Shannon entropy formula, working out the probabilities of where characters are likely to appear from the patterns in the 50-password outputs. The team used two methods of estimating entropy: character statistics and log probabilities. They found that the entropy of 16-character LLM-generated passwords was around 27 bits and 20 bits respectively. For a truly random password, the character-statistics method expects an entropy of 98 bits, while the method using the LLM's own log probabilities expects 120 bits.
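The character-statistics approach can be approximated by estimating a character distribution at each position from the observed passwords and summing the per-position Shannon entropies, H = -Σ p·log₂(p). This is a minimal sketch under that assumption, not Irregular's exact methodology (which also models cross-position dependencies via the patterns it found):

```python
from collections import Counter
from math import log2

def positional_entropy(passwords):
    """Sum per-position Shannon entropies, treating each position independently."""
    length = len(passwords[0])  # assumes equal-length samples
    total = 0.0
    for i in range(length):
        counts = Counter(pw[i] for pw in passwords)
        n = sum(counts.values())
        total += -sum((c / n) * log2(c / n) for c in counts.values())
    return total

# For reference: a truly random 16-char password over the 94 printable ASCII
# symbols carries 16 * log2(94) ≈ 104.9 bits. Heavily patterned samples,
# where most positions take only a handful of values, score far lower.
```

If every sample shares the same character at a given position, that position contributes zero bits, which is how the shared prefixes and suffixes Irregular observed collapse the total so dramatically.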
In real terms, that means LLM-generated passwords could feasibly be brute-forced in a few hours, even on a decades-old computer, Irregular claimed.

Knowing the patterns also reveals how often LLMs have been used to create passwords in open source projects: the researchers showed that searching for the common character sequences across GitHub and the wider web returns test code, setup instructions, technical documentation, and more.

Ultimately, this finding may usher in a new era of password brute-forcing, Irregular said. It also cited comments made last year by Dario Amodei, CEO of Anthropic, who said AI will likely soon be writing the majority of all code. If that's true, then the passwords it generates won't be as secure as expected.

"People and coding agents should not rely on LLMs to generate passwords," said Irregular. "Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation."

The team also said developers should review any passwords that were generated using LLMs and rotate them accordingly. It added that the "gap between capability and behavior likely won't be unique to passwords," and the industry should be aware of that as AI-assisted development and vibe coding continue to gather pace.
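For contrast, the standard remedy is to draw passwords from a cryptographically secure random number generator rather than an LLM. In Python that is the `secrets` module; the alphabet and length below are illustrative choices, not a prescription:

```python
import secrets
import string

def random_password(length=16):
    """Draw each character uniformly with a CSPRNG.

    Entropy is length * log2(len(alphabet)) bits, since every character
    is chosen independently and uniformly at random.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

With 94 printable symbols, a 16-character password from this sketch carries about 104.9 bits, in line with the 98 to 120 bits the researchers expect from a truly random password, and orders of magnitude beyond the roughly 20 to 27 bits they measured for LLM output.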
®