
Study Warns Against AI-Generated Passwords

A new study reveals the insecurity of AI-generated passwords. Photo: Getty Images

February 25, 2026, 1:37 pm | Read time: 3 minutes

Let ChatGPT create a password? It sounds convenient and secure, but it can actually be a security risk. Language models seem intelligent and creative, yet they hit a fundamental limit when it comes to password creation.

A recent study by the security company “Irregular” shows that passwords generated by large language models (LLMs) may appear complex but contain recognizable patterns. The reason lies in how these systems work: LLMs are trained to predict the most likely next character. This is the opposite of what a secure password needs. A password must be drawn uniformly at random and be unpredictable, not statistically plausible.

Why LLMs Are Structurally Unsuitable

According to a report by “heise,” the study found recurring sequences and similar structures in tests with OpenAI ChatGPT 5.2, Claude Opus 4.6, and Google Gemini 3 Flash. In some cases, the models even generated identical or very similar passwords multiple times. The measured entropy, which describes the degree of unpredictability, was significantly below that of a cryptographically secure password, i.e., one considered practically unbreakable by mathematical security standards.
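How much entropy a truly random password carries follows directly from its length and character set. A minimal sketch (the character-set size and length here are illustrative, not values from the study):

```python
import math

def uniform_entropy_bits(charset_size: int, length: int) -> float:
    """Entropy in bits of a password whose characters are chosen
    independently and uniformly from a set of charset_size symbols."""
    return length * math.log2(charset_size)

# 16 characters drawn from the 94 printable ASCII symbols (no space)
print(round(uniform_entropy_bits(94, 16), 1))  # roughly 105 bits
```

A patterned password effectively has a much smaller search space, so its entropy, and with it the attacker's work, shrinks accordingly.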

Because language models do not distribute characters uniformly at random but instead produce typical patterns, the passwords become more predictable. This makes attacks easier. In a so-called brute-force attack, an attacker automatically tries character combinations until the correct password is found. The more predictable the structure, the faster the attacker reaches the goal. While a cryptographically strong password can theoretically withstand attacks for decades, the AI-generated passwords measured in the study, with significantly lower entropy, could be cracked in hours or a few days. The researchers explicitly advise against using LLMs for password generation.
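The impact of lost entropy on cracking time can be estimated with simple arithmetic. A sketch with an assumed attacker speed (the guess rate and the low-entropy figure are hypothetical, not taken from the study):

```python
def crack_time_seconds(entropy_bits: float, guesses_per_second: float) -> float:
    """Expected time to find a password by exhausting, on average,
    half the keyspace at the given guess rate."""
    return (2 ** entropy_bits) / 2 / guesses_per_second

RATE = 1e12  # hypothetical: one trillion guesses per second

# ~105 bits (uniform 16-char password): astronomically long
print(crack_time_seconds(105, RATE) / (3600 * 24 * 365.25), "years")

# ~40 bits (heavily patterned password): about half a second
print(crack_time_seconds(40, RATE), "seconds")
```

Each bit of entropy doubles the attacker's work, which is why even a modest structural bias in the generated characters compounds into a dramatically weaker password.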


And What About Other Systems?

Computers cannot generate true randomness on their own. As the content delivery provider Cloudflare explains, they operate deterministically: the same inputs always lead to the same outputs. For normal software, this predictability is intentional. For encryption and password security, however, it is a disadvantage.

Therefore, systems that require true randomness use so-called cryptographically secure random number generators. These rely on unpredictable inputs from the real world to generate mathematically unbreakable values.
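The difference is easy to observe in Python: a seeded pseudo-random generator reproduces its output exactly, while the `secrets` module draws from the operating system's entropy pool. A minimal illustration:

```python
import random
import secrets

# Deterministic: the same seed always yields the same "random" bytes.
a = random.Random(42).randbytes(8)
b = random.Random(42).randbytes(8)
print(a == b)  # True, fully predictable if the seed is known

# Cryptographically secure: backed by the OS entropy source.
print(secrets.token_hex(8))  # different on every run
```

This is why Python's own documentation warns against using the `random` module for security purposes and points to `secrets` instead.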

Cloudflare demonstrates what such entropy sources can look like. The infrastructure provider uses physical, chaotic processes to generate cryptographic keys: the movements of lava lamps in San Francisco, a double pendulum in London, and radioactive decay in Singapore. These processes are physically unpredictable and thus provide true random values.

What Does This Mean for Practice?

If you want to create secure passwords, you don’t need 50 lava lamps in your living room. An established password manager is completely sufficient. It uses the cryptographically secure random functions of the operating system to generate truly unpredictable sequences of characters and then stores them encrypted.
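What a password manager does internally can be approximated in a few lines with Python's `secrets` module, which wraps the operating system's cryptographically secure random source. A simplified sketch, without the manager's encrypted storage:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character uniformly at random from letters,
    digits, and punctuation using the OS-backed CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # unpredictable on every run
```

Because every character is chosen independently and uniformly, such a password achieves the full entropy its length allows, exactly what the LLM-generated passwords in the study lacked.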


Language models may seem impressive. However, they are not built for true randomness. When it comes to security, you should rely on tools specifically designed for that purpose.

This article is a machine translation of the original German version of TECHBOOK and has been reviewed for accuracy and quality by a native speaker. For feedback, please contact us at info@techbook.de.
