Generative AI cuts both ways in the digital realm, and its impact on cybersecurity has been the subject of expert debate ever since ChatGPT brought the technology into the mainstream. Although no major exploits by malicious actors have surfaced yet, security researchers have been showcasing innovative ways in which generative AI can strengthen cybersecurity.
A collaborative team of researchers from ETH Zürich, the Swiss Data Science Center, and SRI International in New York has developed PassGPT, a novel model built on OpenAI’s GPT-2 architecture. PassGPT specializes in generating and guessing passwords, having been trained on millions of credentials exposed in past cyberattacks, most notably the infamous RockYou leak. The researchers report that PassGPT guesses 20% more previously unseen passwords than state-of-the-art GAN-based models.
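To make the approach concrete, here is a minimal sketch, not the authors' released code, of how a small GPT-2-style model might be trained character by character on a leaked password list. The file name, vocabulary handling, and every hyperparameter below are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader
from transformers import GPT2Config, GPT2LMHeadModel

MAX_LEN = 32  # assumed cap: password plus BOS/EOS tokens

# Build a character vocabulary from the leaked-password list
# (one password per line; the file name and encoding are illustrative).
with open("rockyou.txt", encoding="latin-1") as f:
    passwords = [p for p in (line.rstrip("\n") for line in f)
                 if 0 < len(p) <= MAX_LEN - 2]

PAD, BOS, EOS = 0, 1, 2
char2id = {c: i + 3 for i, c in
           enumerate(sorted({c for pw in passwords for c in pw}))}

def encode(pw):
    """<BOS> password <EOS>, padded to a fixed length."""
    ids = [BOS] + [char2id[c] for c in pw] + [EOS]
    return ids + [PAD] * (MAX_LEN - len(ids))

data = torch.tensor([encode(pw) for pw in passwords])

# A deliberately small GPT-2 configuration; the real PassGPT
# hyperparameters may differ.
config = GPT2Config(vocab_size=len(char2id) + 3, n_positions=MAX_LEN,
                    n_embd=256, n_layer=8, n_head=8,
                    bos_token_id=BOS, eos_token_id=EOS)
model = GPT2LMHeadModel(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for epoch in range(3):  # illustrative epoch count
    for batch in DataLoader(data, batch_size=256, shuffle=True):
        labels = batch.masked_fill(batch == PAD, -100)  # skip padding in loss
        out = model(input_ids=batch,
                    attention_mask=(batch != PAD).long(),
                    labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```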
Although PassGPT’s capabilities might evoke concern, its primary objective is to help users create stronger, more intricate passwords and to flag candidate passwords likely to be guessed. The model uses a progressive sampling technique, constructing each password character by character, which makes the generated passwords more resistant to cracking. This approach also lets PassGPT outperform earlier systems based on generative adversarial networks (GANs), in which a generator network produces fabricated samples and a discriminator network tries to distinguish them from authentic data.
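The sampling loop itself is simple to sketch. Continuing with the illustrative model and vocabulary from the training sketch above, the following hedged example generates one password a character at a time, stopping when the model emits its end-of-password token:

```python
import torch

id2char = {i: c for c, i in char2id.items()}

@torch.no_grad()
def sample_password(model, temperature=1.0):
    """Generate one password by sampling a character at a time."""
    model.eval()
    ids = [BOS]
    while len(ids) < MAX_LEN:
        # Condition on everything generated so far; keep the last position.
        logits = model(input_ids=torch.tensor([ids])).logits[0, -1]
        logits[[PAD, BOS]] = float("-inf")  # never emit padding or a second BOS
        probs = torch.softmax(logits / temperature, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1).item()
        if next_id == EOS:  # the model decides the password is complete
            break
        ids.append(next_id)
    return "".join(id2char[i] for i in ids[1:])

print(sample_password(model))
```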
Javi Rando, the creator of PassGPT, has stated that the model can also compute the probability of any given password, and with it assess the password’s strength and vulnerabilities. He emphasizes that the model can flag passwords that conventional strength checkers rate as strong but that generative techniques guess easily. The model can also handle passwords in different languages and produce new passwords that never appeared in its training data.
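This scoring ability falls out of the architecture: unlike a GAN, an explicit language model assigns a probability to any string, so a candidate password can be scored as the product of its per-character conditional probabilities. A hedged sketch, again reusing the illustrative names from the examples above:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def password_log_prob(model, pw):
    """Sum of per-character conditional log-probabilities under the model."""
    model.eval()
    ids = torch.tensor([[BOS] + [char2id[c] for c in pw] + [EOS]])
    log_probs = F.log_softmax(model(input_ids=ids).logits[0, :-1], dim=-1)
    targets = ids[0, 1:].unsqueeze(1)  # each position predicts the next character
    return log_probs.gather(1, targets).sum().item()

# A high log-probability means the model finds the password easy to guess,
# even if a rule-based strength meter would rate it as strong.
print(password_log_prob(model, "password123"))
```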
PassGPT exemplifies how large language models (LLMs) can be adapted to diverse domains and applications by leveraging unusual data sources. Ethical researchers have taken this route before: DarkBERT, for example, was trained on illicit data from the dark web to identify ransomware leak sites and monitor illicit information exchanges.