AI Is Slowly Outperforming Human-written Phishing Emails, and That Is a Cause for Concern! Every day, threat actors send millions of phishing emails worldwide to lure users into divulging Personally Identifiable Information or other critical data, such as credit card credentials or bank account details. Here's how a team of researchers leveraged AI to write phishing emails that proved more convincing than human-written ones.
Spear phishing is a social engineering technique that targets a specific individual to trick them into divulging confidential information. Crafting highly targeted spear-phishing emails in bulk, however, takes considerable effort and time. In a recent test, a team of researchers found that they could use Natural Language Processing (NLP) to generate targeted phishing emails. The team concluded that AI/ML could be used to run spear-phishing campaigns at a devastating scale.
AI Can Write Better Phishing Emails Than Humans!
At the recent Black Hat and DEF CON security conferences in Las Vegas, a team of researchers from Singapore's Government Technology Agency presented the results of their test of AI/ML-generated phishing emails. Here's how the experiment unfolded:
- The team created the phishing emails using the Generative Pre-trained Transformer 3 (GPT-3) autoregressive language model in combination with other AI-as-a-service platforms.
- They tested these emails by sending them to more than 200 of their colleagues, alongside phishing emails written manually without GPT-3's help.
- Unlike real phishing emails, these messages contained no malicious links or attachments, only benign tracking links that reported the total click-through count back to the team (see the sketch after this list).
- To the team's surprise, the click-through rate of the GPT-3-generated emails was much higher than that of the manually crafted ones.
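The researchers haven't published their measurement setup, but this kind of benign click-through tracking is standard in authorized phishing simulations. Below is a minimal sketch, assuming a Flask endpoint and a per-recipient token; all names here (the /t/&lt;token&gt; route, CLICKS, the landing-page URL) are illustrative assumptions, not the team's actual tooling.

```python
# Minimal sketch of benign click-through tracking for an authorized
# phishing simulation. All names are illustrative assumptions, not
# the researchers' actual setup.
from collections import Counter

from flask import Flask, redirect

app = Flask(__name__)

# token -> click count; a real deployment would persist this
CLICKS = Counter()

@app.route("/t/<token>")
def track(token: str):
    """Record a click for the recipient identified by <token>,
    then send them to a harmless landing page."""
    CLICKS[token] += 1
    return redirect("https://example.com/security-awareness")

@app.route("/report")
def report():
    """Aggregate click-through counts for the experiment team."""
    return {
        "total_clicks": sum(CLICKS.values()),
        "unique_recipients_clicked": len(CLICKS),
    }

if __name__ == "__main__":
    app.run(port=8080)
```

Each simulated email embeds a link like https://example.com/t/abc123 with a token unique to its recipient, so the team can count clicks without ever delivering anything malicious.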
Why It Is A Cause For Concern
According to Eugene Lim, a cybersecurity specialist on the team that ran the test, training and developing a good AI model takes a great deal of time, effort and money. But once someone succeeds in building one and offers it as an AI-as-a-service product, anyone can use that service to generate phishing emails without bearing the upfront investment.
If a threat actor gets hold of this methodology, the consequences could be disastrous: they would be able to send targeted emails in bulk to thousands of unsuspecting individuals.
How OpenAI Is Doing Its Part to Prevent Malicious Use of Its AI
OpenAI, the creator of GPT-2, GPT-3 and DALL-E, has stated that it reviews every request for GPT-3 access and monitors production use before a product goes live.
OpenAI also says it is constantly working to improve its safeguards so that the system doesn't end up in the hands of malicious actors. But while access to the OpenAI API requires passing various checks and measures, other AI-as-a-service players in the market offer easy free trials with far less scrutiny.
Final Words
The team behind this phishing experiment used AI-as-a-service products and the GPT-3 deep learning model to learn the traits and interests of their colleagues and generate targeted spear-phishing emails at scale. The exercise demonstrates how AI tools can be a double-edged sword: the same models that help spam filters identify phishing emails and separate them from legitimate mail can, if misused, enable threat actors to launch phishing attacks on a massive scale. A sketch of that defensive side follows.
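To illustrate the defensive half of that trade-off, here is a minimal sketch of a text classifier that flags likely phishing emails, using scikit-learn's TF-IDF features and logistic regression. The tiny inline dataset is an illustrative assumption; production filters train on large labeled corpora and draw on many more signals than the message text alone.

```python
# Minimal sketch of the defensive side: a text classifier that flags
# likely phishing emails. The inline dataset is an illustrative
# assumption; real filters train on large labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Your account is suspended, verify your password here immediately",
    "Urgent: confirm your bank details to avoid closure",
    "Team lunch is moved to 1pm on Friday, see you there",
    "Attached is the quarterly report we discussed yesterday",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = ["Please verify your password to keep your account active"]
print(model.predict(incoming))        # e.g. [1] -> flagged as phishing
print(model.predict_proba(incoming))  # class probabilities
```

The irony the researchers point to is that both sides of this sketch improve together: better language models make generated phishing harder to distinguish, which in turn raises the bar for the classifiers trying to catch it.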