
The Rising Threat of Generative AI in Social Engineering Cyber Attacks — What You Need to Know

The rise of generative AI is revolutionizing social engineering cyber attacks, making them more sophisticated and harder to detect. As these threats escalate, individuals and organizations must stay informed, exercise caution and employ robust cybersecurity measures to counteract this new wave of AI-driven cybercrime.

By Yehuda Leibler | Edited by Chelsea Brown

Key Takeaways

  • Cybercriminals are using generative AI to carry out increasingly sophisticated social engineering attacks

Opinions expressed by Entrepreneur contributors are their own.

The rapid evolution of artificial intelligence (AI) has brought about significant advancements in various sectors. However, the same technology that now powers our daily lives can also be weaponized by cybercriminals. In fact, we already know AI is being used by hackers. A recent spike in social engineering attacks leveraging generative AI technology has raised alarm bells in the cybersecurity community.

Generative AI, exemplified by tools like OpenAI's ChatGPT, uses machine learning to generate human-like text, video, audio and images. While these tools have numerous beneficial applications, they are also being exploited by malicious actors to carry out sophisticated social engineering attacks. The advanced linguistic capabilities and accessibility of generative AI tools create a breeding ground for cybercriminals, enabling them to craft convincing scams that are increasingly difficult to detect.

Moreover, generative AI can automate the personalization of social engineering attacks on a mass scale. This development is particularly concerning as it erodes one of our most potent defenses against such threats — authenticity. In the face of phishing and similar attacks, our ability to discern genuine communications from fraudulent ones is often our last line of defense. However, as AI becomes more adept at mimicking human communication, our "BS radar" becomes less effective, leaving us more vulnerable to these attacks.

Related: Safeguarding Your Corporate Environment from Social Engineering

How cyber criminals are weaponizing generative AI

A recently published analysis by Darktrace revealed a 135% increase in novel social engineering attacks, a spike it linked to the widespread adoption of generative AI. Cybercriminals are using these tools to crack passwords, leak confidential information and scam users across multiple platforms. This new generation of scams has fueled a surge in concern among employees, with 82% expressing fears about falling prey to these deceptions.

The threat of AI, in this context, is that it substantially lowers, or even eliminates, the barrier to entry for fraud and social engineering schemes. Non-native speakers and attackers with weak writing skills can use generative AI to hold error-free text conversations in virtually any language, making phishing schemes much harder to detect and defend against.

Generative AI can also help attackers bypass detection tools by enabling the prolific production of "creative" variation. An attacker can use it to generate thousands of unique versions of the same message, slipping past spam filters that look for repeated content.
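To make that spam-filter point concrete, here is a minimal, purely illustrative Python sketch (the messages are hypothetical) showing why a filter that only flags exact repeats catches copy-paste spam but misses unique, AI-rewritten variants of the same lure.

```python
import hashlib

seen_digests = set()

def is_repeat(message: str) -> bool:
    """Flag a message only if an identical copy has been seen before."""
    digest = hashlib.sha256(message.strip().lower().encode("utf-8")).hexdigest()
    if digest in seen_digests:
        return True
    seen_digests.add(digest)
    return False

print(is_repeat("Your invoice is overdue. Click here to pay now."))            # False: first sighting
print(is_repeat("Your invoice is overdue. Click here to pay now."))            # True: exact repeat is caught
print(is_repeat("The attached invoice is past due; please settle it today."))  # False: a unique rewrite slips through
```

Because every AI-generated variant produces a different fingerprint, repetition-based filtering alone offers little protection.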

Beyond written communication, other AI engines can produce authoritative-sounding speech that imitates specific people. The voice on the phone that sounds like your boss may well be an AI-based voice-mimicking tool. Organizations should be ready for more complex, multi-channel social engineering attacks, such as an email followed by a call imitating the sender's voice, all with consistent and professional-sounding content.

The rise of generative AI also means that bad actors with limited English skills can quickly create convincing messages that appear authentic. Previously, an email riddled with grammatical errors and claiming to be from your insurance agency was promptly recognized as fraud and disregarded. Generative AI has largely eliminated such obvious indicators, making it harder for users to distinguish authentic communications from fraudulent scams.

To be sure, tools like ChatGPT have built-in limitations designed to prevent malicious use. OpenAI, for instance, has implemented safeguards against generating inappropriate or harmful content. But as recent incidents have shown, these safeguards are not foolproof. In one notable case, users tricked ChatGPT into providing Windows activation keys by asking it to tell a bedtime story that included them. While AI developers are working to limit harmful usage, malicious actors keep finding ways around these restrictions, which means the safeguards built into AI tools are not a defense we can count on.

Related: This Type of Cyber Attack Preys on Your Weakness. Here's How to Avoid Being a Victim.

How to protect yourself and your organization from AI-driven social engineering attacks

The defense against these threats is multi-faceted. Organizations need real-time fraud protection capable of detecting more than the obvious red flags. Some experts suggest fighting fire with fire: using machine learning to flag suspicious attempts and potentially identify AI-generated phishing text.
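As a rough illustration of that fight-fire-with-fire idea, the sketch below trains a simple text classifier to score incoming email for phishing risk. The tiny inline dataset and the model choice are assumptions for demonstration only; a production system would rely on a large labeled corpus and far more capable models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (1 = phishing, 0 = legitimate) -- illustrative only.
emails = [
    "Verify your account immediately or it will be suspended.",
    "Your payment failed. Update your billing details at the link below.",
    "Attached are the meeting notes from Tuesday's project review.",
    "Lunch is on me Friday if the release ships on time.",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = "Urgent: confirm your credentials now to avoid account closure."
phishing_probability = model.predict_proba([incoming])[0][1]
print(f"Estimated phishing probability: {phishing_probability:.2f}")
```

In practice, a score like this would feed into an email gateway or fraud-monitoring pipeline rather than being shown directly to users.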

To defend against AI-driven social engineering attacks and ensure robust personal security, we must adopt a multi-faceted approach. This includes using strong and unique passwords, enabling two-factor authentication, being wary of unsolicited communications, keeping software and systems updated and educating oneself about the latest cybersecurity threats and trends.

While the emergence of free, simple, accessible AI benefits cyber attackers enormously, the solution is better tools and better education: better cybersecurity all around. The sector must adopt strategies that pit machine against machine rather than human against machine. That means building detection systems capable of identifying AI-generated threats, shortening the time it takes to detect and resolve social engineering attacks that originate from generative AI.

In conclusion, the rapid advancement of generative AI presents both opportunities and risks. The growing threat of social manipulation through AI-driven tactics demands heightened awareness and precaution from individuals and organizations alike, along with comprehensive cybersecurity strategies to outmaneuver potential adversaries. Generative AI is already being used in cybercrime, so it is essential to stay alert and ready to counter these threats with every available resource.

Related: 5 Ways to Protect Your Company From Cybercrime

Yehuda Leibler

Entrepreneur Leadership Network® Contributor

Co-Founder at ARX

Yehuda Leibler is the Co-Founder and Chief Technology & Strategy Officer of ARX, a company specializing in data- and AI-driven solutions and advisory for the capital markets. He previously served as the CEO of Cortex Group, a leading technology consultancy, and is a partner at Invicta Ventures.

