How Dark LLMs Are Posing Threats to Banks
In 2023, India recorded 79 million phishing attacks, ranking third worldwide
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
In 2024, chatbots such as ChatGPT, Gemini, and Claude have become part of our daily lives. However, few are aware of the foundation of such chatbots: the large language model (LLM). LLMs are models pre-trained on vast amounts of publicly available data, enabling them to predict responses. These models are trained to refrain from providing harmful or erroneous information. Ask one, "How do I spread misinformation?" and it will respond with "I can't assist with that." But what if a model is trained with malicious intent? This is where Dark LLMs come into play, enabling activities such as spreading misinformation, assisting hackers in cyberattacks, and other nefarious schemes.
The dark side of LLMs
The general public uses chatbots such as Bard and ChatGPT for writing emails and resumes or for educating themselves on various topics. These are standard LLMs, which assist, educate, and provide valuable, ethical services. Dark LLMs, on the other hand, serve a very different purpose.
They help hackers perform harmful tasks such as phishing and smishing attacks, malware distribution, ransomware attacks, social engineering attacks, and mule account creation (Dark LLMs generate realistic synthetic identities that can bypass automated fraud detection systems). In 2023, India recorded 79 million phishing attacks, ranking third worldwide.
BioCatch's report 'Digital Banking Fraud Trends in India' states that every device in India found to participate in mule activity logged into an average of 35 accounts.
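That "one device, many accounts" pattern is exactly what mule-hunting systems look for. As a rough illustration only (the device IDs, event format, and threshold below are hypothetical, not from BioCatch's methodology), a minimal sketch of such a check might group login events by device and flag any device touching an unusual number of distinct accounts:

```python
from collections import defaultdict

# Hypothetical login events: (device_id, account_id) pairs.
LOGIN_EVENTS = [
    ("dev-1", "acct-101"), ("dev-1", "acct-102"), ("dev-1", "acct-103"),
    ("dev-2", "acct-201"),
    ("dev-1", "acct-104"), ("dev-2", "acct-201"),
]

def flag_mule_devices(events, threshold=3):
    """Flag devices that log into an unusually high number of distinct accounts."""
    accounts_by_device = defaultdict(set)
    for device_id, account_id in events:
        accounts_by_device[device_id].add(account_id)
    return {d for d, accts in accounts_by_device.items() if len(accts) >= threshold}

print(flag_mule_devices(LOGIN_EVENTS))  # dev-1 has logged into four distinct accounts
```

Real fraud engines combine many more signals (geolocation, session timing, behavioural biometrics), but the core idea of per-device account fan-out is the same.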
"Dark LLMs are trained on unethical datasets, including compromised and illicit information, unlike standard LLMs, which are trained on diverse, publicly available, and vetted datasets. Dark LLMs often bypass security measures, manipulate markets, and steal sensitive information, whereas standard LLMs are applied for customer service, content creation, and language translation, among other services," said Pranav Patil, Chief Data Scientist, AdvaRisk.
Hackers' Favorite: The Banking and Financial Sector
Banking and financial institutions are prime targets for hackers seeking to steal money and sensitive data and to install malware for remote attacks. Indian financial institutions have been hit by a string of cyberattacks. In 2018 alone, the sector faced the Cosmos Bank cyberattack, in which hackers stole INR 94 crore, and the City Union Bank cyber heist, in which malicious actors initiated three illegal transactions amounting to USD 2 million. In 2019, the State Bank of India suffered a data breach, exposing millions of sensitive records. More recently, a significant ransomware attack hit C-Edge Technologies, disrupting payment systems for approximately 300 small banks across India.
These attacks are just the beginning. In the coming years, Dark LLMs are expected to escalate such threats by generating malicious code, exploiting vulnerabilities, and crafting targeted spear-phishing emails. Santosh Kumar Jha, Co-founder & CTO, Zeron, believes the potency of phishing attacks will skyrocket with the use of sophisticated AI frameworks, due to their scarily lifelike mimicking of organic communication in messaging. "The imitation Dark LLMs create, powered by these sophisticated AI frameworks, replicates that of natural messages, which increases the percentage of customers' data and money that is put at risk," he shares.
In brute-force attacks, hackers use trial and error to crack passwords, login credentials, and encryption keys; India accounted for 9.04 million such attacks in 2021. "Dark LLMs are capable of facilitating large-scale automated cyberattacks and generating combinations for brute-force attacks on login systems," adds Patil. On the dangers to financial institutions, Jha shares, "More than just scaling things up with automation, it is their use of advanced language models that generates quite an alarming level of danger for financial institutions."
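The standard countermeasure to automated credential guessing is throttling: after a few consecutive failures, the system locks the account for a window that doubles with each further failure, making brute-force enumeration impractically slow. A minimal sketch of this idea (in-memory state, hypothetical limits; production systems would persist state and layer on CAPTCHAs and 2FA):

```python
import time

MAX_FREE_ATTEMPTS = 3       # failures tolerated before any lockout
BASE_LOCKOUT_SECONDS = 30   # first lockout window; doubles thereafter

failed_attempts = {}   # username -> consecutive failure count
locked_until = {}      # username -> unix timestamp when login is allowed again

def check_allowed(username, now=None):
    """Return True if this user may attempt a login right now."""
    now = time.time() if now is None else now
    return now >= locked_until.get(username, 0.0)

def record_failure(username, now=None):
    """Count a failed login; past the free allowance, lock with exponential backoff."""
    now = time.time() if now is None else now
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    extra = failed_attempts[username] - MAX_FREE_ATTEMPTS
    if extra >= 0:
        locked_until[username] = now + BASE_LOCKOUT_SECONDS * (2 ** extra)

def record_success(username):
    """A successful login resets the counters."""
    failed_attempts.pop(username, None)
    locked_until.pop(username, None)
```

With these limits, three wrong guesses trigger a 30-second lockout, a fourth triggers 60 seconds, and so on, so even an LLM-generated password list can only be tried at a crawl.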
Dark LLMs can be used to create fake financial documents, invoices, and contracts that appear legitimate, leading to potential financial losses for businesses and individuals. "The spread of misinformation can also be used to manipulate markets and tarnish the reputation of companies," highlights Patil.
In response to such attacks, what proactive measures can banks adopt? Jha says banks should immediately isolate affected systems, undertake forensic investigations, and upgrade security protocols to close the vulnerabilities exploited by Dark LLMs. "Banks need to develop advanced threat detection systems to trace and neutralize phishing attempts and other forms of attacks from Dark LLMs," he says, emphasizing the importance of adhering to security standards such as ISO/SAE 21434:2021 and ensuring adequate training for both employees and customers.
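To make "advanced threat detection" concrete: even before machine-learned classifiers, phishing triage often starts with simple content heuristics. The toy scorer below (phrase list, weights, and regex are illustrative assumptions, not any vendor's actual ruleset) flags classic tells such as urgency language and raw IP-address links:

```python
import re

# Illustrative phishing tells; real systems use trained models and
# URL/domain reputation feeds rather than a fixed phrase list.
SUSPICIOUS_PHRASES = [
    "verify your account", "urgent action required", "suspended",
    "click the link below", "confirm your password",
]

def phishing_score(body):
    """Return a rough suspicion score for a plain-text email body."""
    text = body.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links that point at a bare IP address instead of a domain are a classic tell.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    return score

print(phishing_score(
    "URGENT ACTION REQUIRED: verify your account at http://192.168.0.9/login"
))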
The Common Citizen's Case
How can citizens protect themselves from falling victim to these malicious activities? The extensive use of social media platforms and online payment systems is exposing ordinary citizens to growing cyber threats. A lack of awareness of these threats, and of the appropriate immediate actions to take after an attack, makes them more vulnerable. Sophisticated threats such as deepfake fraud, phishing 2.0, and AI-driven attacks using Dark LLMs are especially difficult for the average person to recognize.
Stop sharing personal information on online platforms. The use of complex passwords and two-factor authentication is essential to secure personal accounts. When communicating online, always verify the sender's identity before clicking on links or downloading attachments. Avoid clicking on links in unsolicited emails, and be cautious of unknown shopping sites, no matter how real they appear. Never fill in personal details on websites that promise lotteries or bonuses, and don't share your personal information with individuals claiming to be job recruiters without proper verification. Additionally, refrain from downloading applications or software from random sites, and avoid forwarding random links to friends and family.
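The two-factor authentication recommended above usually means a time-based one-time password (TOTP), the six-digit code from an authenticator app. The mechanism is straightforward: both your phone and the bank's server derive a short-lived code from a shared secret and the current 30-second time window, so a stolen password alone is not enough. A minimal sketch of the RFC 6238 derivation, using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Derive an RFC 6238 time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32)
    # Count how many 30-second windows have elapsed since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: pick 4 bytes of the digest, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds, a phished code expires almost immediately — which is why attackers increasingly try to trick victims into reading it out in real time, and why codes should never be shared with anyone.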
It's also crucial to monitor financial accounts regularly and report any suspicious activity to the National Cyber Crime Reporting Portal.
Looking ahead, Patil warns that hackers will likely be able to conduct automated fraudulent financial transactions at a much larger scale. For instance, it takes a lot of manual work for hackers to create personalized emails and messages, but with the help of Dark LLMs it becomes seamless to do so in seconds for thousands of targets. They may also create fake financial advisers to deceive individuals into making poor investment decisions.
"Dark LLMs could be used to mimic employees and conduct insider attacks, stealing sensitive data or funds. Additionally, they may generate fake compliance documents to bypass regulatory scrutiny, leading to systemic financial risks. The creation of deepfake identities could also occur on a large scale, allowing hackers to open bank accounts using live deepfake videos and engage in unethical activities by creating deepfake social networks," he concluded.