Malicious AI in FinTech: 2024 Threat Prediction

The development of digital AI systems has expanded the range of possibilities for criminals to conduct attacks in the Web3 space, including sophisticated phishing and malware, as well as manipulating LLMs to supply developers with false information.

Robot stealing money
WormGPT is among the popular remodeled AI systems based on ChatGPT that aids scammers in generating convincing emails for phishing attacks.

In its recent "Story of the Year," Kaspersky, one of the leading cybersecurity firms, has shared an overview of emerging threats stemming from the development of artificial intelligence (AI) and machine learning (ML) algorithms. It highlights the misuse of generative AI by cybercriminals, encompassing scenarios involving the creation of malware, phishing emails, and deepfakes, and provides an overview of the 2024 trends.

Unfortunately, many of the cybersecurity risks mentioned by Kaspersky can have a serious effect on crypto users.

The ubiquity of artificial intelligence vs trust and reliability issues

Among the ever-growing number of AI users, many tend to grant excessive trust to ML-powered applications. However, Kaspersky emphasizes that "the technology is very new and not yet mature," citing the term "hallucinating," which was named the "word of the year for 2023" by the Cambridge Dictionary, with one of its primary definitions being the provision of false information by AI.

"LLMs [large language models] are known not just to produce outright falsehoods but to do it very convincingly," Kaspersky adds.

Read also: Inferno Drainer Is Dead, but Angel Drainer Thrives

AI script writer and content generator producing false information

One of the common threats facing the cryptocurrency community is the propagation of false information by AI, which can lead to the spread of misguided investment strategies and potential financial losses. Purposefully altered AI, modified intentionally to produce false information, can generate the AI response motivating the system’s users to take actions beneficial for criminals.

Furthermore, misplaced trust in AI and a lack of awareness can also impact Web3 developers. LLMs may suggest code snippets and development solutions containing flaws, resulting in vulnerabilities in critical components of crypto projects, including smart contracts and wallets.

Limited risk management

Kaspersky also emphasizes the current relatively low effectiveness of AI tools in identifying and flagging phishing threats, which are excessively common in the crypto space.

AI script generator vulnerabilities and threats for crypto users and projects

According to Kaspersky, instruction-following language learning models are particularly vulnerable to two specific cybersecurity threats: prompt injection and prompt extraction.

Altered AI: prompt injection

Prompt injection involves overwriting system prompts and changing instructions for the LLM. By manipulating LLM-based systems, hackers can develop chatbots instructed to execute malicious actions that are otherwise prevented by system instructions. As mentioned earlier, when such a remodeled AI answers questions, the AI response is falsified to provide its users with malicious information. For instance, an AI tool can generate scripts and responses containing false information about market trends and other aspects of investment, which can impair decision making and take users far from achieving their financial goals.

ChatGPT jailbreaking
Source: Kaspersky, SecureList

Prompt extraction threatening the financial sector

Moreover, prompt injection supports another malicious practice specific to the LLM technology, known as prompt extraction, which involves retrieving proprietary or sensitive information accessible to a chatbot.

These threats pose a significant danger to the integrity and security of crypto platforms, as criminals can gain knowledge about the tools used for developing the project's systems, as well as their vulnerabilities.

Remodeled AI tool through jailbreaking

In addition to these risks, Web3 projects relying on AI tools can become victims of jailbreaking, which poses a risk of reputational damage, especially if the AI-based chatbot used by these projects is involved in the development of the project’s branding of financial companies. According to Kaspersky, jailbreaking is the practice of overcoming restrictions on certain topics, such as discussing people based on their demographics or the preparation of abusive substances, which are introduced by legitimate machine learning developers. In other words, jailbreaking turns a legitimate AI tool into an uncensored AI generator.

Further implications for FinTech companies

The impact of these threats becomes even more severe in Web3 projects using AI in the outside world, for instance, for scheduling appointments or sending emails. Reputational damage, data breaches, and phishing attacks are only some of the examples of issues that can be caused by these threats.

In its "Vulnerability Severity Classification for AI Systems," Microsoft mentions further security threats associated with ML models that can compromise Web3 projects.

Artificial intelligence fails prey to input perturbation

Among inference manipulation techniques related to LLMs, input perturbation, also known as adversarial examples or model evasion, can be potentially harmful. Similarly to the command injection mentioned earlier, this method requires manipulating the model’s response. However, input perturbation specifically focuses on compromising models, making them produce incorrect outputs.

Vast data sets compromised and affected natural language processing

Model poisoning or data poisoning, "the ability to poison the model by tampering with the model architecture, training code, hyperparameters, or training data," as per Microsoft, allows malicious actors to introduce unintended behaviors and backdoors in the trained model, which can supply crypto users with compromised content. This, in turn, can help malicious actors leverage poor decisions made by affected users.

Data analytics and prompt extraction

In addition to prompt extraction mentioned by Kaspersky, Microsoft also cites input extraction, or "the ability to extract or reconstruct other users’ inputs to the model," which can lead to severe data breaches.

FinTech industry compromised by model stealing

One more notable threat related to AI systems that can affect crypto projects and their users is model stealing, described by Microsoft as "the ability to infer/extract the architecture or weights of the trained model." One possibility to abuse the system provided to criminals by model stealing is creating a functionally equivalent copy. In the context of cryptocurrency projects, this threat can be particularly serious, especially if the stolen model is used to make decisions or generate content that impacts users or the overall system.

Malicious AI bot breakthrough of the year

As Kaspersky reports, the threats discussed earlier are not a mere possibility. They are already a reality.

For example, in 2023, The Kaspersky Digital Footprint Intelligence team identified numerous instances of generative AI misuse on the dark web, involving scenarios like generating malware, automatic replies on dark web forums, and developing malicious tools and jailbreak commands. The Kaspersky team also cites the development of black hat chatbot counterparts of tools and uncensored AI generators, which were initially intended for legitimate purposes, for instance, WormGPT.

Read also: Over $8.4 Million Stolen in Crypto Exploits Within a Single Week

Worm GPT - tailored solutions for scammers

WormGPT is an alternative to ChatGPT, described as "easier to use for nefarious purposes" by the team behind the AI-empowered multi-channel security tool SlashNext.

According to SlashNext, WormGPT is an AI module based on the GPTJ language model, developed in 2021. The tool boasts a range of features, including unlimited character support, chat memory retention, and code formatting capabilities.

WormGPT
Source: SlashNext

Convincing AI tool used to write scripts and malicious emails

SlashNext highlights the "remarkable persuasiveness" and "strategic cunningness" of emails created by WormGPT for effective BEX and phishing attacks, which make fraud detection rather challenging. The impeccable grammar of WormGPT is noted as one of its greatest advantages for scammers, significantly reducing the probability of the emails it generates being flagged as suspicious. Due to its capabilities, it remains an accessible tool even for attackers with limited skills who want to advertise phishing scams as legitimate investment opportunities or carry out other types of social engineering attacks.

Machine learning algorithms used to deceive scam victims

The use of AI in phishing attacks is prevalent, as ML algorithms allow these tools to create particularly persuasive and effective phishing content. However, Kaspersky notes that "high-profile BEC [business email compromise] attacks are most likely operated by skilled criminals who can make do without a writing aid, while spam messages are usually blocked based on metadata rather than their contents."

Malicious video ads and deepfakes

Kaspersky emphasizes that crafting deepfakes and voicefakes, potentially useful for attacks based on impersonation, still requires significant skills and resources, which withhold them from spreading.

AI facilitating cybersecurity

Despite ongoing threats, generative AI systems are also supporting cyber defenders. Kaspersky explains that "AI and ML have long played a crucial role in defensive cybersecurity, enhancing tasks like malware detection and phishing prevention."

AI-empowered data analysis for the protection of the FinTech industry

A notable example is the community-driven initiative on GitHub, featuring over 120 Generative Pre-trained Transformer (GPT) agents dedicated to cybersecurity. These agents contribute to a collaborative effort, utilizing generative AI for various security applications. In addition, there are specialized tools helping analysts extract security event logs, compile lists of autoruns and running processes, and proactively hunt for indicators of compromise.

Furthermore, some LLMs used in reverse engineering can decipher complex code functions, whereas AI-powered chatbots can create diverse scripts for threat analysis and remediation.

Read also: Novel AI Anlas Token - What Is It?

On top of that, the integration of AI-based chatbots by cybersecurity teams simplifies access to public threat data.

Kaspersky’s 2024 predictions for AI in the realm of cybersecurity

The Kaspersky team expects the development of LLMs to expand the attack surface and create more complex vulnerabilities. Uncensored AI generators can also be leveraged by criminals for fraud and scams, generating more convincing and sophisticated fraudulent content, such as landing pages. Automated cyberattacks and more effective malware created with the help of AI are anticipated in 2024.

Despite the continuous development of AI, Kaspersky does not expect particularly significant changes in the threat landscape next year.

Meanwhile, cybersecurity teams will continue leveraging AI-based tools and may soon count on the emergence of comprehensive AI cybersecurity assistants, for instance, "an assistant to cybersecurity professionals based on LLM or an ML model, capable of various red teaming tasks that range from suggesting ways to conduct reconnaissance, exfiltration, or privilege escalation in a potential attack to semi-automating lateral movement." However, at press time, such digital assistants were just science fiction.

At the same time, such tools, which are not yet available, will require stringent regulation as they may raise ethical concerns, such as the potential for misuse by malicious actors. However, the current trend involves great fragmentation of the global AI regulatory landscape preventing nations from creating a unified security framework.