Artificial intelligence (AI) has once again opened the door for criminals to devise creative new ways to defraud people in the crypto space. OnlyFake, a controversial service that uses AI to create realistic fake IDs, has reportedly been used to bypass Know Your Customer (KYC) checks on major crypto exchanges. Meanwhile, IBM Security researchers have unveiled "audio-jacking," a new cyberattack method that uses generative AI to manipulate live audio conversations. On a more positive note, Roblox has introduced an AI-powered translation model aimed at enabling real-time text-based interactions among players who speak different languages.
Crypto Exchanges Face New Threats from AI-Generated Fake IDs
A new service named OnlyFake is making serious waves in the crypto and financial services sector by offering to generate realistic fake identity documents, such as driver's licenses and passports, from 26 countries, including the United States, Canada, Britain, Australia, and several European Union member states, for just $15 each. The service, which claims to use artificial intelligence "neural networks" and "generators," accepts payment in various cryptocurrencies through Coinbase's commercial payments service, and it has reportedly succeeded in passing KYC checks on multiple crypto exchanges.
According to 404 Media, the service even managed to bypass the KYC verification process of crypto exchange OKX using a photo it generated of a British passport that appeared to be casually laid on a bedsheet. There is also a Telegram group where users share their success stories of using these IDs to beat verification processes at financial platforms including Kraken, Bybit, Bitget, Huobi, and PayPal.
The pseudonymous owner of OnlyFake, who goes by "John Wick," confirmed to 404 Media that the IDs can fool checks at major exchanges such as Binance, Kraken, Bybit, Huobi, Coinbase, and OKX, as well as the crypto-friendly neobank Revolut. Despite the service's alarming capabilities, OnlyFake's website claims that it does not "manufacture forged documents" and states that its templates are intended solely for entertainment and illustrative purposes.
Creating a fake document on OnlyFake is reportedly very quick, taking less than a minute. Users can upload a personal photo or select one from a provided library. The service also offers the ability to generate up to 100 fake IDs at once from Excel spreadsheet data, and it provides options for spoofing image metadata, including the GPS location, date, time, and device used to take the photo.
The development raises serious concerns against a backdrop of increasingly sophisticated scam tooling, including AI deepfake tools that challenge the effectiveness of video-based identity verification.
Audio-Jacking: AI's New Cyber Threat
Meanwhile, IBM Security researchers have uncovered a new form of cyberattack called "audio-jacking," which uses generative AI and deepfake audio technology to hijack and manipulate live conversations. The technique allows attackers to process live audio from a conversation, such as a phone call, and alter it when certain triggers, such as specific keywords or phrases, are detected. In a demonstration, the AI intercepted a speaker's request for bank account information during a conversation and substituted the authentic voice with a deepfake one that provided a different account number, without either participant detecting the swap.
What Are Deepfakes?
Deepfakes are hyper-realistic digital manipulations of audio, video, and images, created using artificial intelligence (AI) and machine learning techniques. The term "deepfake" is derived from "deep learning," a subset of AI that trains algorithms to recognize patterns and generate realistic content. By analyzing vast amounts of data, deepfake technology can synthesize faces, voices, and movements, making it increasingly difficult to distinguish between real and fabricated media. The technology has advanced to the point where it can create highly convincing forgeries of individuals saying or doing things they never actually said or did.
How Deepfakes Pose Security Threats
- Disinformation and Misinformation: Deepfakes can be weaponized to spread false information, undermining trust in media and institutions. By fabricating speeches, statements, or actions of public figures, malicious actors can influence public opinion, manipulate elections, and incite social unrest. The ability to create believable deepfakes makes it challenging for the public to discern truth from deception.
- Corporate Espionage and Fraud: In the corporate world, deepfakes can be used to impersonate executives or key employees in video conferences, phone calls, or email communications. This can lead to unauthorized access to sensitive information, financial theft, or manipulation of stock prices. Deepfakes can also facilitate social engineering attacks, where employees are tricked into divulging confidential information or performing actions that compromise security.
- Cybersecurity Threats: Deepfakes pose significant cybersecurity risks. They can be used to create realistic phishing attacks, where targets are deceived into clicking on malicious links or providing personal information. Deepfakes can also undermine biometric security systems that rely on facial recognition or voice authentication, allowing unauthorized access to secure systems and facilities.
- Personal Privacy and Reputation: Individuals are at risk of having their likeness used in compromising or defamatory ways. Deepfakes can be used for blackmail, harassment, or to damage someone's personal or professional reputation. The ease with which deepfakes can be created and distributed online exacerbates the potential for widespread harm.
- National Security: On a broader scale, deepfakes can threaten national security. They can be used to create fake diplomatic communications or military commands, causing confusion and potentially escalating conflicts. Deepfakes can also be employed in psychological operations (PSYOPs) to demoralize troops or influence the outcome of critical national security decisions.
Combating the Threat of Deepfakes
Addressing the deepfake threat requires a multi-faceted approach:
- Technological Solutions: Developing advanced detection algorithms that can identify deepfakes is crucial. Researchers are working on AI systems that can recognize subtle inconsistencies in deepfake content; a toy illustration of this idea follows the list.
- Legislation and Regulation: Governments need to enact laws that criminalize the malicious use of deepfakes and protect individuals' privacy and reputation.
- Public Awareness and Education: Increasing public awareness about deepfakes and promoting media literacy can help individuals better identify and critically evaluate suspicious content.
- Collaboration: Collaboration between technology companies, governments, and cybersecurity experts is essential to develop effective strategies and share knowledge on combating deepfake threats.
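To make the detection idea above concrete, here is a deliberately simple sketch, not a production detector: many synthesized or re-encoded audio clips carry statistical artifacts, such as an abrupt loss of high-frequency energy, that genuine microphone recordings usually lack. The cutoff frequency, threshold, and file name below are illustrative assumptions only; real detectors rely on trained classifiers rather than a single hand-picked feature.

```python
import numpy as np
from scipy.io import wavfile

# Toy heuristic: flag clips whose spectrum contains almost no energy above a
# cutoff frequency, a crude proxy for the band-limiting artifacts that some
# synthesized or heavily re-encoded audio exhibits. Purely illustrative.
CUTOFF_HZ = 7_000
SUSPICION_THRESHOLD = 0.01  # assumed value for demonstration

def high_band_energy_ratio(path: str) -> float:
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                 # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return spectrum[freqs >= CUTOFF_HZ].sum() / spectrum.sum()

if __name__ == "__main__":
    ratio = high_band_energy_ratio("suspect_call.wav")  # hypothetical file
    verdict = "suspicious" if ratio < SUSPICION_THRESHOLD else "no obvious artifact"
    print(f"high-band energy ratio: {ratio:.4f} -> {verdict}")
```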
The IBM experiment showed how relatively simple it is to build such a system, underscoring the powerful capabilities of modern generative AI. According to IBM Security's blog post, the primary challenge was not building the AI system but capturing audio from a microphone and feeding it to the generative AI.
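IBM has not published its tooling, so the sketch below only illustrates the benign plumbing the researchers describe: capturing a short chunk of live microphone audio and running it through a speech-to-text model to watch for a trigger keyword. It uses the open-source sounddevice and openai-whisper packages; the chunk length and trigger words are assumptions for illustration, and the voice-substitution step is deliberately omitted.

```python
import numpy as np
import sounddevice as sd
import whisper  # openai-whisper; any speech-to-text model would work here

SAMPLE_RATE = 16_000                 # Whisper expects 16 kHz mono audio
CHUNK_SECONDS = 5                    # assumed chunk length
TRIGGER_WORDS = {"bank", "account"}  # hypothetical trigger keywords

model = whisper.load_model("base")

def capture_chunk() -> np.ndarray:
    """Record a short chunk of audio from the default microphone."""
    chunk = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()                        # block until the recording finishes
    return chunk.squeeze()

if __name__ == "__main__":
    audio = capture_chunk()
    text = model.transcribe(audio, fp16=False)["text"].lower()
    print("heard:", text)
    if any(word in text for word in TRIGGER_WORDS):
        print("trigger keyword detected in the live audio")
```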
While executing such an attack would likely require an element of social engineering or phishing, the implications of audio-jacking are significant. Beyond financial fraud, it could also serve as a tool for invisible censorship, potentially altering live news broadcasts or even political speeches in real time.
Roblox Unveils AI Translation Model
In other AI news, Roblox, the popular online gaming platform, has taken a big step toward breaking down language barriers among its global player base. In a recent announcement, Roblox's Chief Technology Officer, Dan Sturman, unveiled an AI-powered "unified translation model" designed to enable real-time text-based conversations between players who speak different languages.
The system is built on a large language model (LLM) that operates with a base latency of just 100 milliseconds, so conversations flow seamlessly, without users noticing any delay. The translation system grew out of Roblox's ambition to go beyond automatic translation of static in-game content and enable more dynamic interaction between its users.
An LLM is a type of artificial intelligence (AI) designed to understand and generate human language. These models are built with deep learning techniques, specifically artificial neural networks, which loosely mimic the structure and function of the human brain to process complex patterns in data. LLMs are trained on vast amounts of text, enabling them to perform a wide range of language-related tasks such as translation, summarization, text generation, and more.
How LLMs Work
- Training Data: LLMs are trained on extensive datasets that include books, articles, websites, and other forms of written content. This large and diverse corpus allows the model to learn the nuances of human language, including grammar, context, idioms, and the relationships between words and concepts.
- Neural Network Architecture: The core of an LLM is its neural network architecture, typically consisting of millions or even billions of parameters (weights and biases). These parameters are adjusted during training to minimize errors in the model's predictions. The Transformer architecture, which has proven highly effective at handling the complexities of natural language, underpins most modern LLMs.
- Contextual Understanding: One of the key strengths of LLMs is their ability to understand context. Unlike earlier models that relied heavily on predefined rules and patterns, LLMs capture the meaning of words and sentences in relation to their surrounding text, which allows them to generate coherent and contextually appropriate responses.
- Generative Capabilities: LLMs can generate human-like text based on the input they receive. This generative capability is harnessed in applications such as writing assistance, chatbots, and content creation, and the generated text can be difficult to distinguish from human-written text, making LLMs powerful tools for automated language tasks.
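As a concrete illustration of the generative capability described above, the short sketch below uses the open-source Hugging Face transformers library to load a small pretrained Transformer (GPT-2) and continue a prompt. The library, model, and prompt are examples chosen here for illustration, not anything specified by Roblox or IBM.

```python
from transformers import pipeline

# Load a small pretrained Transformer language model (GPT-2, ~124M parameters).
generator = pipeline("text-generation", model="gpt2")

prompt = "Real-time translation in online games matters because"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with statistically likely text.
print(outputs[0]["generated_text"])
```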
Applications of LLMs
- Natural Language Processing (NLP): LLMs are at the forefront of advancements in NLP, powering applications like sentiment analysis, named entity recognition, and machine translation. They enhance the ability of machines to understand and process human language in a meaningful way.
- Conversational AI: LLMs are integral to the development of chatbots and virtual assistants. These systems can engage in natural conversations with users, providing information, answering questions, and performing tasks based on user input.
- Content Generation: LLMs are used to create high-quality written content for various purposes, including marketing, journalism, and creative writing. They can generate articles, reports, social media posts, and even poetry and fiction.
- Code Generation: LLMs have also shown proficiency in generating code, assisting software developers by providing code snippets, debugging help, and even full-fledged programming solutions based on natural language descriptions.
- Personalization: By analyzing user preferences and behavior, LLMs can personalize content recommendations, enhancing user experiences on platforms like e-commerce sites, streaming services, and news outlets.
According to Sturman, the main challenges in developing the translator were designing a system capable of translating between any of the 16 supported languages and achieving the speed necessary for real-time communication. To overcome these hurdles, Roblox decided against building a dedicated model for each language pair, which would have required 256 different models (16 × 16 combinations). Instead, it developed a single transformer-based LLM that handles all language pairs, streamlining the process considerably.
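Roblox has not released its model, but the publicly available M2M-100 model on Hugging Face works the same way in miniature: a single many-to-many Transformer serves every language direction, selected with a target-language token, instead of a separate model per pair. The model name and example sentences below are illustrative choices, not Roblox's actual system.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# One many-to-many model covers every language direction via a target-language
# token, instead of one dedicated model per (source, target) pair.
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

def translate(text: str, src: str, tgt: str) -> str:
    tokenizer.src_lang = src                      # declare the source language
    encoded = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **encoded,
        forced_bos_token_id=tokenizer.get_lang_id(tgt),  # force the target language
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# The same weights handle English-to-French and Spanish-to-German alike.
print(translate("Where is the next checkpoint?", src="en", tgt="fr"))
print(translate("¿Dónde está el siguiente punto de control?", src="es", tgt="de"))
```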
Roblox trained the LLM on both public and private data sources and collaborated with "expert" translation apps to refine each language's nuances. The model also tackles the challenge of translating less common language pairs by employing "back translation" techniques to improve accuracy and readability. Furthermore, Roblox incorporated human evaluators to keep the system updated with the latest slang and trending terms, ensuring the translator remains relevant and effective.
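Back translation is a standard data-augmentation technique: monolingual text in the target language is translated back into the source language to create synthetic training pairs for directions with little parallel data. The sketch below illustrates the idea with publicly available OPUS-MT models from Hugging Face; Roblox has not disclosed which models or language pairs it uses internally.

```python
from transformers import pipeline

# Forward and reverse translators for one illustrative language pair.
en_to_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
fr_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

# Back translation: take monolingual target-language text and translate it back
# into the source language, yielding a synthetic (source, target) training pair.
french_sentence = "Le serveur redémarre dans cinq minutes."
synthetic_english = fr_to_en(french_sentence)[0]["translation_text"]
print("synthetic training pair:", (synthetic_english, french_sentence))

# Round-tripping the synthetic source also gives a rough consistency check.
round_trip = en_to_fr(synthetic_english)[0]["translation_text"]
print("round trip:", round_trip)
```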
This new translation feature has already shown positive impacts on user engagement and session quality. With over 70 million daily active users from more than 180 countries, Roblox's initiative not only enhances the gaming experience for its diverse user base but also aligns with CEO David Baszucki's vision of interoperability in the metaverse, where users can freely move digital assets across multiple platforms.