OpenAI Disrupts 5 Operations Using Its AI for Deceptive Influence

AI-driven fraud has increased by 20% year-over-year. OpenAI is taking action against covert influence operations abusing its technology, while financial institutions scramble to counter increasingly sophisticated AI-enabled scams.

OpenAI recently disrupted several covert influence operations that used its technology to manipulate global public opinion by generating social media content, running bots, and managing websites. In finance, AI fraud is also on the rise, with sophisticated scams challenging traditional detection methods. AI-driven fraud has surged by 20% year-over-year and is projected to become a $100 billion problem. Things could get worse still: many experts believe AI is only in its infancy and has plenty of room to grow.

OpenAI Takes Action

Artificial intelligence firm OpenAI revealed that it identified and disrupted several online campaigns that abused its technology to manipulate public opinion around the world. On May 30, the company announced that it had disrupted five covert influence operations that used its models to support deceptive activity across the internet. These bad actors used AI to generate comments on articles, create names and bios for social media accounts, and translate and proofread text.

One of these operations, “Spamouflage,” used OpenAI’s models to research social media activity and generate multilingual content on platforms such as X, Medium, and Blogspot in an effort to manipulate public opinion and influence political outcomes. The operation also used AI to debug code and to manage databases and websites.

Another group, “Bad Grammar,” targeted Ukraine, Moldova, the Baltic States, and the United States, using OpenAI models to run Telegram bots and generate political comments.

A third group, “Doppelganger,” used AI models to generate comments in multiple languages, including English, French, German, Italian, and Polish, which were then posted on platforms such as X and 9GAG to manipulate public opinion.

OpenAI also disrupted the “International Union of Virtual Media,” which used AI to generate long-form articles, headlines, and website content published on its linked website. Additionally, a commercial company called STOIC used AI to generate articles and comments on social media platforms such as Instagram, Facebook, and X, as well as on other websites associated with the operation.

The content posted by these operations spanned a wide range of topics, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments.

Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, noted that this was the first time a major AI firm had disclosed how its specific tools were being used for online deception.

Altman Allegations

Interestingly, it was also recently revealed that OpenAI co-founder and CEO Sam Altman was reportedly dismissed from the company in part for withholding information from the board, according to former board member Helen Toner, who alleges that Altman lied to board members before they decided to fire him.

In an episode of The TED AI Show podcast published on May 28, Toner claimed that Altman made it very difficult for the board to do its job by withholding information, misrepresenting company activities, and in some cases outright lying. As an example, Toner said Altman did not inform board members about the release of OpenAI’s ChatGPT; they learned about the launch on Twitter in November 2022.

Altman was removed from the board and briefly dismissed as OpenAI’s CEO in November 2023 for being “not consistently candid in his communications with the board.” The decision faced heavy backlash from the company’s employees: 505 of its roughly 770 staff members signed a letter demanding the board’s resignation, and Altman was reinstated within days.

Toner also revealed that Altman hid his ownership of the OpenAI Startup Fund from the board. Founded in 2021, the fund is a $175-million venture capital vehicle that invests in AI, technology, healthcare, and education companies aiming for positive global impact.

According to Toner, Altman did not tell the board about his ownership, despite frequently presenting himself as an independent board member with no financial interest in the company.

According to a March 29 filing with the United States Securities and Exchange Commission (SEC), OpenAI has since changed the fund’s governance structure, and it is no longer owned or controlled by Altman.

AI-Generated Fraud in Finance

In the world of finance, AI is both a tool and a source of new problems. While it brings innovation, productivity, and efficiency gains for companies, it also introduces sophisticated challenges that many financial institutions are not prepared to tackle.

Additionally, the rise of more accessible AI tools has made it difficult for financial institutions to distinguish AI-enabled fraud from other types of fraud, leaving a blind spot in their systems and making it hard to fully grasp the scope and impact of AI-driven fraud.

Ari Jacoby, an AI fraud expert and the CEO of Deduce, explains that the combination of legitimate personally identifiable information (PII) with socially engineered email addresses and legitimate phone numbers makes detection by legacy systems almost impossible. Naturally, this difficulty makes preventing and remediating the major drivers of fraud exceptionally hard, especially as new types of fraud emerge.
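
To illustrate the blind spot Jacoby describes, here is a minimal Python sketch of a legacy-style check that validates each field in isolation. The rules, field names, and sample record are hypothetical, not Deduce’s actual detection logic; the point is that a synthetic identity stitched together from real stolen PII, a freshly created email address, and a legitimate attacker-controlled phone number passes every individual check.

```python
# Hypothetical sketch: why legacy rule-based checks miss synthetic identities.
# Rules, field names, and the sample record are illustrative assumptions.
import re

def legacy_identity_check(record: dict) -> bool:
    """Naive legacy-style validation: each field is checked in isolation."""
    has_valid_ssn = re.fullmatch(r"\d{3}-\d{2}-\d{4}", record["ssn"]) is not None
    has_valid_email = re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", record["email"]) is not None
    has_valid_phone = re.fullmatch(r"\d{10}", record["phone"]) is not None
    return has_valid_ssn and has_valid_email and has_valid_phone

# A synthetic identity: real (stolen) PII paired with a brand-new email
# address and a legitimate phone number controlled by the fraudster.
synthetic = {
    "ssn": "123-45-6789",                # legitimate stolen PII
    "email": "jane.d.1988@example.com",  # socially engineered, newly created
    "phone": "2025550147",               # real number, attacker-controlled
}

print(legacy_identity_check(synthetic))  # True: each field looks fine alone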

The challenge with solutions is that technology is advancing rapidly, and so is the skill set of those committing AI fraud. That is why it is so important for financial institutions to stay ahead of the curve and understand where AI comes into play in cases of fraud.

The first step in implementing solutions is to analyze the online activity patterns of individuals and groups to identify fraudulent actions that might appear legitimate. Legacy fraud prevention methods are no longer sufficient, and financial institutions must become relentlessly proactive in combating the rise of AI-generated fraud.

Jacoby suggests that a layered program is needed to pinpoint existing fraudsters and to prevent new fake identities from infiltrating. By layering solutions, using massive data sets to identify patterns, and analyzing trust scores more accurately, AI-driven fraud can be better mitigated, as the sketch below illustrates.
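
As a rough illustration of such a layered program, the following Python sketch combines several independent risk signals into a single trust score that drives the decision. The signal names, weights, and thresholds are illustrative assumptions rather than any vendor’s real implementation.

```python
# Illustrative sketch of a layered trust-scoring pipeline. Signal names,
# weights, and thresholds below are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Signals:
    device_seen_before: bool   # device/browser fingerprint known from history
    email_age_days: int        # how long the email address has existed
    velocity_last_hour: int    # signup/login attempts tied to this identity
    geo_matches_history: bool  # location consistent with past activity

def trust_score(s: Signals) -> float:
    """Combine layered signals into a 0.0-1.0 trust score (higher = safer)."""
    score = 0.5
    score += 0.2 if s.device_seen_before else -0.2
    score += 0.15 if s.email_age_days > 365 else -0.15  # brand-new emails are risky
    score -= 0.1 * min(s.velocity_last_hour, 3)         # bursts of attempts are risky
    score += 0.1 if s.geo_matches_history else -0.1
    return max(0.0, min(1.0, score))

def decide(s: Signals) -> str:
    score = trust_score(s)
    if score >= 0.7:
        return "allow"
    if score >= 0.4:
        return "step-up verification"  # formerly low-risk, now treated as medium
    return "block / manual review"

# New device, 12-day-old email, burst of attempts, unfamiliar location:
print(decide(Signals(False, 12, 3, False)))  # block / manual review
```

The value of the layering is that a synthetic identity may pass any single check, but it rarely passes all of them at once.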

Financial fraud teams are increasingly reclassifying previously low-risk activities as medium risk and taking additional steps to prevent fraud at every stage of the customer life cycle. The threat of AI fraud is being taken seriously as one of the major issues plaguing the financial industry, and the underlying technology is still only in its early stages.

Fraud has surged by 20% year-over-year, with the rise of AI greatly increasing the prevalence of synthetic identities. AI-driven fraud is the fastest-growing aspect of identity fraud today and is projected to be a $100 billion problem this year.

Beyond traditional financial institutions, AI-generated fake IDs also have the potential to reshape crypto exchange KYC measures and cybersecurity as a whole. The issue is big enough that it is actively being addressed by regulators.

On May 2, the United States Commodity Futures Trading Commission (CFTC) Commissioner Kristin Johnson advanced proposals for the regulation of AI technologies in U.S. financial markets, including heightened penalties for those who intentionally use AI for fraud, market manipulation, or evasion of regulations.

Today’s AI Like Early Internet

Even though AI already poses a major threat, the situation is unlikely to improve any time soon. According to Clara Shih, CEO of Salesforce AI, the current state of AI development is comparable to the early days of the internet. At Viva Tech Paris 2024, Shih likened today's AI landscape to 1988, when the internet was in its infancy.

She believes that while the first wave of AI automates mundane tasks, the technology's maturation will open up a whole new world of possibilities. Shih also suggested that society is on the brink of major AI advancements, much like the moment the World Wide Web emerged.