AI has been a hot topic over the past few years, for both good and bad reasons. Ethereum co-founder Vitalik Buterin recently warned against the hasty integration of AI in high-value blockchain applications like prediction markets or stablecoins, pointing to the risk of AI oracles being compromised.
Meanwhile, OpenAI faces regulatory challenges in Italy, where the Italian Data Protection Authority accused it of violating GDPR, after a data breach and subsequent temporary ban of ChatGPT. In contrast, Tencent, China's largest tech company, is embracing AI in response to declining gaming revenues. The company is focusing on AI development with its large language model Hunyuan, aligning with China's plan to lead in AI by 2030.
Vitalik Buterin's Words of Caution
In a blog post from Jan. 30, Ethereum co-founder Vitalik Buterin warned against hastily integrating artificial intelligence (AI) with blockchain technology, urging particular caution in high-value and high-risk applications to avoid potential pitfalls.
Buterin expressed concerns over the use of AI in critical blockchain applications, like prediction markets or stablecoins, due to the risk of AI oracles being attacked. He warned that a compromised AI oracle could lead to huge financial losses. He did, however, acknowledge areas where AI could be beneficial, such as participating in prediction markets on a micro-scale, which would be impractical for humans, and improving user interfaces for crypto wallets. AI could also assist users by explaining transactions and signatures and by detecting scams.
Despite these potential benefits, Buterin advised against relying only on AI interfaces because of the increased risk of errors. He suggested that AI should complement, rather than replace, conventional interfaces. The biggest risk, according to Buterin, lies in using AI to enforce rules or governance in crypto systems. He pointed out that open-source AI models are susceptible to adversarial attacks because malicious actors can study the code to optimize their attacks. Conversely, closed-source AI models, like those used by the crypto startup Worldcoin, offer "security through obscurity" but lack transparency and assurance of unbiased operation.
The greatest challenge, Buterin noted, is creating a decentralized AI using crypto and blockchain technology that other applications can leverage. He described the concept of a "singleton," a single decentralized trusted AI, as the most challenging to implement correctly. Applications like this could enhance functionality and AI safety while avoiding the centralization risks of mainstream approaches. However, he cautioned that there are many ways in which the underlying assumptions of these applications could fail.
OpenAI Faces Scrutiny by Italian Data Authority
The AI industry's underlying problems are also evident elsewhere: the Italian Data Protection Authority (IDPA) has declared that OpenAI, the company behind the AI chatbot ChatGPT, is in violation of the European Union's General Data Protection Regulation (GDPR). The announcement came on Jan. 29, after a comprehensive "fact-finding activity" by the IDPA, launched in November 2023 to investigate issues related to online AI and data scraping.
The probe scrutinized the adherence of ChatGPT to GDPR provisions and found discrepancies. OpenAI has been given a 30-day window to respond to these allegations. The final decision by the IDPA will take into account the findings of a task force set up under the European Data Protection Framework, which is dedicated to national privacy oversight.
This situation follows Italy's initial decision in March of 2023 to ban ChatGPT, making it the first country to do so. The ban was a reaction to a data breach on the ChatGPT platform that compromised personal user information. Although the ban drew a lot of criticism, Italy agreed to lift it in April 2023, subject to OpenAI meeting certain transparency conditions. Since then, ChatGPT has been available in Italy, but under very close monitoring.
Another significant development in Italy's AI regulation came on Jan. 26, when the city of Trento was fined $54,000 for misusing AI technology in a scientific research project. The fine, the first of its kind for an Italian city, involved the use of cameras, microphones, and social networks in ways that breached established norms.
Looking ahead, Italy, which holds the G7 presidency in 2024, has clearly signaled its intent to prioritize AI regulation. Prime Minister Giorgia Meloni plans to convene a special AI-focused session with G7 members before their first leaders' summit in June.
From Gaming Giant to AI Innovator
Meanwhile, others are welcoming AI technology with open arms. Tencent, China’s most valuable tech company, is turning towards AI in response to declining revenues in its gaming division. Historically, gaming has been a big profit driver for Tencent, contributing nearly a third of its earnings.
However, the sector has seen a major downturn. According to a report from The Verdict, the Chinese gaming industry, which made deals worth $16.9 billion in 2018, saw a steep decline to only $158 million by 2023. Tencent, known for global hit games like PUBG: Battlegrounds and Honor of Kings, has especially felt this impact.
Pony Ma, Tencent's CEO, expressed concern over these challenges at an annual corporate event, admitting that the company has fallen behind its competitors in gaming innovation. In light of these struggles, Tencent is now shifting its focus to AI, a field where it will try to catch up with industry leaders. This move aligns with China’s broader goal of becoming a global AI leader by 2030.
Tencent’s AI aspirations are largely pinned on Hunyuan, a large language model similar to OpenAI's ChatGPT. Launched in September 2023 for enterprise use, Hunyuan quickly gained popularity in China, standing alongside models from Alibaba and Baidu.