Paris – The year 2024 is shaping up as a pivotal juncture for global democracy, marked by elections in over 60 countries representing nearly half of the world’s population. As the political landscape braces for this critical period, the surge of new technologies, particularly artificial intelligence (AI), is expected to subject the integrity of political processes to a significant stress test.
Termed a “make-or-break” year for democracy, 2024 will witness crucial votes in prominent nations such as India, South Africa, Pakistan, Britain, Indonesia, the United States, and the European Union.
The vulnerability of democratic processes to AI-driven disinformation has already been witnessed in Taiwan’s recent presidential election. Despite a substantial disinformation campaign against Mr. Lai Ching-te, who advocates for Taiwan’s independence, voters rallied behind him. Experts attribute the orchestrated effort, which portrayed Mr. Lai as a separatist threat, to China.
Platforms like TikTok were inundated with conspiracy theories and derogatory content in the run-up to the vote, originating primarily from Douyin, China’s counterpart to the app, according to an AFP Fact-Check investigation.
The impact of AI on politics extends beyond Taiwan, raising concerns about escalating polarization and eroding trust in mainstream media globally. AI-generated content has grown so sophisticated that distinguishing fake from real is increasingly difficult, rendering traditional detection mechanisms less and less effective.
Disinformation fueled by AI has been identified by the World Economic Forum (WEF) as the number one threat over the next two years. The potential consequences are dire, ranging from undermining the legitimacy of elections to internal conflicts, terrorism, and, in extreme cases, state collapse.
Groups linked to Russia, China, and Iran are actively employing AI-powered disinformation to “shape and disrupt” elections in rival countries, warns analysis group Recorded Future. The upcoming EU elections in June are anticipated to be targeted, aiming to destabilize the bloc and weaken its support for Ukraine.
Despite recognizing the threat, governments are struggling to keep up with the pace of AI progress. Legislation is under consideration, but it lags behind the rapid advances in AI technology. Initiatives such as the Digital India Act and the EU’s Digital Services Act aim to combat disinformation on online platforms, but skepticism lingers over whether they can be enforced.
China and the EU are developing comprehensive AI laws, but their implementation will take time. The U.S. has taken a step with President Joe Biden’s executive order on AI safety standards, but critics argue it lacks sufficient measures.
In response, tech giants are introducing their own initiatives. Meta requires advertisers to disclose when generative AI is used in their content, while Microsoft offers political candidates a tool to authenticate their content with a digital watermark. Yet a paradox emerges: these platforms increasingly rely on AI itself for verification, raising concerns about the efficacy of automating the fight against disinformation.
As the world grapples with the impending challenges, 2024 emerges as a critical juncture where the fusion of AI and politics will determine the resilience of democracies on a global scale.