    AI Deepfakes in Politics: A Gathering Storm for Democracy

    When Gail Huntley, a 73-year-old New Hampshire resident, received a pre-recorded message, seemingly from President Joe Biden, urging her not to vote in an upcoming primary, she was initially confused. She fully intended to vote for Biden, so why was he discouraging her? She quickly suspected the call was fake, but was perturbed to discover that the convincing voice was an AI-generated deepfake. The incident marked an alarming evolution in political manipulation, and within weeks US regulators had made it illegal for robocalls to use AI-generated voices.

    That robocall was only the first hurdle in an ongoing struggle between governments, tech companies, and civil society organizations as they debate how to regulate an information sphere in which anyone can turn politicians into puppets, replicating their voices and likenesses with disturbing authenticity. With elections imminent in the US, UK, and India, among other countries, the democratic process is under grave threat from such AI-driven manipulation.

    Highly realistic ‘AI fakes’ have already been used to manipulate elections in several countries, prompting concern among watchdogs that tech companies, having cut staff, are poorly prepared to monitor these developments effectively. Digital platforms, a relatively new frontier for political campaigns, are increasingly susceptible to malevolent actors.

    President Biden’s interest in the dangerous potential of AI escalated after he watched a Mission: Impossible film in which a rogue AI is a major plot point. He subsequently signed an executive order requiring AI developers to share safety test results and related data with the government. The move isn’t isolated: the EU is close to passing one of the world’s most comprehensive AI regulations, though the proposed legal framework isn’t slated to take effect until 2026. The UK, by contrast, has been criticized for a lackadaisical response.

    These developments in the US carry global implications, as the country is home to many of the world’s most transformative tech companies. However, experts such as former Facebook public policy director Katie Harbath argue that the measures aren’t enough. At the same time, there are concerns that the burgeoning Chinese AI industry might outpace the US if over-regulation stymies innovation.

    Harbath considers 2024 a year to ‘panic responsibly’. She believes that the responsibility for regulating AI-generated content will initially fall on the very companies creating the tools. While some companies have updated their protocols or banned the use of their products for political campaigns, enforcing these rules is an entirely different challenge.

    The crux of the issue is whether AI content outside the hyper-focused context of the US presidential election is being regulated effectively. A salient example: OpenAI’s tools were used extensively in the recent Indonesian elections for various purposes, despite the company’s ban on their use in political campaigns. Harbath identifies this as a potential problem area, given the difficulty of enforcing such policies outside the US.

    In last year’s Slovak national election, an AI-manipulated audio recording may have influenced the outcome by fueling a scandal. The incident underlines concerns about loopholes in platforms’ policies and the danger such content poses on larger platforms, especially when it spreads during legally mandated pre-election blackout periods.

    Despite these pressing concerns, there is growing recognition of the public’s resilience to these ‘AI fakes’. Harbath, however, believes the primary concern lies beyond today’s technology and manipulation tactics, in the unknown threats lurking just out of sight. “New tech happens, new bad actors appear. There is a constant ebb and flow that we need to get used to living in,” she warns.
