OpenAI has reported an increase in attempts to misuse its AI models, including generating fake content aimed at influencing elections. In a statement released on Wednesday, the company revealed that cybercriminals are increasingly leveraging AI tools, such as ChatGPT, to create deceptive content like long-form articles and social media comments.
This year alone, OpenAI has disrupted more than 20 such operations, including a cluster of ChatGPT accounts banned in August that produced articles on topics related to U.S. elections. In July, the company also banned several accounts originating from Rwanda that were generating election-related comments for dissemination on the social media platform X.
OpenAI emphasized, however, that none of these attempts to sway global elections gained viral traction or built a lasting audience.
Concerns about the use of AI tools and social media to generate and spread election misinformation have intensified ahead of the upcoming U.S. presidential election. The U.S. Department of Homeland Security has warned of a growing threat from nations such as Russia, Iran, and China, which are reportedly seeking to influence the November 5 election by disseminating fake or divisive information.
Last week, OpenAI cemented its status as one of the world's most valuable private companies following a $6.6 billion funding round. Since its launch in November 2022, ChatGPT has attracted 250 million weekly active users, underscoring its significant impact on the AI landscape.