Just days before the Lok Sabha election 2024 results, OpenAI, the prominent artificial intelligence research organisation known for developing ChatGPT, reported on a covert influence operation allegedly designed to sway Indian electoral outcomes. The operation, reportedly orchestrated by an Israeli firm, STOIC, utilised AI models to generate and disseminate anti-BJP and pro-Congress content across various social media platforms.
OpenAI’s report claims that STOIC, a political campaign management firm, ran a network of accounts that posted AI-generated content critical of the ruling Bharatiya Janata Party (BJP) and supportive of the opposition Congress party. This network, active on platforms including X (formerly Twitter), Facebook, Instagram, websites, and YouTube, aimed to manipulate public opinion and influence the election results through deceptive means.
BIG🚨🚨 Many political pandits were shocked to see a spike in Congress social media impressions in the last 6 months.
Psephologists were shocked when they compared Congress's social media presence and ground support (which had negligible ground support).
Today, OpenAI has… pic.twitter.com/xy36MzrKSG
— BALA (@erbmjha) May 31, 2024
The operation, dubbed “Zero Zeno” by OpenAI, targeted audiences in India, Canada, the United States, and Israel, with content primarily in English and Hebrew. The Indian component of the operation began in early May, focusing on generating English-language content to influence Indian voters.
An attempt was made to influence Indian Election (mis)using AI model by generating fake anti-BJP and Pro-Congress opinions on social media.
OpenAI (famous for ChatGPT) busted 5 such covert operators who were exploiting their AI model.
Don't miss this thread!
— The Hawk Eye (@thehawkeyex) May 31, 2024
According to OpenAI, its defence mechanisms identified and disrupted these influence operations, suspending these actors’ use of its AI models before significant harm could be inflicted. This intervention is part of OpenAI’s broader efforts to counter deceptive and abusive activities using AI models over the past three months.
The report categorises the influence operation as part of a broader trend involving similar activities from Russia, China, and Iran. Specifically, OpenAI identified:
- A Russian operation named “Bad Grammar” targeted Ukraine, Moldova, the Baltic States, and the United States.
- Another Russian threat actor, known as “Doppelganger,” focused on content about Ukraine.
- A Chinese network called “Spamouflage” praised China and criticised its adversaries.
- An Iranian actor, operating under the International Union of Virtual Media (IUVM), supported Iran and criticised Israel and the US.
Reacting to the report, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar emphasised the seriousness of these allegations. He stated, “It is evident and obvious that BJP was and is the target of influence operations, misinformation, and foreign interference, being done by and/or on behalf of some Indian political parties.” He called for a thorough investigation to expose and scrutinise these activities, highlighting their threat to the democratic process.
The OpenAI report highlights the potential for AI technologies to be misused to manipulate political outcomes. The Indian government and platforms like X and Meta have taken steps to neutralise these threats. Still, the complexity and evolving nature of such operations make them an ongoing challenge.
This incident raises critical questions about who financed these operations and the extent of their impact on the electoral process. The Ministry of Electronics and Information Technology has been urged to enhance defence mechanisms to detect and counteract such influence operations, ensuring the integrity of the democratic process.