The integration of Generative Artificial Intelligence (AI) into election campaigns has redefined the very architecture of political communication and persuasion. As AI becomes deeply intertwined with the campaign process, its role extends beyond strategy to shaping voter perceptions. It is introducing unprecedented precision, scale and personalisation in how campaigns engage with voters and influence public opinion across digital platforms.
Campaign-to-voter communication
The emergence of Gen AI has heralded unprecedented changes in campaign-to-voter communication within contemporary electoral politics. As detailed by Florian Foos (2024), Gen AI offers significant opportunities to reduce costs in modern campaigns by assisting with the drafting of campaign communications, such as emails and text messages. A primary use case for this transformation is the capacity of multilingual AI systems to facilitate direct, dynamic exchanges with voters across linguistic and cultural boundaries.
India's Bhashini initiative is a prime example of this use case. Prime Minister Narendra Modi used the tool during his December 2023 address at the Kashi Tamil Sangamam in Varanasi, where it translated his speech into Tamil in real time. With the integration of AI-driven communication tools, campaigns can now fundamentally alter conventional interaction paradigms, moving from broad mass messaging toward more personal, innovative and highly targeted forms of digital outreach.
The disruptive potential of AI in this domain is significantly amplified when campaigners can access individual-level personal contact data. AI-powered messaging tools can generate and deliver personalised content at scale, raising the possibility of both positive engagement and concerning intrusions into voter privacy. Notable examples from recent electoral practice include the widespread use of AI-generated fundraising emails in United States campaigns, and the deployment in India of AI-generated candidate videos making highly tailored appeals to parliamentary voters. These instances underscore the increasing prevalence and sophistication of dynamic, digital conversations between campaigns and their target electorate.
A defining feature of Gen AI in the campaign context is its versatility as a campaign assistant, capable of providing information, drafting scripts and assisting in both personal and digital communications. The translation of campaign materials, whether text or audio, enables unprecedented reach into diverse and multilingual electorates, supporting more inclusive outreach. Moreover, campaigns have utilised AI in training contexts, such as the development of door-knocking bots in Britain, highlighting the versatility of AI beyond conventional media or digital platforms. Empirical studies, including randomised controlled trials in the US, indicate that using simple AI chatbots to converse with voters can increase turnout, particularly when these conversations are informative and directly address practical concerns such as voting procedures or salient political issues.
Nonetheless, the integration of Gen AI into the campaign sphere is not without short-term barriers. Among the most significant challenges is the apprehension among voters and media regarding the legitimacy and transparency of AI-mediated contact. Such scepticism is compounded at the micro-level by campaigners’ loss of narrative control and the well-documented risk of AI hallucination, in which generative models deviate from official messaging or introduce errors.
In the long term, however, Gen AI promises to reshape the scalability of tailored messaging. While the empirical evidence for AI’s ultimate impact on voter mobilisation and persuasion remains mixed, it is clear that automation holds the potential to overcome many of the scalability and manpower issues traditionally associated with large-scale follow-up and sustained campaign engagement. As AI systems mature, their role in supporting, supplementing and transforming campaign-to-voter communication is likely to deepen, raising fundamental questions about the future of democracy, digital ethics, and electoral oversight within pluralistic societies.
AI as a persuasive campaign tool
The evolution of generative AI, particularly large language models (LLMs), is redefining persuasive campaigning in electoral contexts. Emerging evidence indicates that LLMs are now as effective as humans in drafting persuasive campaign messages, with certain studies highlighting their ability to generate compelling content across diverse platforms and languages. These advances parallel best-practice strategies drawn from volunteer-to-voter initiatives, where the capacity to communicate in multiple languages enables campaigns to bridge community divides and engage more heterogeneous electorates.
One instance of this advantage can be observed in the US, where native-like Spanish-language appeals have proven notably more effective in persuading Hispanic voters to support Hispanic candidates than messages delivered in English or by non-native Spanish speakers. This underscores the ability of AI-driven content to resonate at a culturally and linguistically nuanced level, enhancing the reach and impact of targeted persuasion.
Moreover, the automation of tailored audio-visual content production by Gen AI systems has facilitated the dissemination of highly customised campaign materials via social media and peer-to-peer messaging platforms. Prominent campaigns now harness these capabilities to produce scalable, data-driven outreach strategies that strengthen their persuasive reach. Combined with another key affordance of AI, the ability to deliver recurring follow-ups at massive scale, this enables the cultivation of sustained, trust-based relationships with voters, fostering deeper social connections rooted in intimacy and authenticity and leveraging the precise and adaptive capacities of these technologies. As generative AI continues to advance, its role as a persuasive force within the electoral system is likely to expand, presenting both new opportunities and complex challenges for democratic engagement and electoral integrity.
Botnets and social media
The proliferation of Gen AI has fundamentally altered the landscape of political communication, particularly through botnets and the creation of synthetic identities on social media platforms. Botnets, comprising networks of automated accounts, are now deployed with unparalleled efficiency to post, like, share and comment across social platforms at scale. The principal function of these AI-empowered networks is multifaceted; they play a considerable role in spreading disinformation, amplifying divisive or polarising content, suppressing legitimate discourse and manufacturing an artificial sense of widespread support or dissent for specific viewpoints. Such activities distort public debate and threaten the integrity of democratic deliberation.
Modern advancements in AI have rendered bot behaviour increasingly human-like and, correspondingly, far more challenging to detect and eliminate from online platforms. The sophistication of these generative models allows for the mimicry of human conversational patterns and emotional cues, thereby circumventing many traditional detection mechanisms.
A related phenomenon is the creation of synthetic identities or fake accounts, convincingly crafted by Gen AI and often equipped with complete, realistic profiles including photographs and biographical details. These artificial personas can infiltrate digital communities, spread misinformation and gather intelligence on political rivals, serving both offensive and defensive functions in electoral politics. The task of identifying and countering such activity now requires increasingly sophisticated detection methodologies and robust countermeasures, underscoring the escalating complexity of maintaining authentic digital discourse amidst the rise of AI-generated manipulation. The persistent evolution of these techniques poses a pressing challenge to the integrity and inclusivity of contemporary democracies.
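The detection methodologies mentioned above often begin with simple behavioural heuristics before any deep-learning analysis is applied. The sketch below is purely illustrative, not drawn from any deployed platform system: it scores an account on two crude signals, machine-regular posting intervals and near-verbatim repetition of content (the function name and the equal weighting of the two signals are assumptions made for this example).

```python
from statistics import mean, pstdev

def bot_likeness(post_times, post_texts):
    """Score an account between 0 and 1 using two crude signals:
    machine-regular posting intervals and near-verbatim repetition.
    post_times are seconds, in ascending order; post_texts are the
    corresponding post bodies."""
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    if not intervals or mean(intervals) == 0:
        return 0.0
    # Coefficient of variation of the gaps: automated posting
    # schedules tend to be far more regular than human activity.
    regularity = 1.0 - min(pstdev(intervals) / mean(intervals), 1.0)
    # Share of posts that are exact duplicates of an earlier post.
    duplication = 1.0 - len(set(post_texts)) / len(post_texts)
    return round(0.5 * regularity + 0.5 * duplication, 3)
```

On these signals, an account posting the same slogan every sixty seconds scores near 1, while an account with irregular gaps and varied text scores near 0. Real detectors combine many more features (network structure, account age, language statistics), precisely because sophisticated bots now defeat heuristics this simple.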
The 2016 US Presidential election witnessed extensive use of social bots that mimicked legitimate social media profiles to influence online political discourse. These automated accounts amplified polarising content, manipulated trending topics and generated the illusion of widespread support or opposition. By blending in with real users, social bots distorted authentic public debate, shaping voter perceptions and potentially swaying electoral outcomes through coordinated, covert operations embedded within digital conversations.
Regulation and oversight
The regulation and oversight of generative AI in election campaigns have become central challenges for contemporary democratic societies. A collaborative approach involving policymakers, technologists and civil society is essential, with multi-stakeholder initiatives fostering best practices and establishing shared standards for the ethical use of AI. International cooperation is equally vital, requiring the sharing of information, coordinated responses and collective efforts to counteract emerging threats posed by AI-driven manipulation.
Ethical AI development must prioritise fairness, accountability and transparency in system design. This entails comprehensive risk assessments, bias mitigation strategies and the adoption of inclusive design practices, all of which underpin trust and facilitate effective regulatory oversight. Interpretable AI systems enhance transparency and build public confidence in electoral processes. Legal and policy frameworks must evolve alongside technology, with regulations governing data protection, cybersecurity and AI-specific uses exemplified by the General Data Protection Regulation (GDPR) in the European Union (EU).
Governments and international bodies must articulate clear guidelines and standards for AI deployment in political contexts, focusing on principles such as human agency and oversight, technical robustness, privacy and accountability. The EU Ethics Guidelines for Trustworthy AI, for instance, set benchmarks for these core attributes. Distinguishing authentic from synthetic content, however, remains a persistent technical challenge; technological solutions, including digital watermarking, forensic analysis techniques and AI-driven detection systems, play a pivotal role in identifying and mitigating the impact of automated behaviour by botnets and fake accounts.
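As an illustration of the watermarking approach mentioned above, the sketch below follows the "green-list" statistical watermark proposed for language models by Kirchenbauer et al. (2023): each token pseudo-randomly partitions the vocabulary, a watermarked generator prefers the "green" half, and a detector computes a z-score for the surplus of green tokens. The toy vocabulary, the function names and the 50% green fraction are assumptions made for this example, not a production scheme.

```python
import hashlib
import random

# Toy vocabulary of 1,000 placeholder tokens, illustrative only.
VOCAB = [f"w{i}" for i in range(1000)]

def green_set(prev_token, fraction=0.5):
    """Pseudo-randomly partition the vocabulary, seeded by the previous
    token, and return the 'green' half. A watermarked generator biases
    its sampling toward this set; a detector recomputes it."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermark_z_score(tokens, fraction=0.5):
    """z-score of the observed green-token count against the binomial
    expectation for unwatermarked text. Large positive values suggest
    the text came from a watermarked generator."""
    hits = sum(tokens[i] in green_set(tokens[i - 1], fraction)
               for i in range(1, len(tokens)))
    n = len(tokens) - 1
    return (hits - fraction * n) / (fraction * (1 - fraction) * n) ** 0.5
```

Because the detector needs only the seeding rule, not the model itself, platforms or regulators could in principle verify provenance without access to the generator; the practical obstacles are adversarial paraphrasing and the absence of a shared, standardised scheme across vendors.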
Public awareness and education are indispensable; citizens must be adept at recognising deepfakes, identifying bot activity and assessing the credibility of news sources to respond effectively to AI-enabled manipulation. Ultimately, robust accountability mechanisms and adaptive legal frameworks are required to address the rapidly evolving AI landscape and ensure integrity in electoral campaigns.