In a year with elections planned and already completed across the globe—from Venezuela to France to the United States—artificial intelligence (AI) has emerged as a critical focal point in political movements and legislation alike. With AI's rapid rise in the last 18 months, legislators and voters are voicing increasing concerns about its implications for influencing public opinion and shaping election outcomes.
For example, earlier this year the U.S. Federal Communications Commission (FCC) declared the use of AI voice cloning in scam calls illegal, and the agency is set to vote on whether to adopt new rules that would require callers to disclose their use of AI to consumers.
This reflects a growing recognition of the need for transparency when AI-powered technology is used in large-scale efforts like political campaigns. Such initiatives are becoming increasingly important as AI's influence over advertising and the spread of information continues to expand, with significant implications for the integrity of elections and voters' privacy.
Can Voters Identify AI-generated Advertisements?
New data highlights a significant lack of confidence among U.S. voters in their ability to identify AI-generated advertisements. In fact, only one in ten voters reports feeling “very confident” in their ability to distinguish ads created by AI from those produced by humans. This gap points to a broader issue: as AI technologies grow more sophisticated, it becomes harder for the average voter to discern which messages are machine-generated and which are authentic.
Despite reports earlier this year of AI-driven campaigns, such as the AI-generated robocall purportedly from President Joe Biden ahead of the New Hampshire primaries, 61% of voters are unsure whether they have encountered AI in election campaigns this year. This uncertainty underscores AI's pervasive and often covert role in political advertising.
As AI systems become more adept at mimicking human interaction and generating persuasive content, recognizing and interpreting these ads becomes more difficult for voters. The implications are profound: a staggering 81% of voters view AI as a threat to election security. This concern is rooted in AI's potential to undermine the democratic process, whether by spreading misinformation, manipulating public opinion, or creating deepfakes that deceive voters. The anonymity and scale at which AI can operate amplify these risks, making it crucial for regulatory frameworks to adapt to these emerging threats.
With elections occurring worldwide this year, raising awareness about the increasing use of targeted political ads and AI-generated content is crucial. Voters need to be informed about how their data is being utilized and how the digital content they encounter affects their decisions.
Elections & The Rise of AI Policy
As elections take place worldwide, AI and privacy regulations are advancing alongside them. The research found that 81% of U.S. voters are now more inclined to advocate for data privacy legislation than in previous election cycles.
As AI technologies continue to evolve and become more integrated into political advertising, the call for regulatory action grows louder. This rise of AI has created a heightened awareness of the need to protect personal data, as AI-driven political ads often rely on extensive data collection and analysis to target voters precisely. The intersection of AI and data privacy raises critical questions about how personal information is used and safeguarded, and whether existing regulations are sufficient to address these challenges.
Legislators across the globe, meanwhile, face the daunting task of crafting policies that keep pace with rapid technological advancement while safeguarding voter privacy. Here are a few examples of how AI is being considered in elections worldwide:
United States:
Over the past three years, Vice President Kamala Harris, the Democratic nominee for U.S. President, has led the White House's work on AI as the technology has taken off. Notably, she brought the top executives of OpenAI, Microsoft, Google, and Anthropic to the White House to agree on voluntary safety standards for the technology.
She also led a White House executive order governing how the federal government uses and develops AI, and she pushed the U.S. Congress to adopt regulations protecting individuals from AI-driven job losses and other harms, although little legislation has emerged and the companies have so far faced few roadblocks.
In tandem with the Presidential election taking place this November, the U.S. is undergoing its second major attempt at comprehensive federal privacy legislation: the American Privacy Rights Act (APRA). Earlier this year, the APRA was scheduled for a markup before the House Energy and Commerce Committee, which would have allowed lawmakers to analyze and amend the bill. The markup was canceled at the last minute, a move largely attributed to Republican concerns that the bill's private right of action could negatively impact smaller businesses. As a result, the APRA is at a standstill, potentially until the U.S. Presidential Election on November 5.
European Union:
The European Union’s General Data Protection Regulation (GDPR) has been in effect for over six years, governing how personal data is collected, used, transferred, stored, and processed. Companies have faced steep penalties for noncompliance: Meta, for example, was fined $1.3 billion for transferring data collected from Facebook users in Europe to the United States in violation of GDPR.
Like the U.S., the EU has begun to see how AI can shape elections and voter sentiment. During France’s parliamentary election, far-right parties reportedly used AI-generated content focused on hot-button issues like immigration. In total, the National Rally and Reconquest parties published 23 AI-generated images across 81 posts on Facebook, Instagram, and X, according to research by AI Forensics.
Venezuela:
AI is also being used positively in elections. Venezuela's recent presidential election was said to have been plagued by misinformation throughout the campaign process, and amid that climate Efecto Cocuyo, an independent Venezuelan digital news outlet, launched an AI chatbot on WhatsApp called “La Tía del WhatsApp” (The Aunt of WhatsApp).
“La Tía,” the first chatbot of its kind created by Venezuelan media, was closely followed by a similar model from Cazadores de Fake News, a fact-checking website. Both chatbots use international phone numbers as a safeguard against censorship and to protect user data.
Intended as a direct antidote to disinformation and censorship, the chatbot lets users send photos, videos, WhatsApp text chains, or any other content circulating online to be verified. If the submitted content has already been fact-checked by Cocuyo Chequea (the fact-checking division of Efecto Cocuyo), the chatbot immediately responds with the verified information.
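Efecto Cocuyo has not published the chatbot's internals, but the workflow described above, matching a forwarded item against an archive of already-verified claims and replying instantly on a hit, can be sketched in a few lines. The sketch below is a hypothetical illustration, not the outlet's actual implementation: the VERIFIED_CLAIMS store, the fingerprint and handler functions, and the choice of an exact SHA-256 digest are all assumptions made for the example.

```python
import hashlib

# Hypothetical store of previously verified claims, keyed by a content
# fingerprint. In a real deployment this would be a database maintained
# by the fact-checking team; it starts empty here.
VERIFIED_CLAIMS = {}

def fingerprint(content: bytes) -> str:
    """Derive a stable lookup key from submitted content (text bytes,
    image bytes, etc.). An exact SHA-256 digest is used here purely
    for simplicity."""
    return hashlib.sha256(content).hexdigest()

def register_fact_check(content: bytes, verdict: str) -> None:
    """Record a verdict once the fact-checking team has reviewed a claim."""
    VERIFIED_CLAIMS[fingerprint(content)] = verdict

def handle_incoming_message(content: bytes) -> str:
    """Respond to a forwarded message: return the stored verdict if the
    content was already checked, otherwise acknowledge receipt (a real
    bot would queue the item for human review)."""
    verdict = VERIFIED_CLAIMS.get(fingerprint(content))
    if verdict is not None:
        return verdict
    return ("This content has not been verified yet. "
            "It has been forwarded to our fact-checkers for review.")

# Example: the team verifies a viral text chain, then a user forwards it.
chain = "Polling stations will close two hours early.".encode("utf-8")
register_fact_check(chain, "FALSE: official closing times are unchanged.")
print(handle_incoming_message(chain))  # prints the stored verdict
```

Note that an exact digest only matches byte-identical forwards; a real system would more plausibly use perceptual hashing or text similarity so that lightly edited copies of the same claim still resolve to the same verdict.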
The current political climate underscores the need for urgent, comprehensive action to address AI's implications in political advertising. As elections approach and AI plays an increasingly prominent role, legislators must enact robust regulations that ensure transparency, protect election integrity, and safeguard voter privacy.
The stakes are high, and the time for action is now.