Citizens must remain AI-aware as Election Day nears

Oct. 30, 2024
Understanding AI’s role in misinformation and influence campaigns is key to protecting democratic integrity.

Artificial intelligence (AI) has become an integral part of our daily lives, influencing the information we consume online, the content published in the media, and even the malicious threats targeting organizations. With just days left until millions of voters head to the polls, it’s crucial that Americans are fully aware of the potential impact of this powerful technology on the election process.

AI has been, and will continue to be, more prevalent and influential in this year’s election than ever before, especially in the creation of content and news through deepfakes that push disinformation and fearmongering.

Impacts on In-Person Voting

As the presidential election enters its final stretch, the threat of deepfakes and disinformation campaigns looms large. It's crucial to recognize the potential for AI to sway election results by dissuading voters from the polls.

We saw this play out in May when it was reported that a political consultant had produced an AI-generated robocall that sounded like President Joe Biden, discouraging Democrats from voting. More recently, U.S. officials said a video making the rounds online — depicting Pennsylvania mail-in ballots being destroyed — had been manufactured by Russian actors.

These incidents serve as reminders that AI can threaten the public’s fundamental right to make informed voting decisions, and that it can be used to rapidly push messages tied to real events with the aim of scaring people away from the polls.

As history has shown, the threat of disinformation will persist well beyond Election Day due to the capabilities of AI. It's important to remember that AI can automate tasks, personalize attacks, bypass traditional security defenses, and inadvertently introduce vulnerabilities. It can also perpetuate false narratives, adapt to defenses, and evade detection measures. These are all tactics that cybercriminals can employ before and after Election Day. Once the election is over, those seeking to sow chaos could use AI to create false narratives that cast doubt on the election results.

Fighting Malicious Activity

This doesn’t mean AI cannot be used for positive applications. In the context of election security, much of the news coverage surrounding AI concerns negative use cases. But just as AI can be used with malicious intent, the technology can also help prevent and reduce the risk of malicious activity. This can be done in several ways, including improving the identification of synthetic posts and messaging, detecting anomalous activity on networks, and countering physical threats to polls.

Additionally, political parties will likely use AI to support campaigns and create precise messaging for target audiences with a broader reach than ever before. In this case, AI can benefit campaigns looking to set the record straight and fight disinformation.

AI Is Here to Stay

With misinformation and the influence of AI likely to remain an issue on Nov. 5 and beyond, here are a few things everyone — from everyday voters to cybersecurity teams — should keep in mind:

  •  AI has influenced, and will continue to influence, every area of human life: This technology will shape our future well beyond Election Day. From facial recognition software to self-driving cars to automated weaponry and robotics, AI can support our personal lives, private and public sector organizations, the military, and our national security programs, so it is important to understand how it can be leveraged.
  •  Remain vigilant every day: This is imperative, especially on major election days, because AI continues to evolve and can be used for good, for harm, or with unintended consequences. Don't take everything you see at face value; question it. Remind yourself that sources matter, and trust only reputable ones you recognize. At the same time, be wary of the onslaught of emails, text messages, phone calls, and social media posts during election season. A single click could give bad actors the access they need to take advantage of you or your network.
  •  Prioritize AI education: The cybersecurity industry, and all organizations more broadly, must invest in AI literacy to ensure AI is an enabler rather than a burden, hindrance, or threat. By better understanding how AI works, how it can be used, and what best practices apply, we can stay on top of the latest developments and better detect and respond to AI-powered threats targeting everyday Americans and organizations alike.

Staying Vigilant on Election Day and Beyond

It is critical to stay on top of AI-generated threats this election cycle. But the conversation about AI’s role in deceptive campaigns on Election Day should also serve as a reminder that the same tactics show up in everyday scenarios.

It is imperative to consider all facets of AI use, both good and bad. Political parties, malicious actors attempting to influence U.S. politics, and anyone in the general public can use AI platforms to support a political agenda, destabilize this year's election, sway opinions, or cause chaos. I recommend watching for overly exaggerated or emotion-evoking statements, investigating sources for credibility and veracity, and doing your best to maintain a balanced approach to information about elections and candidates. This heightened sense of AI-awareness shouldn't end on Nov. 5.

About the Author

Jessica Hetrick | Vice President of Services at Optiv + ClearShark.

Jessica Hetrick is Vice President of Services at Optiv + ClearShark. She is a senior cybersecurity leader with over a decade of experience in crisis management, incident response, and security operations. Before joining Optiv, Hetrick worked on criminal and national security cyber investigations at the FBI, supported operations for digital innovation at the CIA, and directed global incident response teams at Cisco. As a strategic leader, she has created, managed, and led cybersecurity programs for global companies.