McAfee study reveals concerns about the impact of AI-generated deepfakes during critical election year
SAN JOSE, Calif.-- McAfee, a global leader in online protection, today released new research exploring the impact artificial intelligence (AI) and the rise of deepfakes are having on consumers during elections. The data, from research conducted in early 2024 with 7,000 people globally, reveals that nearly 1 in 4 Americans (23%) said they recently came across a political deepfake they later discovered to be fake.
The actual number of people exposed to political and other deepfakes is likely much higher, given that many Americans cannot tell what is real from what is fake, thanks to the sophistication of AI technologies.
Misinformation and disinformation emerged as key concerns for Americans, with the recent incident involving a fake robocall from President Joe Biden serving as an example of what could become a widespread issue. When asked about the most worrying potential uses of deepfakes, election-related topics were front and center. Specifically, 43% said influencing elections, 37% said undermining public trust in media, 43% said impersonating public figures – for example, politicians or well-known media figures – and 31% said distorting historical facts.
“It’s not only adversarial governments creating deepfakes this election season; it is now something anyone can do in an afternoon. The tools to create cloned audio and deepfake video are readily available and take only a few hours to master, and it takes just seconds to convince you that it’s all real. The ease with which AI can manipulate voices and visuals raises critical questions about the authenticity of content, particularly during a critical election year. In many ways, democracy is on the ballot this year thanks to AI,” said Steve Grobman, McAfee’s Chief Technology Officer.
“The good news is that consumers can take proactive steps to stay informed and safeguard themselves against misinformation, disinformation and deepfake scams. This election season, we encourage consumers to maintain a healthy sense of skepticism. Seeing is no longer believing, so ask yourself some questions: What’s the source of this content? How reputable is it? Does this video or information seem likely? Go one step further and use AI to beat AI – from robust detection, such as that offered by McAfee’s deepfake audio detection technology, to online protection that uses AI to analyze and block dangerous links in text messages, social media, or web browsers to help protect your privacy, identity and personal information.”
Consumers are increasingly concerned about telling truth from fiction.
In a world where AI-generated content is widely available and capable of creating realistic visual and audio content, seeing is no longer believing. Consumers can no longer trust their own eyes and instincts when discerning real from fake news. In fact:
- Two in three (66%) people are more concerned about deepfakes than they were a year ago.
- More than half (53%) of respondents say AI has made it harder to spot online scams.
- The vast majority (72%) of American social media users find it difficult to spot AI-generated content such as fake news and scams.
- Just 27% of people feel confident they would be able to identify if a call from a friend or loved one was in fact real or AI-generated.
Election season is heating up, and so are audio deepfakes.
As the political landscape heats up during a polarizing election year, so do concerns about deepfake technology. If people can be fooled by AI-generated voice clones of loved ones or celebrities, AI-generated audio impersonating a political figure could significantly impact political discourse and election outcomes. McAfee survey results show that:
- In the past 12 months, 43% of people say they’ve seen deepfake content, 26% of people have encountered a deepfake scam, and 9% have been a victim of a deepfake scam.
- Of the people who encountered or were the victim of a deepfake scam:
- Nearly 1 in 3 (31%) said they have experienced some kind of AI voice scam (for example, received a call, voicemail, or voice note that sounded like a friend or loved one) that they believed was actually a voice clone.
- Nearly 1 in 4 (23%) said they came across a video, image, or recording of a political candidate – an impersonation of a public figure – and thought it was real at first.
- 40% said they came across a video, image, or recording of a celebrity and thought it was real.
How to stay safe and promote information integrity.
- Verify sources before sharing information. Use fact-checking tools and reputable news sources to validate information before passing it along to your friends and family.
- Be cautious of distorted images. Fabricated images and videos aren’t perfect. If you look closely, you can often spot the difference between real and fake. For example, AI-created art often adds extra fingers or creates faces that look blurry.
- Listen for robotic voices. Most politicians are expert public speakers, so genuine speeches are likely to sound professional and rehearsed. AI voices often make awkward pauses, clip words short, or put unnatural emphasis in the wrong places.
- Keep an eye out for emotionally charged content. While politics undoubtedly touches on some sensitive topics, if you see a post or “news report” that makes you incredibly angry or very sad, step away. Much like phishing emails that urge readers to act without thinking, fake news reports stir up a frenzy to sway your thinking.
- Invest in tools to help identify online scams. McAfee’s portfolio of products includes innovative protection features, such as McAfee Scam Protection, which detects and protects you in real time from never-before-seen threats and scams – whether that’s dangerous links shared in text messages, email, search results, or social media. In addition, McAfee recently announced that deepfake detection is on the horizon, furthering McAfee’s commitment to use AI to fight AI scams and help arm consumers with the ability to detect deepfakes.