Forget what you thought you knew about insidious cyber threats and network intrusions. Temper your angst over phishing, Trojans, botnets, ransomware and distributed denial of service (DDoS) attacks. Ease off the meds when it comes to dealing with wiper attacks, intellectual property theft, spyware/malware and drive-by downloads. The bad actors have stepped up their game with perhaps the most potentially devastating cyber ruse of all – the high-tech “deepfake” video. And you can rest assured these videos will not be contending for any Emmy awards.
Deepfake videos are a relatively new threat to the cybersecurity world, although the concept of altered video has been around for quite some time, particularly in entertainment. Deepfake videos are the product of new internet technology that gives almost anyone the ability to alter reality, manipulating subjects to say anything the hacker wants, from the ludicrous and inflammatory to the downright incriminating. Facial mapping and artificial intelligence are the foundation of this powerful new threat, and the resulting forgeries appear so real they are almost impossible to spot. The potential security impact of these altered videos has both the federal government and the U.S. intelligence community on high alert as the midterm elections approach and the current administration jousts with Russia, North Korea and China.
“This started several years ago with fake videos, and then it turned into deepfake videos, and it’s currently progressing to deep portrait videos,” says Bob Anderson, a principal in The Chertoff Group’s global Strategic Advisory Services and a former national security executive and Executive Assistant Director with the FBI. “This started mainly with social media. Adversaries started attacking rich and famous people, movie stars and others by putting their head on someone else’s body in compromising positions.”
Anderson contends that deepfake videos have progressed to using a “source” individual whose real facial movements – blinking, eye movement and head rotation – are captured and then overlaid onto a 3-D model of the person being impersonated. Advances in artificial intelligence (AI), he says, have made these deepfake portraits extremely realistic-looking impersonations.
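To make the capture step Anderson describes concrete, here is a minimal sketch of per-frame facial-landmark tracking using the open-source MediaPipe library. The webcam input and the printed nose-tip landmark are illustrative assumptions for demonstration, not a reconstruction of any actual deepfake toolchain.

```python
# Minimal sketch: track a "source" performer's facial landmarks frame by
# frame -- the blinking, eye movement and head rotation that deep-portrait
# systems transfer onto a 3-D model of the target.
# Assumes the open-source mediapipe and opencv-python packages are installed.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,  # video mode: track landmarks across frames
    max_num_faces=1,
    refine_landmarks=True,    # adds iris landmarks, useful for eye movement
)

cap = cv2.VideoCapture(0)     # webcam stands in for the source footage
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        pts = results.multi_face_landmarks[0].landmark
        # 468+ normalized (x, y, z) points per frame -- enough to recover
        # blinks, gaze and head pose for transfer onto a target model
        nose = pts[1]  # index 1 is the nose tip in MediaPipe's face mesh
        print(f"nose tip: ({nose.x:.3f}, {nose.y:.3f}, {nose.z:.3f})")
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to stop
        break
cap.release()
```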
“This technology could potentially lead to huge criminal and national security threats to the United States,” adds Anderson.
In a recent interview with the Associated Press, Hany Farid, a digital forensics expert at Dartmouth College, said he fully expects deepfake videos to make their presence known in this fall’s midterm elections and certainly in the national elections two years from now. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.”
So what is the real danger? The potential fallout from deepfake videos is almost endless, ranging from fabricated videos of a politician’s political leanings and policy positions to national leaders staged in compromising and incriminating scenarios. From a national defense perspective, imagine the chaos that would ensue if a military official’s image and speech were manipulated into a threatening tirade aimed at a dangerous nation-state, or if a rogue nation splashed a phony war-crime video attributed to the U.S. military across social media worldwide. U.S. financial security might also take a hit should a staged video create panic on Wall Street. The threats are real, and bad actors are strategically pursuing them right now.
“This is a potentially huge national security threat for a variety of reasons. Picture telecommunication or video conference calls into which an adversary could interject a fake deep-portrait video of a three-star general or the CEO of a company, directing members of that company or organization to partake in detrimental national security or criminal actions,” Anderson says. “Nation-states like Russia, China and Iran could potentially utilize this technology for a variety of counterintelligence, corporate espionage, economic espionage and political influence campaigns across the United States.”
In fact, Michael McFaul, the U.S. ambassador to Russia from 2012 to 2014 during the Obama administration, fell victim to Russian disinformation videos that inserted his facial image into salacious photographs and spliced snippets of his speeches into a damning montage of altered statements and manipulated words that seemed to compromise him.
During a recent interview on CBC’s The Current radio broadcast, McFaul said: “A video circulated that suggested that I was a pedophile. What do you say to that? You go on Twitter and argue you’re not a pedophile? I mean, there’s no excuse for that, no defense, so it’s effective. Disinformation is effective. Propaganda works.”
The most disarming elements of the evolving deepfake threat are its sudden mass accessibility and the rapidly improving technology that makes a fake ever harder for forensic experts to spot. Forensic professionals say the security industry is reaching a point where, without the proper tools, deepfake videos will be almost impossible to distinguish from authentic ones.
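As one concrete illustration of that forensic tooling: early deepfakes often blinked at unnatural rates, so analysts have screened footage by computing an eye aspect ratio (EAR) over tracked eye landmarks and counting blinks. Below is a minimal sketch of that heuristic, assuming six-point eye landmarks from a tracker like the one sketched earlier; the thresholds and the synthetic demo data are illustrative assumptions, not a production detector.

```python
# Minimal sketch of one classic forensic heuristic: flag footage whose blink
# rate looks unnatural by computing the eye aspect ratio (EAR) per frame.
# Landmark input and thresholds are illustrative assumptions only.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for six (x, y) eye landmarks: corner, two top, corner, two bottom."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate(ear_per_frame: list[float], fps: float,
               threshold: float = 0.21) -> float:
    """Blinks per minute: count drops of EAR below threshold that then recover."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes else 0.0

if __name__ == "__main__":
    # Synthetic demo: 30 s of open eyes (EAR ~0.30) with three brief blinks
    ears = [0.30] * 900
    for start in (100, 400, 700):
        ears[start:start + 4] = [0.15] * 4
    rate = blink_rate(ears, fps=30.0)  # -> 6.0 blinks/min
    # A typical adult blinks roughly 15-20 times per minute; footage far
    # outside that band merits a closer forensic look (a screen, not proof).
    print(f"blink rate: {rate:.1f}/min", "-> suspicious" if rate < 8 else "")
```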
“It has become a frightening reality that deepfake videos are a clear and present danger. I think any type of current cyber and artificial intelligence threats that could be weaponized by an adversary – whether it’s deepfake videos, sophisticated malware or ransomware attacks – are significant national security threats to the United States and to private sector companies if not addressed,” says Anderson.
As with any security threat, creating awareness of the risk can ignite solid mitigation policy. Educating senior-level personnel within your organization that bad actors are increasingly using deepfake videos, and establishing organizational protocols for handling suspected videos, may help avert disaster. The national security implications of deepfake technology have also spurred the U.S. Defense Advanced Research Projects Agency (DARPA) to create a program that is developing defense technologies to root out fake images and phony videos.
“One of the easiest things (though one not addressed by most corporations) is to educate and train your employees on modern-day cyber and artificial intelligence threats to your business,” concludes Anderson.
About the author: Steve Lasky is the Editorial Director of SouthComm Security Media, which includes print publications Security Technology Executive, Security Dealer & Integrator, Locksmith Ledger Int’l and the world’s most trafficked security web portal SecurityInfoWatch.com. He is a 30-year veteran of the security industry and a 27-year member of ASIS. [email protected]