It only took a casual stroll around ISC West the past couple of years to realize how much Artificial Intelligence is impacting security; however, there is a darker side to AI – not the Arnold Schwarzenegger type of dark side, but something subtler: AI now enables the manipulation of video footage in ways that are nearly indiscernible to the average person. It is called a “deepfake,” and in a world as dependent on video as ours – a world that certainly includes our industry – the far-reaching ramifications are anything but fake.
According to the Brookings Institution (www.brookings.edu), deepfakes are videos constructed using sophisticated AI technology to make a person appear to say or do something they never said or did. The Brookings article cites examples such as videos of political candidates manipulated so they appear to say things that could harm their chances for election.
It isn’t hard to make the leap to deepfake surveillance video, is it? In an industry where video footage is often used as evidence, trust is obviously paramount. As the Brookings article points out: “As we become more attuned to the existence of deepfakes, there is also a subsequent, corollary effect: they undermine our trust in all videos, including those that are genuine...We can no longer be sure of what is real and what is not.”
In the end, it is the ultimate threat – a technology that effectively chops out one of the three legs of the security technology stool, video (the others being access control and alarms). We all know how well a two-legged stool works.
While deepfakes are a new and powerful method of video manipulation, video tampering itself is not a new threat for the security industry. Digital evidence protection and chain of custody have been major concerns for a while; in fact, most leading VMS providers have taken their own unique approaches to digital evidence preservation, and those protections come standard with their products. Video footage is encrypted by default, and it usually takes a special player to even review the footage.
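Implementations vary by vendor, but a common building block behind these protections is a cryptographic signature attached to exported footage, so any later alteration is detectable. Here is a minimal sketch in Python of that general idea – the file path and shared-key handling are hypothetical, and real VMS evidence systems layer on encryption, timestamps and audit trails:

```python
import hashlib
import hmac

# Hypothetical shared secret for illustration only; a real system
# would use managed keys (e.g., per-site keys in secure hardware),
# never a hard-coded value.
EVIDENCE_KEY = b"replace-with-a-managed-secret"

def sign_export(video_path: str) -> str:
    """Compute an HMAC-SHA256 tag over an exported video file."""
    digest = hmac.new(EVIDENCE_KEY, digestmod=hashlib.sha256)
    with open(video_path, "rb") as f:
        # Stream the file in chunks so large exports don't exhaust memory
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_export(video_path: str, expected_tag: str) -> bool:
    """Return True only if the file is byte-for-byte unchanged."""
    return hmac.compare_digest(sign_export(video_path), expected_tag)

# Usage: tag = sign_export("export.mp4") at export time, then
# verify_export("export.mp4", tag) before footage is admitted as evidence.
```

Even this bare-bones scheme means a deepfaked replacement of the exported file would fail verification; the hard problems in practice are key management and protecting footage before it ever leaves the recorder.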
Still, there are plenty of NVRs and other edge recording devices with none of these digital evidence protections. Because such protection is not yet an industry standard, it is critical for integrators to vet the individual recording devices they deploy. Integrators should also be pressing standards bodies such as SIA and ONVIF to make it one.
There’s another aspect: Think beyond surveillance and put yourself in your customer’s shoes – a corporate security director, in this case. You open your email and see a strange message containing a very real-looking video of your CEO saying something she would never utter – something that would cause significant damage to the individual, the company and the brand. Attached is a note: pay us in Bitcoin, or this video gets released on your company’s Twitter feed. Move over, ransomware.
Deepfakes as a mainstream threat are still slightly ahead of our time, but not by much. The Brookings article cites researchers at several universities who are – welcome to the rabbit hole – using AI to detect the very AI used to create digital video manipulations such as face-swapping, scaling, rotation and splicing.
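To make the idea of “AI detecting AI” concrete, here is a purely illustrative sketch in Python (using PyTorch) of a frame-level classifier that scores a video frame as real or manipulated. The architecture, input size and class labels are assumptions made for illustration – the university research Brookings cites uses far more sophisticated, and actually trained, models:

```python
import torch
import torch.nn as nn

# Illustrative binary classifier: real vs. manipulated frame.
# The layer sizes, 3x128x128 input and any training data are
# assumptions, not the cited researchers' actual models.
class FrameForensicsNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 2),           # logits: [real, fake]
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FrameForensicsNet()
frame = torch.randn(1, 3, 128, 128)               # stand-in video frame
probs = torch.softmax(model(frame), dim=1)        # untrained: ~50/50
print(probs)
```

Untrained, the output is meaningless; the point is only the shape of the approach – a learned model ingesting frames and emitting a tampering score.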
Let’s hope the good guys stay ahead of the bad guys on this one.
Paul Rothman is Editor-in-Chief of Security Business magazine. Email him your comments and questions at [email protected]. Access the current issue, full archives and apply for a free subscription at www.securitybusinessmag.com.