This article originally appeared in the October 2024 issue of Security Business magazine.
November 2022 changed the technology space by introducing three letters – GPT – which stands for Generative Pre-trained Transformer. OpenAI released its groundbreaking (and pretty much everything-else-breaking) ChatGPT, infusing the technology space with a shot of adrenaline like a large coffee with 100 shots of espresso and gaining more than a million users in just five days.
As we close in on the two-year mark since GPT made that initial splash, we have seen a surge in the adoption of generative AI and large language models (LLMs).
The value of these AI models lies in the sheer volume of data they have been trained on, along with the quality of that data. The more successful generative AI models tend to be those trained within a defined set of parameters, even when they draw on open-source data – the model learns within boundaries set for it, and this constrained approach is the primary one being used in the security industry.
The question is: Is it still a fad, or is it the future? Personally, it feels like it could be either…and both.
Generative AI in Security Tech
The security industry specifically is seeing adoption of generative AI across the board. It is evident in everyday tools, from cool graphics at trade shows to AI transcripts of virtual meetings to general research. It is also becoming evident as security technology vendors embed generative AI directly into their products, both for features available today and for those still on the roadmap.
Here are a couple of (really cool, I think) ways that the security industry is adopting these technologies for the future.
Generative AI in Video: At ISC West 2024, in the AI Pavilion just outside the show floor, Axis Communications was demonstrating a partnership with Microsoft in which a generative AI model provided written descriptions of video scenes.
It is common knowledge that security personnel monitoring video can only focus on a particular scene for about 15 minutes before scene fatigue sets in. In this application, the generative AI “views” the scene and provides a written depiction of it and the motion that occurred, reducing operator fatigue and surfacing only the important information.
While Axis was showing this as an initial partnership, the technology still appears to need some development before a minimum viable release. Still, it changes the old saying that a picture is worth a thousand words – especially if those words are specific enough to depict a scene in 300 words or less.
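To make the concept a little more concrete, here is a rough, hypothetical sketch of how a single video frame could be handed to a vision-capable LLM for a written description. This is not how the Axis/Microsoft integration is actually built – the model name, prompt, and use of the OpenAI Python SDK are simply illustrative assumptions.

```python
# Hypothetical sketch only: describe a single video frame with a multimodal LLM.
# Assumes the OpenAI Python SDK and a vision-capable model; the real
# Axis/Microsoft integration is not public, and the model name here is a placeholder.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_frame(jpeg_path: str) -> str:
    # Encode one captured frame as base64 so it can be sent inline.
    with open(jpeg_path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for whichever vision-capable model is deployed
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this security camera scene and any motion "
                         "or people of interest in 300 words or less."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example: print(describe_frame("lobby_cam_frame.jpg"))
```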
Generative AI Coding: One of the coolest and most useful applications of generative AI I have seen is REKS.ai, a generative AI coding platform designed by the team at Stratorsoft (www.stratorsoft.com and reks.ai). REKS.ai is an internally trained AI platform that uses open-source APIs to write integration code between software platforms.
The primary use case is for REKS to take on integrating information in known databases across security applications – for example, pulling a user database from a visitor management system and assigning access roles to a group or a specific user in the access control system.
This process would normally take a security technician hours of creating or migrating databases; REKS accomplished the task in a matter of seconds.
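For a sense of what that generated glue code might look like, here is a hand-written, hypothetical sketch of the pattern: read users from a visitor management API and assign them an access role through an access control API. The endpoints, field names, and tokens are placeholders – not REKS output and not any vendor’s actual API.

```python
# Hypothetical sketch of the kind of glue code a platform like REKS generates:
# pull a visitor group from a visitor management API and grant each person an
# access role in an access control API. All endpoints and fields are placeholders.
import requests

VMS_API = "https://visitor-mgmt.example.com/api/v1"    # hypothetical endpoint
ACS_API = "https://access-control.example.com/api/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}

def sync_visitors_to_access_role(group: str, role_id: str) -> None:
    # 1. Pull the visitor group from the visitor management system.
    visitors = requests.get(
        f"{VMS_API}/groups/{group}/users", headers=HEADERS, timeout=30
    ).json()

    # 2. Create each user in the access control system with the requested role.
    for person in visitors:
        payload = {
            "name": person["name"],
            "email": person["email"],
            "roles": [role_id],
        }
        requests.post(
            f"{ACS_API}/cardholders", json=payload, headers=HEADERS, timeout=30
        ).raise_for_status()

sync_visitors_to_access_role(group="contractors", role_id="lobby-and-floor-3")
```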
Another use case was to integrate a Video Management System (VMS) and an access control system using their existing APIs. Many end-users are still surprised to find out that the security systems they have purchased can actually be integrated, even if those systems are currently disparate.
REKS makes these integrations happen quickly. Depending on the use case, and on the growing library of code and APIs REKS has access to, the REKS.ai platform is being positioned as an easy button for end-users performing daily database migrations between disparate systems, and as a tool for integrators to quickly program integrations between complex platforms.
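The VMS-to-access-control case follows the same pattern. Below is another hypothetical sketch – again with made-up endpoints rather than any vendor’s real API or REKS-generated code – in which access-denied events from the access control system become bookmarks on the corresponding camera in the VMS.

```python
# Hypothetical sketch of an API-to-API integration between an access control
# system and a VMS: when the ACS reports an access-denied event, bookmark the
# matching camera in the VMS. Endpoints, fields, and IDs are placeholders.
import requests

ACS_API = "https://access-control.example.com/api/v1"  # hypothetical endpoint
VMS_API = "https://vms.example.com/api/v1"              # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}
DOOR_TO_CAMERA = {"front-lobby": "cam-101", "loading-dock": "cam-204"}

def bookmark_door_events() -> None:
    # Pull recent access-denied events from the access control system.
    events = requests.get(
        f"{ACS_API}/events?type=access_denied", headers=HEADERS, timeout=30
    ).json()

    # Create a bookmark on the camera covering each door that reported an event.
    for event in events:
        camera = DOOR_TO_CAMERA.get(event["door"])
        if camera is None:
            continue
        requests.post(
            f"{VMS_API}/cameras/{camera}/bookmarks",
            json={"timestamp": event["timestamp"],
                  "note": f"Access denied at {event['door']}"},
            headers=HEADERS,
            timeout=30,
        ).raise_for_status()
```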
“We are extremely excited about the potential of agentic, generative AI within the physical security space,” says Stratorsoft CEO Adam Groom. “The use-cases around enabling expertise, operationalizing data, and creating a useful integration layer are unlimited; however, it is very important that these generative AI applications are built as very constrained and useable systems within physical security specifically.”
Generative AI coding is not a fringe idea; it is core functionality being built into systems from major brands across the security industry.
The Other Side of Generative AI
The security industry is also trying to keep up with generative AI-enabled technologies designed to defeat security systems – something that feels like it belongs in a Mission: Impossible movie.
Biometric systems have to be upgraded to detect deepfake avatars and stolen voice biometrics – for the latter, as few as 50 words of captured speech are enough to create an entire conversational dictionary for a vocal deepfake.
This is a current and potentially ongoing problem that security professionals will have to address based on the criticality of the systems being implemented.