Real Words or Buzzwords?: AI Model

May 16, 2023
One key factor affects the accuracy, speed and computational requirements of AI.

Editor’s note: This is the 67th article in the “Real Words or Buzzwords?” series about how real words become empty words and stifle technology progress.

AI software has revolutionized traditional security devices and systems, transforming them into intelligent devices and high-performing systems.

These AI-enabled security devices and systems excel in analyzing input data, extracting meaningful insights and quickly transmitting messages or control signals to other devices and security stakeholders.

While non-AI security products may also perform similar tasks, the question arises: What sets AI apart?

The distinguishing factor lies in the AI software running on these devices and systems, marking a significant departure from earlier software generations.

At the core of AI software lies an AI model, a software and data framework that represents or approximates certain aspects of a real-world phenomenon, encapsulating the knowledge, patterns, or relationships learned from data during the AI training process.

The AI model is what gives AI software two critical new capabilities: massively parallel computation and the ability to learn. This model-based approach empowers AI to leverage parallel processing, allowing for vast computational power and accelerated data analysis.

Furthermore, AI software's inherent capacity to learn from training and real-world experiences enables continuous improvement and adaptability, making it a game changer in the field of security.
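To make the parallel-processing point concrete, here is a toy illustration in Python (my own sketch, with made-up numbers, not code from any security product). One neural-network layer processes a thousand inputs at once as a single matrix multiplication, which is exactly the kind of operation that GPUs execute in parallel:

import numpy as np

inputs = np.random.rand(1000, 512)   # 1,000 input samples, 512 features each
weights = np.random.rand(512, 256)   # one layer's learned parameters
outputs = inputs @ weights           # all 1,000 samples processed in one operation
print(outputs.shape)                 # (1000, 256)

Scaled up to millions of parameters and thousands of such operations per second, this is where AI's vast computational power comes from.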

The Impact of AI

There is a lot being said about the impact of AI in the media, some of it very worrisome.

There is also a lot being said about AI-enabled products in the physical security industry, and the terms “AI” and “intelligence” have been tossed about in product marketing as if AI were a new and special sauce that – when poured on any type of security product – ‘automagically’ makes it a new and wonderful product.

Just because a product contains “machine learning” or “deep learning” capabilities, we’re supposed to consider it significantly better and automatically worth investing in. At least, that seems like the underlying message.

We’ve all experienced the benefits of AI, and we expect AI benefits to “keep on coming.”

According to FedScoop, “U.S. government spending on artificial intelligence (AI) contracts hit $3.3 billion in fiscal year 2022 according to data in a new study published by Stanford University.”

Statista reports that total annual global corporate investment in artificial intelligence (AI) reached almost $94 billion, and that business investment in AI has increased more than sixfold since 2016.

We can expect AI to significantly change the security industry, just as it has been changing most other industries. For security practitioners, the most important question is: will it be a game changer for your security program and your organization’s security risk picture?

For security manufacturers, integrators and consultants, the question is: will you be able to help your customers and clients make that change?

Making Sense of AI

As I said above, the AI model is what gives AI software two critical new capabilities: massively parallel computation and the ability to learn.

Fundamental to those capabilities is data. AI models are software and data frameworks, and there are many variations of AI models, each designed and built (coded) for a different purpose.

The basic concepts and functions of AI software are completely understandable. But because they are so new, different and complex, they have been given special labels that are not self-explanatory.

On their own, for example, “machine learning” and “deep learning” are not self-explanatory, but – like other AI terms – they can be defined and explained. My article Artificial Intelligence, Machine Learning and Deep Learning does so.

There is a large and growing world of AI terminology, nearly all of which you don’t need to know because the terms relate to AI software architecture and coding – not to the application of AI to security operations or security administration.

But it’s worth understanding the basics of AI models, if only to take away some of the mystery that surrounds AI.

An AI Model

Just as humans learn by training and experience, so does an AI model. AI models are first trained on specific data, then in their deployment environment they gain experience (learn on their own).

Note that in the AI model description below, I have linked to Wikipedia articles whose first sentence typically defines the term. It is not necessary to delve into those definitions to read the text below and understand the concept of the model. Just know that they are AI technical terms with their own definitions.

Here are the main elements that make up an AI model (a short code sketch after the list shows them all working together):

Architecture/Model Type: This refers to the specific type or design of the model, such as recurrent neural networks (RNNs), which are often used for handwriting or speech analysis; convolutional neural networks (CNNs), which are often used to process visual imagery; or transformer models, deep learning models primarily used for natural language processing – as in ChatGPT – and computer vision.

The architecture determines how the model is structured and how it processes and analyzes data.

Parameters: Parameters are the learnable variables within the model that are adjusted during the training process. These parameters define the model's behavior and enable it to make predictions or perform specific tasks.

Training Data: AI models require large amounts of labeled data to learn from. Training data consists of input samples paired with corresponding correct or desired outputs. The model learns to generalize patterns and relationships from this data during the training process.

Training Process: The model undergoes a training phase where it learns from the training data. This involves iteratively feeding the data through the model, adjusting the parameters, and optimizing the model's performance based on a defined objective or loss function.

Inference/Testing: Once trained, the model is used for inference or testing. Inference refers to making conclusions on the basis of evidence and reasoning – or in the case of AI, on the basis of data and computation. In this phase, the trained model is applied to new, unseen data, using its learned patterns and relationships to make predictions, generate outputs, or perform its designed task.

Output: The output of an AI model depends on its purpose and the task it is designed to perform. It could be a classification label, a prediction, a generated text, an action, or any other relevant output based on the model's intended function.
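To take a little more of the mystery away, here is a minimal sketch showing all of those elements in a few lines of Python, written with the PyTorch library. The tiny network and the random “training data” are hypothetical stand-ins for illustration only, not code from any actual security product:

import torch
from torch import nn

# Architecture/Model Type: a small feed-forward neural network classifier.
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 2),                    # two output classes
)

# Parameters: the learnable variables adjusted during training.
print(sum(p.numel() for p in model.parameters()), "learnable parameters")

# Training Data: input samples paired with correct (labeled) outputs.
x_train = torch.randn(100, 4)            # 100 samples, 4 features each
y_train = torch.randint(0, 2, (100,))    # the correct label for each sample

# Training Process: iteratively adjust the parameters to reduce the loss.
loss_fn = nn.CrossEntropyLoss()          # the objective (loss) function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()                      # compute the parameter adjustments
    optimizer.step()                     # apply them

# Inference/Testing: apply the trained model to new, unseen data.
x_new = torch.randn(1, 4)
with torch.no_grad():
    prediction = model(x_new)

# Output: here, a classification label.
print("predicted class:", prediction.argmax(dim=1).item())

Every production AI model, however large, is built from these same elements; only the architecture, the amount of data and the number of parameters change.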

All of this is completely understandable, but unless you are an AI scientist or work with one, you’ll never need the vocabulary or find it worthwhile to take the time to learn it.

All Models Are Wrong

In 1976, statistician George E.P. Box, commenting on statistical models, wrote that “all models are wrong,” later adding that “some are useful.”

This is an often-quoted statement. There is even a 2,000-word Wikipedia article about it.

In the context of modeling, a “model” refers to a simplified representation of reality used to understand or predict certain phenomena. However, models are inherently simplifications and approximations of complex things, and they cannot fully capture all the intricacies of the real world.

All models have to be wrong in some respects, because a model that fully represented the thing it models would be the thing itself. Models serve an important purpose because they capture aspects of a thing in a way that lets us work with those aspects to make estimates and perform analyses.

Good models are right in the aspects that matter regarding the intended use of the model, and can be wrong in ways that don’t matter. Only certain parts of the model need to be accurate.

Since AI is model-based software, the degree to which it is accurate (right) depends very heavily on the data used to train it. Just as with humans, AI learns based on its training. So, if its training is not right, it won’t come to the right conclusions about its data. This is a weakness that all AI models have.
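A toy demonstration makes the point. This sketch (hypothetical data, using Python and the scikit-learn library) trains the same model twice: once on correctly labeled data, and once on data in which one group of samples has been systematically mislabeled:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a synthetic two-class dataset and split off a test set.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on correct labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Model trained on systematically wrong labels: every training sample
# with a positive first feature is marked as class 0, right or not.
y_bad = y_train.copy()
y_bad[X_train[:, 0] > 0] = 0
noisy = LogisticRegression(max_iter=1000).fit(X_train, y_bad)

print("accuracy, trained on right labels:", clean.score(X_test, y_test))
print("accuracy, trained on wrong labels:", noisy.score(X_test, y_test))

The model trained on bad labels scores noticeably worse, even though the software itself is identical; only the training data changed.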

However, the good news for security devices and systems is that the traditional approaches we have always taken for evaluating and testing security system software and hardware will work with AI-enabled products.

Identify what you require of it, find candidate products, then test them out. For some AI, that means a pilot deployment sufficient to ensure that its training will enable it to learn what it needs to know about its deployment environment to work the way you want it to.

On the other hand, some AI products are simple enough in their functionality that you can test them in the manufacturer’s experience center or check them out at a trade show or an existing customer’s deployment.

AI-Enabled Force Multiplier Effects

A final and important point is that AI-enabled technology brings three force multiplier effects to existing security systems:

•   Increased ROI: AI products augment existing security systems, adding significant new capabilities and enhancing the value of prior security investments.

•   Continuous Improvement: Modern software, including AI, undergoes continuous development, offering ongoing value enhancement over time.

•   Cloud-based AI: Well-designed cloud-based AI systems perform collective learning using technical data received from all customer systems, resulting in improved performance and insights for each individual subscriber that are far greater than could be gained using isolated individual AI systems (one way this can work is sketched below).
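Vendors implement collective learning in different ways, and the details are usually proprietary. As a simplified illustration of the general idea, the Python sketch below uses a federated-averaging style of approach (hypothetical data and a toy model of my own devising): each customer site improves the shared model using only its own local data, and the cloud averages the sites’ improvements into one better model for everyone:

import numpy as np

def local_update(shared_weights, site_data):
    # Hypothetical per-site training step: one gradient-descent nudge of
    # the shared weights toward a least-squares fit of this site's data.
    X, y = site_data
    gradient = X.T @ (X @ shared_weights - y) / len(y)
    return shared_weights - 0.1 * gradient

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])      # the pattern all sites should learn

# Three customer sites, each with local data that never leaves the site.
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

shared = np.zeros(3)                     # the cloud's shared model weights
for _ in range(50):                      # repeated learning rounds
    updates = [local_update(shared, s) for s in sites]
    shared = np.mean(updates, axis=0)    # the cloud averages the site updates

print("learned weights:", shared.round(2))   # converges toward true_w

Each site contributes its learning rather than its raw data, and combining those contributions is what gives every subscriber a better model than it could train alone.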

Getting AI-enabled security products to be a game changer for any security program requires upgrading our security risk assessments and risk treatment analysis to develop risk treatment scenarios that take advantage of what emerging technology can do.

That’s something covered at the Global Security Operations (GSO) event, August 16-17, at the LinkedIn Global Headquarters campus.

About the Author

Ray Bernard, PSP, CHS-III

Ray Bernard, PSP CHS-III, is the principal consultant for Ray Bernard Consulting Services (www.go-rbcs.com), a firm that provides security consulting services for public and private facilities. He has been a frequent contributor to Security Business, SecurityInfoWatch and STE magazine for decades. In 2018, IFSEC Global listed Ray as #12 among the world’s Top 30 Security Thought Leaders. He is the author of the Elsevier book Security Technology Convergence Insights, available on Amazon. Mr. Bernard is an active member of the ASIS member councils for Physical Security and IT Security, and is a member of the Subject Matter Expert Faculty of the Security Executive Council (www.SecurityExecutiveCouncil.com).

Follow him on LinkedIn: www.linkedin.com/in/raybernard

Follow him on Twitter: @RayBernardRBCS.