AI and Access Control, Perfect Together?

Dec. 9, 2024
Rob Rowe, the head of the HID AI Lab, shares his valuable insights and cautious perspectives as the security industry continues to harness AI's potential

Integrating artificial intelligence (AI) into security systems is a journey filled with optimism for its capabilities and wariness of the unknown. Many are yet to fully embrace this exciting technological revolution in electronic access control, and the mainstream public is choosing to tread cautiously.

At present, a mere 22% of end users are utilizing AI to enhance the accuracy of threat detection and prediction in their security programs. Of these, 44% leverage AI for data analytics, as revealed by the recent HID 2024 State of the Security Industry Report. This comprehensive report, based on responses from over 2,600 partners, end users, and security and IT personnel worldwide, across various job titles and organization sizes representing more than 11 industries, also indicates that 35% of end users plan to test or implement AI capabilities in the next three to five years.

In addition to analytics, 11% said they are using AI-enabled RFID devices, 15% are using AI-enabled biometrics, and 18% have AI supporting their physical security solutions.

“AI is certainly a hot topic, and it is good to see the enthusiasm and the natural questions about how it can be applied [within security] and what we should be doing,” says Rob Rowe, Ph.D., who is VP of the AI and Machine Learning (ML) Lab at HID, which has been in existence since 2018. “Looking at that 35%, though, I have to wonder why that number isn’t the remainder of the 22% – so why isn't it 78% … those other folks?”

As Rowe aptly points out, AI's potential in security is vast and diverse. It's not just about enhancing human tasks or identifying anomalies; it's also about using AI for data analytics to uncover trends, patterns, and anomalies that are invisible to the human eye. Furthermore, AI-powered analytics can identify low- and high-risk scenarios and facilitate automated risk-based decision-making, thereby revolutionizing the security landscape.

“Looking at it abstractly, security products, as you know, are always a tradeoff between security and convenience,” he explains. “And you can define curves showing that the more security you have, the more inconvenient it becomes for the authorized user, and the more convenient it is for the authorized user, the less secure it is against the bad guys. What AI does, rather than riding along that curve, or picking different points on that curve, is allow you to shift the curve and get greater security and convenience.”

He continues, “AI is an enormous motivation to move the curve instead of just choosing optimum points. And then, of course, there are cost efficiencies, reductions of latency … all those sorts of things that are general across every industry, not just the security industry.”

The following interview looks at what Rowe and his team are researching at the lab and the role of AI in security within the next 5-10 years. It also covers his concerns about negative stories surrounding AI, such as recent instances where inaccuracies and false information were provided by Google Gemini, and the possibility that legislation arising from negative press or outcomes could hinder progress and research within security and elsewhere.

Q: Please talk about your role at the AI/ML Lab.

Rowe: The role for my team and me is really doing advanced R&D. Most of the projects are on a three- to five-year time horizon, so we're not driven by the market trends of the moment. We're driven more by looking at that time horizon and saying, what do we see five years from now, and what do we need to do to be in a place to meet it?

My team works out of the CTO's office (Ramesh Songukrishnasamy is HID's CTO and SVP of Engineering), so we get involved in different business areas and different internal functions. We get involved with activities at the parent company level, the ASSA ABLOY level, so we move throughout the organization at all levels, partnering on projects across all different applications, including some products. If there’s a business problem and lots of data, we’d love to get involved in those situations.

Q: Can you give an example of a particular product that came out of the lab and is now available?

Rowe: HID just introduced a multispectral facial recognition system, and my team developed the earliest versions of it and did the early prototyping. I have been in the biometric industry for several decades. I started a company focused on multispectral fingerprint sensors, which HID acquired, and that technology was the foundation of the new multispectral facial recognition system.

Q: Expanding on that last question, please discuss what you are working on at the lab and how some of that work will manifest in other new products and solutions.

Rowe: Getting back to the conversation we started with security versus convenience, we are working in areas that increase convenience – something called intent detection. So, being able to understand not only when a person is close to a door, but when they're close to a door and intend to go through it, is important for increasing convenience and avoiding security issues, as you can imagine.

For example, let's say you have a security system that automatically unlocks the door when an authorized person intends to go through it. However, instead of properly recognizing the intention to pass through the door, the system looks at proximity. So every time an authorized person walks down a hallway, all the doors unlock, which is not a very secure system. So, we focus on bringing security and convenience together through sophisticated intent detection.
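Rowe doesn't describe HID's actual algorithm, but the distinction he draws between proximity and intent can be illustrated with a toy heuristic: unlock only when a person is both near the door and moving toward it, so walking past down the hallway does nothing. The `intends_to_enter` helper and every threshold in this sketch are hypothetical, for illustration only.

```python
import math

def intends_to_enter(pos, velocity, door_pos, max_dist=3.0,
                     min_speed=0.2, max_angle_deg=30.0):
    """Toy intent heuristic: the person 'intends' to enter only if they
    are near the door AND heading toward it, not merely walking past."""
    dx, dy = door_pos[0] - pos[0], door_pos[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_dist:
        return False                      # too far away
    speed = math.hypot(*velocity)
    if speed < min_speed:
        return False                      # standing still, no approach
    # Angle between the direction of travel and the direction to the door
    cos_a = (velocity[0] * dx + velocity[1] * dy) / (speed * dist)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle <= max_angle_deg

# Walking straight toward the door vs. walking past it down the hallway
print(intends_to_enter((0, 2), (0, -1), (0, 0)))   # True: heading at the door
print(intends_to_enter((0, 2), (1, 0), (0, 0)))    # False: walking past
```

A real system would learn these decision boundaries from tracking data rather than hard-coding thresholds, but the inputs (position, heading, speed relative to the door) are the same.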

We are also continuing the journey of fusing mobile devices with physical access systems, which is a very ripe area for work for my team and others. That computer and sensor network that you carry around in your pocket has a lot of valuable information that can be combined with physical access control systems in numerous ways.

Tangential to physical security, although it touches on it, is real-time location services – being able to use an RFID tag, for example, to identify a person or an asset, such as in hospital environments – identifying where doctors and nurses are and where important equipment or even patients are. In that area, we are embarking on deploying state-of-the-art AI methods to increase position estimation accuracy and simultaneously decrease latency.

This touches security in various ways, especially with emergency notification, where you want to know with high certainty and quickly where that person is so you can get the right resources to that area as fast as possible. That's where we're seeing real gains in introducing artificial intelligence and moving away from some of the traditional position estimation methods.

Q: Is that because AI/ML can process all that data faster? And is AI having the most success when there is big data?

Rowe: That's part of it and contributes to the latency reduction. However, the other thing AI can do is sort through more data that might be discrepant under classical assumptions.

Many classically used algorithms assume that RF (radio frequency) signals have certain characteristics. That's not the case when you get into the real world, where metal girders and infrastructure distort the RF signals. Classic assumptions break down there, but AI can take those real-world characteristics into account and give you much better accuracy.
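To make the "classical assumption" concrete: a common starting point for RF ranging is the log-distance path-loss model, which converts received signal strength (RSSI) into a distance estimate assuming clean, free-space-like propagation. The parameters below are illustrative defaults, and this idealized relationship is exactly what metal girders and multipath break, which is where learned corrections come in.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Classical log-distance path-loss model.

    tx_power_dbm  : assumed RSSI measured at 1 m from the transmitter
    path_loss_exp : ~2.0 in free space; real buildings vary widely,
                    which is why this simple inversion goes wrong indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

print(round(rssi_to_distance(-60.0), 2))  # ~10 m under the ideal model
print(round(rssi_to_distance(-40.0), 2))  # ~1 m under the ideal model
```

A data-driven approach would instead learn the mapping from raw signal features to position directly from measurements taken in the actual building, absorbing the distortions this formula ignores.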

And yes, it is about data volume, so larger enterprise organizations across different verticals can really benefit from AI because of the volume of data they produce, such as companies with hundreds or thousands of employees, for example.

Q: Can data mining using AI/ML be beneficial on a smaller scale?

Rowe: It depends on the specific instance we're discussing, but AI can certainly help.

I’ll give you an example: data coming from different smaller systems in multiple formats that you’d like to combine. Historically, you'd have to manually figure out how those data streams could be combined uniformly. Now, though, you can use AI to do that sort of massaging of the data and create uniform data streams from disparate systems. In that sense, it can really help with the heavy lifting.

Q: How long before security can use AI to create a more predictive and preventative approach to securing buildings, assets, and people to alert us of events before they happen based on prior data and information? Is this already happening, or is it the holy grail?

Rowe: Certainly, people are doing it, though not universally, and I think, looking forward in time, more and more will. Prediction is a little bit tricky because there's always the unexpected: if you're planning for some set of scenarios, invariably another scenario comes along in the real world that you hadn't planned for. We talked about anomaly detection, where you try to capture all the normal things and then identify what's abnormal. Right now, I think anomaly detection, in general, is one of the most important tools in the AI arsenal.
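The "capture normal, flag abnormal" idea can be sketched in its very simplest statistical form: score each hour's access-event volume against the overall distribution and flag outliers. Production systems use far richer models of "normal," and the counts and threshold below are invented for illustration.

```python
import statistics

def anomalous_hours(hourly_counts, threshold=3.0):
    """Flag hours whose access-event volume deviates strongly from the
    norm, using a z-score: the simplest version of 'learn normal, flag
    abnormal.' Real systems model schedules, seasonality, and more."""
    mean = statistics.mean(hourly_counts)
    stdev = statistics.stdev(hourly_counts)
    return [i for i, c in enumerate(hourly_counts)
            if stdev > 0 and abs(c - mean) / stdev > threshold]

# Badge events per hour; index 5 spikes far above the usual ~50
counts = [48, 52, 50, 51, 49, 300, 50, 47, 53, 50, 49, 51]
print(anomalous_hours(counts))  # flags only hour 5, the spike
```

The appeal Rowe points to is that an anomaly detector never needed the spike scenario spelled out in advance; anything sufficiently unlike the learned normal is surfaced for a human to judge.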

Q: And does that still involve training on a machine-learning level?

Rowe: Yes, machine learning is absolutely critical, especially given the use cases and the data volumes involved. Somebody has to do the training, but today, with so-called foundation models or frontier models, big tech has already done that training and makes the models available for others to adapt. That initial training is onerous; people throw around numbers like $100 million to train a language model, which most companies can’t do. But by taking an existing trained model and adapting it to specific purposes, we can apply it.

The other thing that's going on, particularly in the open-source community, is that these foundation models are getting better. They are getting smaller, occupying less memory and fewer computational resources, so adopting and training them doesn't take nearly as much data. The training requirements shrink as these foundation models, cloud services, and open-source models become smaller.

The other thing that's happening is a move to the edge, with sophisticated computations occurring on the edge device rather than going back to some cloud service somewhere. We're increasingly seeing the confluence of smaller, more powerful local models with more capable edge-device computational units, the neural processing units. Being able to combine all that allows us to bring more capabilities to people, and doing it at the edge has a variety of benefits.

Q: What are your thoughts on some of the bad press Google was getting for its Gemini release and giving bad information? Do negative stories such as this, or with ChatGPT, Claude, etc., impact people’s trust in AI and using it?

Rowe: Right now, with ChatGPT, Gemini, and Claude, there is a growing awareness for sure, and with that comes a growing concern. The most concrete manifestation of that concern is the regulatory environment, where we follow regulations that different regions adopt, which aren't always well aligned. There are different regulations in different places, touching on various aspects of AI systems, so not only are they evolving in time, but they're different per region. That makes for a complex environment to introduce products. We're thinking about that even in the initial stages, so how do we meet privacy requirements? How do we meet informed consent requirements? How do we meet all these regulatory standards that are coming into view?

Q: Where do you see AI going in the next 5-10 years, especially regarding security and access control? Do you have any big predictions or cautions/concerns?

Rowe: I think routine, tedious tasks in access control, such as some poor person sitting in a room monitoring multiple video feeds … there's no reason why that shouldn't go away, and the technology, if not there today, very soon will be able to handle that sort of routine task. So, with routine monitoring, we'll see increased AI coming in, freeing up people to respond better to the alerts AI generates.

One underappreciated area is the impact of large language models on the user interface. Language models are defining the next user interface; we're entering a new epoch, and we're just at the earliest point.

As you mentioned, my concern would be bad press. If somebody, in the security industry for instance, implements something poorly and it shines a negative light on that technology area, then other companies that implemented it properly are adversely affected.

About the Author

Paul Ragusa

Paul Ragusa is senior editor for Locksmith Ledger International, an Endeavor Business Media Security publication.

[email protected]

www.locksmithledger.com