As law enforcement and other government agencies across the U.S. have increasingly turned to video surveillance and advanced analytics to bolster public safety, civil liberties advocates have decried the deployment of these technologies as an erosion of citizens' privacy. And though many states and municipalities have passed laws governing when, where and how police can use these solutions, few, if any, guidelines have been handed down by the private sector companies that leverage artificial intelligence (AI)-powered algorithms.
Prosegur Security, a provider of manned guarding and security technology installation services, is looking to change that: the company recently issued a new “Responsible Artificial Intelligence Policy” intended to deter the use of these products for nefarious purposes.
As part of this new policy, Prosegur, which operates in 26 countries globally and has more than 160,000 employees, will implement a corporate or local board, depending on where projects that use AI are located, and will also hire a chief ethics officer to oversee compliance. Other requirements established as part of the policy will include:
- Ensuring all projects have respect for human autonomy and that AI systems are designed to “enhance people’s cognitive, social and cultural skills”
- Verifying that the development, deployment and use of AI systems are equitable
- Making all processes involving AI development transparent
- Holding third-party providers accountable to the same standards
According to Mike Dunn, Chief Technology Officer at Prosegur, the threat of misuse exists in any application of technology, and the company wanted to get ahead of potential ethical issues by putting guardrails in place to monitor how its employees leverage AI.
“All technology can be misused,” Dunn says. “It doesn’t matter what the technology is, you never want to see a technology used in a nefarious way.”
Historical Misuse of Security Tech
One of the most famous examples of security tech being misused occurred when the Transportation Security Administration (TSA) installed the first full-body scanners at airport security checkpoints, as there were well-documented incidents involving screeners inappropriately using the portals to leer at the bodies of passengers. As a result, the TSA replaced the highly detailed images generated by the scanners with stick figure diagrams that simply showed where on someone’s body a prohibited item might be located.
“It shows you how something so inherently good can be used for bad (purposes) and AI is no exception to that,” Dunn adds. “We really wanted to be proactive with this and get out in front and make sure that we are using this technology for exactly what it is intended to be used for, no matter what the analytic is.”
Facial recognition, which has come under intense scrutiny in recent years, is another example of an analytic that can provide tremendous security benefits while also being ripe for abuse. In the event of a terrorist act, facial recognition can help authorities identify and track the movements of suspects without having to comb through hours of video footage. Conversely, it has also been used in some cases to discriminate against ethnic minorities and crack down on political dissent.
“We use facial recognition with some of our customers, so we want to make sure that our people are using it to look for, for example, known criminals and not profiling or singling out people and there is no other way to do that than to have a policy in place to make sure the agents are aware of that,” Dunn says. “Second is to closely monitor how it is being used.
“We have what we call our IISOC – integrated international security operations center – a monitoring center that a lot of public companies have, and we heavily monitor our agents,” he continues. “We have a floor supervisor, and they are watching anywhere from three to five people, so they are constantly going around, watching their screens, and offering help – which is, more often than not, what happens with this. Then we have shift supervisors who help watch not only the agents themselves, but the floor supervisors. Also, in our monitoring centers, we have cameras on everything and on every incident that is pulled through we randomly review a small section of that on top of everything as well. We want to make sure that people are logging into a camera for the right reasons.”
Potential Disciplinary Actions
Although the policy is still being developed, Dunn says the company is heavily focused on educating everyone internally about what it will entail and why it is being implemented. Once it has officially been rolled out companywide, Dunn says those found to be in violation will face a range of disciplinary actions, from HR instruction for a minor infraction all the way up to suspension or possible termination for major ones.
With regard to the company's partners, Dunn says that Prosegur currently uses third-party analytics rather than an in-house offering, and that if one of these technology providers were found running afoul of the policy, the company would “cease and desist” using its products immediately.
“In every industry there are always a couple of bad apples. It doesn’t matter if it is a police department or a Fortune 500 company, there is always a chance to have a bad employee,” he says. “We are here to be a security company, not a security vulnerability.”
Joel Griffin is the Editor of SecurityInfoWatch.com and a veteran security journalist. You can reach him at [email protected].