Tech Trends: The Far-Reaching Implications of the EU’s AI Act

Oct. 12, 2023
After breaking ground on privacy regulation with GDPR, the European Union has shifted its focus to Artificial Intelligence, and there is plenty of security tech in the crosshairs

This article originally appeared in the October 2023 issue of Security Business magazine.


What if I told you that what happens in the next 18 months has the potential to drastically change the security industry for the next five to seven years? Do I have your attention?

Most Tech Trends readers are familiar with the European Union’s General Data Protection Regulation (GDPR), which took effect in 2018 and changed overnight how data is transmitted and stored, even for companies not located in the EU. The effects have ranged from the simple annoyance of accepting cookies on websites to costly fines for non-compliance.

Now, the EU has turned its regulatory attention to Artificial Intelligence (AI) and all of its underlying technologies, data collection, and uses. The proposed EU AI Act (https://artificialintelligenceact.eu) aims to lay out a framework for mitigating the risks of AI systems, with the goals of building trustworthy, ethical AI and protecting the fundamental rights of EU citizens.

Details of the EU AI Act are still being debated, with only the primary frameworks having been passed by the EU Parliament and the EU Council; as of this writing, the two bodies are not in complete agreement, and the Act has not been ratified as a regulation. But if and when it is, the ripple effects will be felt as much here in North America as they will in Europe.

Security Technologies Deemed ‘High-Risk’

AI, from rules-based systems to quantum computing models, has become a buzzword in both the security industry and the broader tech world. In fact, end-users have come to expect AI in almost every product and service the security industry provides, so products that do not incorporate AI risk being dismissed out of hand.

The significance of the proposed EU AI Act is that it assigns AI systems to one of three categories based on the potential risk they present: Unacceptable Risk, High Risk, and Limited/Low Risk. Generative AI applications such as ChatGPT have found their own sub-category within this framework; while Generative AI was originally labeled High-Risk, recent lobbying efforts have paid off, and it has been moved to the Limited Risk category for now.

Here is the real twist for the security industry: the first category, Unacceptable Risk, under the EU Parliament’s version bans all systems that include real-time remote biometric identification in public spaces where AI can identify people – regardless of whether the purpose is beneficial. This can include biometric readers on the exterior of a building, as well as many of the video analytics that use a form of AI for classification.

The second category, High Risk, is subject to strict regulations and protective measures. AI systems here include any with potential for discriminatory use based on gender, ethnicity, or other protected characteristics, which covers both real-time and forensic use of biometric data to identify a natural person. There is an argument, although one with limited merit, that this could include automatic license plate recognition (ALPR) systems.

Limited/Low-Risk systems include chatbots, deepfakes, video games, and spam filters that can be used without adverse effects.
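For those who prefer to see the structure in code, here is a minimal sketch of the three-tier taxonomy as described above. The tier names, example entries, and lookup function are illustrative shorthand of my own, not the Act’s legal text, and the assignments reflect the proposal only as it stood at the time of writing.

```python
# Illustrative shorthand of the proposed EU AI Act's risk tiers as described
# above. These labels and entries are not the Act's legal text; assignments
# reflect the proposal at the time of writing and may change.
RISK_TIERS = {
    "unacceptable": [
        "real-time remote biometric identification in public spaces",
    ],
    "high": [
        "systems with potential for discriminatory use",
        "real-time or forensic biometric identification of a natural person",
    ],
    "limited_low": [
        "chatbots",
        "deepfakes",
        "video games",
        "spam filters",
        "generative AI (moved here from high-risk after lobbying)",
    ],
}

def risk_tier(system_description: str) -> str:
    """Return the proposal's risk tier for a described system, if listed."""
    for tier, systems in RISK_TIERS.items():
        if system_description in systems:
            return tier
    return "unclassified"

print(risk_tier("chatbots"))  # limited_low
```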

“Most of the recent innovations in the security industry stem from AI-driven technologies, enabling dramatic improvements to access control, screening/detection and other security systems, and increasing their value for protecting businesses and consumers, and bolstering public safety in the EU,” says Jake Parker, Senior Director of Government Relations at the Security Industry Association (SIA). “AI should be safe, lawful, and in line with EU fundamental rights; however, we are concerned that proposals under consideration in the AI Act could have the unintended consequences of prohibiting outright beneficial uses of these products, or restricting them in ways that limit societal benefits.”

Staggering Proposed Penalties

The EU AI Act also brings significant legal and financial risks to companies. In the United States, we have seen firsthand how such risks have limited the use of biometrics, even without technology bans.

Two illustrative examples in the U.S. are the Illinois Biometric Information Privacy Act (BIPA) of 2008 and the Texas Capture or Use of Biometric Identifier (CUBI) Act of 2009. The penalty in Illinois is $1,000 per violation (rising to $5,000 for intentional or reckless violations), which has resulted in hundreds of class action settlements, some in the hundreds of millions of dollars, with more to come. Doug OGorden, with BIPAbuzz.com, notes “a staggering 1400% surge in BIPA lawsuit filings since 2019. Among the 2,000 ongoing cases, 88% (1,900) address physical access/time & attendance, while the other 100 involve facial image misuse.”

Though no enforcement actions have been completed so far, CUBI fines can be up to $25,000 per violation. Due to litigation risks and, in many cases, a lack of clarity on the requirements, both laws have caused some companies to refuse to provide or deploy biometric technologies in Illinois or Texas.

As for EU laws, the financial penalties can be staggering. Google was fined 50 million euros in 2019 for GDPR violations, and potential penalties under the EU AI Act could be significantly larger. The AI Act’s penalties have been broken into three categories of violations* (see the sketch after this list):

  • Placing prohibited AI systems in the EU may result in administrative fines of up to 40 million euros or up to 7% of a company’s total worldwide annual turnover, whichever is higher.
  • Use of High-Risk AI systems without data governance, or in violation of transparency requirements, may result in administrative fines of up to 20 million euros or up to 4% of a company’s total worldwide annual turnover, whichever is higher.
  • Any other violation of the Act may result in administrative fines of up to 10 million euros or up to 2% of a company’s total worldwide annual turnover, whichever is higher.
    *For small to medium-sized businesses (SMBs), fines would be capped at half of the listed amount per category, with reduced percentages of 3%, 2%, and 1%, respectively.
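To make the “whichever is higher” mechanics concrete, the sketch below computes the maximum proposed fine for each tier. The figures are those reported above; the function and tier names are my own, and the final regulation may set different amounts.

```python
# Minimal sketch of the proposed EU AI Act fine tiers as reported above.
# Amounts and percentages are from the proposal at the time of writing and
# may change before ratification; this is illustration, not legal guidance.

# tier: (cap in euros, % of worldwide annual turnover, reduced SMB %)
FINE_TIERS = {
    "prohibited_system": (40_000_000, 0.07, 0.03),
    "high_risk_violation": (20_000_000, 0.04, 0.02),
    "other_violation": (10_000_000, 0.02, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float, smb: bool = False) -> float:
    """Maximum administrative fine: the higher of the fixed cap or the
    percentage of total worldwide annual turnover (halved cap and reduced
    percentage for small to medium-sized businesses)."""
    cap, pct, smb_pct = FINE_TIERS[tier]
    if smb:
        cap, pct = cap / 2, smb_pct
    return max(cap, pct * annual_turnover_eur)

# A firm with EUR 2 billion in turnover placing a prohibited system:
# 7% of turnover (EUR 140M) exceeds the EUR 40M cap, so the maximum is 140M.
print(max_fine("prohibited_system", 2_000_000_000))  # 140000000.0
```

Note how quickly the percentage term dominates: for any large company, the turnover-based figure, not the fixed cap, sets the ceiling.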

These requirements would not apply only to manufacturers or end-users; in fact, the Act would create distinct categories of obligations for providers, users, importers, distributors, manufacturers, and authorized representatives of providers. And while applicability is nominally limited to systems used in the EU, the Act would also reach organizations physically located outside the EU if the output of their systems is intended for use in the EU. Thus, the law has the potential to be even farther-reaching than GDPR.

Awareness is Key

This article is not meant to raise undue alarm, but to bring awareness to the fact that potential regulation through the AI Act will force the security industry globally to re-evaluate how AI and its underlying technologies are used. There is a real need for ethical AI, and for safeguards and best practices to ensure AI-driven technologies are not used for harmful purposes.

The security industry currently faces two major problems with this proposed Act. First, the EU AI Act currently sets the stage for a blanket ban on certain AI systems rather than targeting unacceptable and unethical AI practices. Second, aside from a few manufacturers and SIA, who are actively involved in effecting appropriate changes, the security industry appears to be oblivious to this Act, despite its potential far-reaching impact.


About the Author

Jon Polly

Jon Polly is the Chief Solutions Officer for ProTecht Solutions Partners (www.protechtsolutionspartners.com), a security technology consulting firm that works with smart cities and corporations to bring business intelligence and public safety through security IoT applications. He has worked as a Project Manager and System Designer for city-wide surveillance and transportation camera projects in Raleigh and Charlotte, N.C.; Charleston, S.C.; and Washington, D.C. He is certified in Critical Chain Project Management (IC3PM) by the International Supply Chain Education Alliance (ISCEA). Connect with him on LinkedIn: www.linkedin.com/in/jonpolly. • (704) 759-6837