Europe’s ‘AI Act’ goes into force under watchful eye of security industry

Aug. 1, 2024
Enactment of the European Union’s AI Act became official today, and the security industry will be watching as the world’s first major AI legislation is tested against the opportunities and challenges posed by the technology.

The AI Act provides AI developers and deployers with clear requirements and obligations regarding specific uses of AI. It also seeks to reduce administrative and financial burdens for business, in particular small- and medium-sized enterprises (SMEs).

The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. The rules aim to ensure AI systems respect fundamental rights, safety and ethical principles, and to address the risks posed by powerful and impactful AI models.

Security Industry Concern

AI has been a driving force for technology innovation in the security industry, which traditionally has been behind the adoption curve.

The Security Industry Association, ASIS International and the International Biometrics+Identity Association shared a number of concerns about the pending EU AI regs several months ago in a letter to the European Commission.

Kara Klein, a spokesperson for SIA, said those concerns haven’t changed, and emphasized that avoiding a sweeping, categorical ban on biometric identification systems, particularly facial recognition technology, is a step in the right direction.

“In the same vein, SIA welcomes the refinement of the restrictions on AI systems used for categorization and other analytics to more specific use cases of concern, and that inherently low-risk applications of biometric technologies for user verification and similar functions are not subjected to high-risk requirements,” the organization has said.

European Commission Executive Vice President Margrethe Vestager said the AI Act is "an important step to ensure that AI technology uptake respects EU rules in Europe."

Specifically, the AI Act creates a certification regime for uses of the transformative technology in "high-risk" applications, such as law enforcement and employment.

The EU hopes that by laying down strict rules relatively early in the technology's development it will address potential dangers in time and help shape the international agenda for regulating AI.

AI systems intended for use in high-risk areas will have to meet various standards spanning transparency, accuracy, cybersecurity and quality of training data. Such systems will have to obtain certification from approved bodies before they can be put on the EU market. A new commission body called the AI Office will oversee EU-wide enforcement.

Some AI uses -- such as Chinese-style social credit scoring -- will be banned outright.

There are also more basic rules for general purpose systems that may be used in various situations -- some high-risk, others not. For example, providers of such systems will have to keep certain technical documents for audit.

From now on, providers of especially powerful general purpose AI systems must notify the commission if their system possesses certain technical capabilities. Unless the provider can prove that their system poses no serious risk, the commission could designate it as a "general-purpose AI model with systemic risk," after which stricter risk-mitigation rules would apply.

AI-generated content such as images, sound or text would also have to be marked as such to protect against misleading deepfake material.

The maximum fine possible in the AI Act -- for using an AI system for a specifically banned purpose -- is 35 million euros (about $38 million) or 7% of a company's global annual revenue, whichever is higher.

Fines for infringements of the AI Act's other legal obligations can reach 3% of revenue, while supplying incorrect information to regulators can draw fines of up to 1.5%. The fines would be capped lower for EU bodies that break the rules.

U.S. Regs in Discussion Stage

In Washington, following the executive order issued by President Biden on the safe, secure and trustworthy development and use of AI, there have been highly publicized meetings between Big Tech and the Biden Administration, and there are now a handful of bills under consideration.

Two of the bills received committee approval this week just prior to August recess, said Jake Parker, SIA's Senior Director of Government Relations. One is U.S. Sen. Gary Peters' Prepared for AI Act, which deals with federal agency use of AI; the other is Sen. John Thune's Artificial Intelligence (AI) Research, Innovation, and Accountability Act, which would have broader private-sector applicability.

Parker said his understanding is that the bills are still in the discussion stage and Congress will likely not take up any sweeping AI initiatives this year.

The National Institute of Standards and Technology is also holding discussions with the public and private sectors to develop federal standards for the creation of reliable, robust and trustworthy AI systems. 

With no federal legislation appearing imminent, many states have been forced to take their own steps. The National Conference of State Legislatures reported in June that in the 2024 legislative session, at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills, and six states, Puerto Rico and the Virgin Islands adopted resolutions or enacted legislation.

The state actions ranged from creating task forces, to requiring schools and universities to adopt policies on AI use by students and instructors, to providing grants to school districts to implement AI in support of students and teachers. Colorado passed a law requiring developers and deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination, and the state mandated disclosures to consumers.

dpa GmbH, via distribution by Tribune Content Agency contributed to this article.

About the Author

John Dobberstein | Managing Editor/SecurityInfoWatch.com

John Dobberstein is managing editor of SecurityInfoWatch.com and oversees all content creation for the website. Dobberstein continues a 34-year decorated journalism career that has included stops at a variety of newspapers and B2B magazines. He most recently served as senior editor for the Endeavor Business Media magazine Utility Products.