Crime as a Service Is a Growing Industry and Security Threat

March 11, 2024
AI-generated deepfakes are reaching critical mass, and with them the enterprise need for secure biometrics

An explosion in online services and platforms presents organizations with a difficult task: being certain that someone online is who they say they are. Add an adversary in the shape of generative AI, which creates deepfakes to steal existing identities and fabricate new ones, and the situation becomes daunting. These sophisticated technologies create considerable security challenges and open a new era of risk for organizations that need to onboard and authenticate users on their digital platforms. In this article, we examine the threats posed by AI-generated attacks, particularly in the enterprise context, and explore how biometric face verification emerges as a robust identity solution for organizations.

The “Crime as a Service” Era

Legacy authentication solutions, once the guardians trusted to secure sensitive information, are now grappling with the unpredictable nature of cybercrime. Today, we find ourselves in the era of “Crime as a Service” (CaaS), where cybercriminal networks operate with alarming sophistication and speed. These networks leverage extensive communication channels, trading profitable tools, techniques, and stolen information, including identities, on the dark web.

CaaS and the availability of online tools accelerate the evolution of the threat landscape by enabling criminals to launch advanced attacks faster and at a larger scale than ever before. When attacks succeed, they can rapidly escalate in volume and frequency, amplifying the risk of serious financial and reputational damage. Europol, the European Union’s law enforcement agency, has reported instances where criminal organizations created and delivered tailored deepfakes on request; in one case, a threat actor was willing to pay up to $16,000 for the service. These deepfakes are now produced at a quality that makes them virtually impossible to detect without complex monitoring and analysis tools.

Financial institutions are prime targets for cybercriminals seeking lucrative gains. Traditional authentication methods such as passwords and one-time passwords (OTPs), which are often used as a factor in multi-factor authentication (MFA), are proving insufficient in the face of these advanced threats. A fundamental issue with OTP authentication is that it only satisfies the ‘what you have’ factor of authentication (otherwise known as the possession factor).

What you have, be it your cell phone or a hardware token, can be lost, stolen, or compromised. Similarly, passwords, which rely on knowledge and the ‘what you know’ factor, are equally vulnerable: they can be forgotten, shared, or socially engineered. Consequently, businesses have been prompted to re-evaluate their security strategies and explore more innovative and resilient solutions.

What About Voice Biometrics?

One such solution that has emerged as a commonly adopted form of identity verification in financial services is voice biometrics, chosen for its ability to speed up authentication to a platform or service and based on the principle that people have their voice with them at all times. However, serious questions are being asked about its integrity following a series of recent incidents in which cybercriminals used generative AI to create voice deepfakes and gain unauthorized access to financial accounts. Unsurprisingly, voice is now acquiring a reputation as the biometric most vulnerable to compromise: just one minute of audio is all that is needed to create a highly convincing clone.

Cloning isn’t the only weakness of voice biometrics. Concerns have also been raised about identity assurance, performance, and the accessibility it provides to users, all of which cast serious doubt on its suitability for large-scale transactions.

Face Facts

Considering the limitations of legacy authentication methods and the vulnerabilities of voice biometrics, facial biometrics provides a far more reliable verification solution. Unlike passwords and OTPs, facial biometrics resolves the usability issues that have plagued traditional authentication by employing inherence, the ‘what you are’ factor: verification based on inherent physical characteristics. The unique features of the human face are now a powerful tool in the fight against cyber threats; nobody can steal your face. Biometric authentication therefore offers organizations a more secure way to verify users against a government-issued ID (which generally includes a photo of the face) during onboarding and enrollment.

The efficacy of biometric face verification does not rest on faces being secret; faces are inherently public and visible to all. Its strength lies instead in uniqueness, non-shareability, and immunity to theft.

Biometric data, particularly facial features, is unique to each individual. This forms the basis of secure verification, as duplicating or replicating these features is extremely difficult. Unlike passwords, biometric data cannot be shared and is not easily compromised. Immunity to theft adds a further layer of security: in a world where data breaches and identity theft are rampant, biometric face verification provides a safeguard against unauthorized access and identity manipulation.

As previously mentioned, inherent characteristics like your face can’t be stolen, but they can be copied. Attackers can present a mask, a picture, or a recording of an individual to the camera to spoof the authentication process, or digitally inject an AI-generated image or video into the video stream.

This is where liveness capabilities in a solution are crucial. Liveness detection uses science-based technology to verify that an online user is a real, live person and that they are authenticating at that moment. It is important to note that not all biometric solutions have equivalent science-based liveness detection capabilities: while many offer some level of presentation attack detection, most cannot detect digitally injected deepfake attacks. Making this distinction is essential when choosing a solution.
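To make that distinction concrete, the sketch below shows, under assumed interfaces, how a verification flow might gate access on both a liveness result and a face match. The types, function parameters, and threshold value are hypothetical illustrations, not any vendor’s actual API; real liveness detection relies on sophisticated server-side analysis of the image or video stream, not a simple boolean check.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical result types a biometric provider might return (illustrative only).
@dataclass
class LivenessResult:
    is_live: bool              # genuine human presence (defeats photos, masks, replays)
    injection_detected: bool   # digitally injected / deepfake stream suspected

@dataclass
class MatchResult:
    similarity: float          # probe face vs. enrolled face, 0.0 - 1.0

SIMILARITY_THRESHOLD = 0.80    # illustrative value; real deployments tune this per risk level

def verify_user(
    frame: bytes,
    enrolled_template: bytes,
    check_liveness: Callable[[bytes], LivenessResult],
    match_faces: Callable[[bytes, bytes], MatchResult],
) -> bool:
    """Grant access only if liveness passes AND the face matches the enrolled user."""
    liveness = check_liveness(frame)
    if not liveness.is_live or liveness.injection_detected:
        return False  # presentation attack or injected media suspected: reject before matching

    return match_faces(frame, enrolled_template).similarity >= SIMILARITY_THRESHOLD
```

The key design point is the order of checks: if liveness fails, the face match never runs, so a perfect deepfake of the right face still cannot pass.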

Binding Digital Identities to Real-Life Users

One of the major advantages of facial biometric verification with liveness detection in the enterprise context is its ability to bind digital identities to real-world users. This is achieved by matching a selfie image against the portrait on a government-issued ID. The convergence of these two sources ensures that the digital identity associated with an account corresponds authentically to a legitimate, real-life individual.
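As a simple illustration of that matching step, here is a minimal sketch of how an onboarding service might compare a face embedding extracted from the ID document’s portrait with one extracted from a live selfie. The embedding comparison, threshold value, and function names are assumptions for illustration only; production systems use vendor-specific models and calibrated thresholds.

```python
import numpy as np

MATCH_THRESHOLD = 0.75  # illustrative; real systems calibrate this per embedding model

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bind_identity(id_portrait_embedding: np.ndarray,
                  selfie_embedding: np.ndarray,
                  selfie_passed_liveness: bool) -> bool:
    """Bind a digital account to a real person only if a live selfie matches
    the portrait extracted from the government-issued ID."""
    if not selfie_passed_liveness:
        return False  # the selfie must come from a live, present person, not a replay or injection
    return cosine_similarity(id_portrait_embedding, selfie_embedding) >= MATCH_THRESHOLD
```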

This binding mechanism is crucial in mitigating the risks associated with AI-generated synthetic media such as deepfakes. As AI technologies advance, the creation of realistic yet fraudulent content, be it audio, image, or video, becomes increasingly prevalent. By linking digital identities to government-issued IDs through face verification, enterprises can strengthen their authentication processes and thwart attempts to manipulate or create fraudulent digital identities. Furthermore, because AI is continually evolving, the defense against it cannot be static: constant monitoring of the threats, and regular testing of the very tools used to detect them, is essential to offer maximum protection.

Navigating the Future of Secure Digital Identity

As organizations navigate the intricate landscape of cyber threats, the significance of embracing biometric face verification becomes increasingly apparent. This technology not only addresses the shortcomings of legacy authentication methods but also provides a robust defense against the escalating risks posed by AI-generated attacks.

Proactive adoption of advanced authentication solutions that evolve at the same lightning pace as cyber threats is a must. By leveraging biometric face verification, organizations can strengthen their defenses while ensuring that only the right person, and a real person, can access their services.

About the Author

Dominic Forrest | Chief Technology Officer for iProov

Dominic Forrest is the Chief Technology Officer for iProov. Dominic is responsible for the design and development of iProov’s cloud-based infrastructure. Before taking up his role with iProov in 2013, Dominic was the Senior Vice President of Engineering at mBlox Inc., where he managed a team of fifty developers across three continents, responsible for developing and scaling the platform so that it could seamlessly run over six billion transactions per year.