Executive Q&A: Navigating AI, Biometrics and the Future of Identity Security

March 11, 2025
Jimmy Roussel, COO at IDScan.net, discusses the evolving threats to identity security, including deepfakes and synthetic identities, and shares strategies for strengthening authentication measures.

As digital identity verification becomes increasingly sophisticated, so too do the methods used by bad actors to exploit security gaps. From biometric authentication and mobile IDs to the rising threat of AI-generated deepfakes, organizations must remain vigilant in fortifying their defenses.

In this installment of our Executive Q&A series, we speak with Jimmy Roussel, COO of global identity verification leader IDScan.net, about the evolving authentication landscape, the growing threat of synthetic identities, and the advanced technologies shaping the future of fraud prevention. Roussel shares insights on best practices, industry trends and how businesses can stay ahead of emerging threats in an AI-driven world. 

SIW: What are the biggest security risks associated with biometric authentication and mobile ID systems, and how can organizations mitigate the threat of spoofing or unauthorized access?

Roussel: Biometric authentication and mobile ID systems are extremely powerful tools for security, as they empower users with easy proof of identity. Mobile IDs are secured on the front end, when the user sets them up. This process commonly involves both standard IDV steps (front/back/selfie) and two-factor authentication, often via a code that is physically mailed to the individual to complete setup of their digital credential.
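
To make that provisioning flow concrete, here is a minimal, self-contained sketch of the checks Roussel describes. Everything in it is illustrative: the field names and the pass/fail structure are assumptions for the example, not any vendor's actual API or issuance logic.

```python
from dataclasses import dataclass

# A minimal sketch of the mobile ID provisioning flow described above.
# Every check here is a stand-in: real provisioning would call a document
# authentication service, a face matcher, and a mail fulfillment system.

@dataclass
class ProvisioningEvidence:
    front_image_ok: bool       # front of the physical ID captured and authenticated
    back_image_ok: bool        # back of the physical ID (barcode) authenticated
    selfie_matches_id: bool    # live selfie matches the ID portrait
    mailed_code_entered: str   # code the user typed in after receiving mail
    mailed_code_expected: str  # code actually mailed to the address on file

def provision_mobile_id(e: ProvisioningEvidence) -> bool:
    """Issue the digital credential only if every layer passes."""
    document_idv = e.front_image_ok and e.back_image_ok and e.selfie_matches_id
    second_factor = e.mailed_code_entered == e.mailed_code_expected
    return document_idv and second_factor

# Example: a selfie mismatch blocks issuance even with a valid mailed code.
attempt = ProvisioningEvidence(True, True, False, "491203", "491203")
print(provision_mobile_id(attempt))  # False
```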

However, it is important that businesses still operate with caution, as risks remain even with highly secure mobile IDs.

Spoofing and unauthorized access, where high-quality photos, masks or AI-generated deepfakes are used to bypass facial recognition, could allow fraudsters to compromise the front-end security of the credential. Injection attacks, which insert fraudulent information into the middle of a secure transaction, are another ongoing risk. And data breaches are particularly dangerous in biometrics, as compromised fingerprints or facial data cannot be changed like a password, adding a level of permanence to fraud of this kind. Device vulnerabilities also pose a threat, as mobile IDs stored on smartphones are susceptible to malware, phishing attacks, and simple snatch-and-grabs.

To reduce these risks, multi-factor authentication (MFA) is still highly recommended: combining biometric verification with document authentication or behavioral analytics significantly strengthens security systems.
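
As a rough illustration of that layered logic, the sketch below combines a document authenticity check, a biometric match score, and a behavioral risk score into a single decision, so no single spoofed factor is enough on its own. The thresholds, weights and score ranges are invented for the example.

```python
# A hedged sketch of a layered (MFA-style) verification decision.
# Scores and thresholds are illustrative assumptions, not tuned values.

def layered_decision(doc_authentic: bool,
                     face_match: float,     # 0.0-1.0 from a face matcher
                     behavior_risk: float   # 0.0-1.0 from behavioral analytics
                     ) -> str:
    if not doc_authentic:
        return "reject"                     # hard fail: fraudulent document
    if face_match >= 0.90 and behavior_risk <= 0.30:
        return "accept"                     # strong biometric, low-risk behavior
    if face_match >= 0.75:
        return "step_up"                    # borderline: ask for another factor
    return "reject"

print(layered_decision(True, 0.93, 0.10))   # accept
print(layered_decision(True, 0.80, 0.50))   # step_up
print(layered_decision(False, 0.99, 0.00))  # reject
```

The design point is that a deepfake that fools the face matcher still has to survive the document check and behavioral signals before the transaction clears.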

Ultimately, while biometric and mobile ID technology offers a convenient, secure method of identity verification, organizations must remain proactive with a layered approach to identity verification. A combination of AI-powered identity verification, acceptance of encrypted mobile credentials, and real-time fraud monitoring is essential to ensure systems remain resilient against evolving threats. 

Threat of Fake Credentials

SIW: With the rise of synthetic identities and fake credentials, how are bad actors bypassing traditional ID verification methods, and what steps can organizations take to enhance their defenses? 

Roussel: The rise of synthetic identities and AI-generated fake credentials is making it easier for fraudsters to bypass traditional ID verification methods. Synthetic identity fraud, where criminals combine real and fake personal information to create entirely new identities, is now a major issue, with Thomson Reuters estimating that 95% of synthetic identities go undetected during account creation at financial institutions.

These identities often pass basic authentication checks, as they include legitimate data such as stolen Social Security numbers, and fraudsters using synthetic identities have often deeply studied the history of their victims, allowing them to bypass traditional knowledge-based authentication (KBA). 

Fraudsters also exploit AI-powered ID generators found on the dark web to create a high volume of fake ID images. These tools can mimic ID templates and insert forged information, making them difficult to detect with manual inspections or outdated verification systems. Once an AI-generated fake ID is paired with a synthetic identity, it can be used to establish credit, open accounts, and ultimately commit large-scale fraud. 

Although AI-generated IDs pose a significant risk, a recent study in which 200 fake AI-generated IDs were passed through IDScan.net's proprietary algorithmic verification tests saw 99.6% of the fraudulent identity images caught, suggesting the defenders are currently a step ahead of AI fraudsters. However, we expect AI-generated fakes to increase in sophistication and accessibility, so systems must also use AI to keep pace.

AI-powered document authentication can detect tiny inconsistencies in ID images and barcode data. Biometric authentication and liveness detection ensure that an ID belongs to a real person, while real-time fraud monitoring can identify suspicious behavioral patterns that are often hallmarks of fraudsters.
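
One concrete form the "barcode data" check can take is cross-checking fields parsed from the ID's PDF417 barcode against the same fields read from the printed front of the card, since fake templates frequently get one side wrong. The sketch below shows only the comparison step; the barcode parsing and OCR are assumed to have happened upstream, and the sample values are invented. (DCS, DAC, DBB and DAQ are real AAMVA data element IDs for family name, first name, date of birth and document number.)

```python
# Simplified sketch: cross-check barcode-decoded fields against OCR'd
# front-of-card fields. A mismatch is a classic hallmark of a fake ID.

AAMVA_LABELS = {"DCS": "family name", "DAC": "first name",
                "DBB": "date of birth", "DAQ": "document number"}

def cross_check(barcode_fields: dict[str, str],
                ocr_fields: dict[str, str]) -> list[str]:
    """Return human-readable mismatches between the barcode and the front."""
    mismatches = []
    for element, label in AAMVA_LABELS.items():
        if barcode_fields.get(element) != ocr_fields.get(element):
            mismatches.append(
                f"{label}: barcode={barcode_fields.get(element)!r} "
                f"vs front={ocr_fields.get(element)!r}")
    return mismatches

barcode = {"DCS": "DOE", "DAC": "JANE", "DBB": "19900115", "DAQ": "D1234567"}
front   = {"DCS": "DOE", "DAC": "JANE", "DBB": "19900115", "DAQ": "D1234561"}
print(cross_check(barcode, front))  # flags the document number mismatch
```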

Although the rollout of REAL IDs is one attempt by the government to introduce stricter standards for receiving an ID, IDScan.net's analysis of more than 100,000,000 scanned IDs reveals that REAL IDs are just as likely to be faked as non-REAL IDs, and that fraudsters are already quite adept at generating REAL ID documents that appear legitimate to the untrained eye.

AI-Driven Deepfakes

SIW: How are AI-generated deepfakes and synthetic identities challenging identity verification in physical security, and what role does AI play in counteracting these threats? 

Roussel: We already see AI used to create incredibly realistic fake IDs, manipulate video feeds, and even clone voices. Because of this, AI generates a significant amount of consumer wariness and mistrust around both digital and physical security, something organizations need to tackle head-on.

Our latest research, conducted in 2024 to measure consumer and business attitudes toward emerging identity threats, found that 78% of consumers cite the misuse of AI as their core fear around identity protection, while 55% believe current technology isn't enough to protect our identities. On the prevalence of AI-generated deepfakes, we found that 70% of consumers encounter deepfaked or AI-generated content more than once per week, with almost 25% coming across it daily. It is plausible, then, that AI-generated material used for fraud will begin to pose a greater threat to physical security.

Old-school methods like visual inspection or simple document scans are simply no longer enough. Criminals can now generate high-quality fake identities that slip past weak security systems, letting them commit fraud, gain unauthorized access, or steal someone's identity.

To stay ahead of this, businesses need to fight AI with AI. Advanced identity verification systems use AI to spot the subtle details that give deepfakes away: texture inconsistencies, unusual facial movements, or data syntax mismatches. Liveness detection is another key tool, making sure the person presenting an ID is physically present in real time, not just an AI-generated image. AI-driven authentication tools can also compare ID photos with live facial scans, catching fakes in the act.
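
The ID-photo-to-live-scan comparison typically reduces to measuring the distance between two face embeddings. Below is a toy sketch under the assumption that a face recognition model has already converted both images into fixed-length embedding vectors upstream; the vectors, dimensions and threshold here are placeholder values, not those of any production matcher.

```python
import numpy as np

# Toy sketch of the "compare ID photo with live facial scan" step:
# cosine similarity between two face embeddings against a tuned threshold.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(id_embedding: np.ndarray,
                live_embedding: np.ndarray,
                threshold: float = 0.6) -> bool:
    """True when the live face is close enough to the ID portrait."""
    return cosine_similarity(id_embedding, live_embedding) >= threshold

# Simulated embeddings: the genuine user drifts slightly from the ID
# portrait; an impostor's embedding is unrelated.
rng = np.random.default_rng(0)
id_vec = rng.normal(size=128)
same_person = id_vec + rng.normal(scale=0.1, size=128)
impostor = rng.normal(size=128)

print(faces_match(id_vec, same_person))  # True
print(faces_match(id_vec, impostor))     # False
```

In practice the threshold is tuned against false-accept and false-reject rates, and liveness detection runs alongside the match so that a printed or screen-replayed portrait cannot stand in for the live face.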

Deepfake technology is only going to get more advanced, so businesses need to keep evolving their security measures. By using AI-powered fraud detection, real-time authentication, and multi-layered verification, they can make sure their identity checks stay one step ahead of bad actors. 

Industry-Wide Best Practices 

SIW: What best practices should security executives and integrators adopt to ensure their identity verification solutions remain resilient against evolving threats like deepfakes and synthetic credentials? 

Roussel: To stay ahead of evolving threats like deepfakes and synthetic IDs, security execs and integrators need to follow a few key best practices. First, it's important to use AI-powered identity verification systems that can spot deepfake manipulations and fake IDs. These systems are smart enough to detect minute discrepancies in both the identity document and the presented face.

It's also crucial to stay up-to-date with the latest threats and how fraudster methodologies are evolving – your identity verification partner can share crucial insights. Beyond this, regularly updating your systems and tech is essential to keep them effective against new fraud tactics. Testing for deepfakes and synthetic identities should be a constant part of the process, as should practical training for staff who may encounter existing or emerging fraud attempts.

Future of ID Verification in Physical Security

SIW: How do you see identity verification technology evolving over the next five years, and what innovations or regulatory developments should security professionals prepare for?

Roussel: Over the next five years, we will see identity verification technology evolve significantly, driven mainly by advancements in AI and mobile technology. One of the biggest changes will be the widespread adoption of mobile IDs. As more states launch mobile ID programs, businesses will need technology to interface with these documents, since they cannot be read or verified visually. However, there will still be some roadblocks, as the tech needs to become simpler and more accessible to both businesses and consumers.

Generative AI is another big factor that will change the landscape. It can be a double-edged sword. On one hand, fraudsters can use it to create highly convincing fake identities, like deepfakes, increasing the risk of fraud for unprepared businesses and unsuspecting customers. On the other hand, AI can also be a powerful tool to fight these fraud attempts by improving detection methods. So, I expect AI-driven solutions to become even more integrated into verification systems, improving how we spot synthetic identities or document tampering. 

But before we get there, there will be a lot of focus on AI, mobile IDs, and adapting fraud detection systems to new fraudster methods, catalyzed in part by AI's increasing accessibility and power. Security professionals need to be prepared for these shifts and invest in adaptable, future-proof tech, protecting themselves, their organization, and the all-important customer.

About the Author

Rodney Bosch | Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for several major security publications. Reach him at [email protected].