The AI arms race: striking a balance between innovation and cybersecurity

May 30, 2024
Radware’s director of threat intelligence, Pascal Geenens, weighs in with his thoughts on AI and its impact on the cybersecurity community in this Q&A with SecurityInfoWatch.com.

The relationship between AI and cybersecurity is complex. On one hand, there is optimism about the promise of AI in building stronger cyber ecosystems and more sophisticated detection and mitigation solutions. 

On the other hand, there is the unvarnished reality around the use of AI for cybercrime. In the middle lies the challenge all organizations will face in striking the delicate balance between promoting AI-driven innovation and security while ensuring the ethical standards around its use.

With more than 25 years of experience in information security and technology, Pascal has developed strong expertise in tracking cyber adversary groups.

SIW: Threat actors are increasingly using generative AI to enhance their tactics. How might threats evolve as AI capabilities continue to progress?

PG: As with any statistical model, even one as basic as linear regression, accuracy in classification, prediction, or generation improves with more and cleaner data. Vast amounts of good data are required for generic models, such as those based on neural networks, to perform well.
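To illustrate the point about data volume and quality, here is a minimal sketch (not from the interview; all numbers are illustrative) showing how a least-squares slope estimate tightens as the sample grows and the noise shrinks:

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
TRUE_SLOPE = 2.0

# More data (larger n) and cleaner data (smaller noise) both
# pull the estimate toward the true slope.
for n, noise in [(10, 5.0), (100, 5.0), (100, 0.5)]:
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [TRUE_SLOPE * x + random.gauss(0, noise) for x in xs]
    est = fit_slope(xs, ys)
    print(f"n={n:4d} noise={noise}: slope={est:.2f}")
```

The same intuition scales up: a neural network is a far richer estimator, so it needs correspondingly more good data before its outputs stabilize.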

However, machine learning models tailored closely to a specific problem can adapt their architecture and parameters more flexibly and effectively, even with limited training data, than a general-purpose model such as GPT, which is retrained only periodically.


Applying AI to automate vulnerability and penetration testing of online applications is not new. Tools have been available and working with varying levels of success for several years. They have proven to be more capable than the current generation of GPT in developing new malicious payloads or finding vulnerabilities in web applications and APIs.

DeepExploit, for example, which was presented at Black Hat in 2018, leverages reinforcement learning, while DeepGenerator uses genetic algorithms and a generative adversarial network (GAN) to generate new payloads that can breach online applications.

These tools have been effective in automated pen testing, provided they have unrestricted access to the application and the application returns sufficiently rich error messages for the model to progress its search. The issue for malicious actors is that these tools are very noisy: they generate a lot of random activity before becoming even remotely effective, and they are discovered as soon as web application and API protections detect their stochastic behavior.
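The "stochastic behavior" a defense might key on can be sketched very simply. The following is a hypothetical illustration (the helper names and thresholds are made up, not from any real product): a fuzzer spraying random payloads produces far more distinct request paths, and far more errors, than a normal user session, which shows up as high path entropy combined with a high error rate.

```python
import math
from collections import Counter

def url_entropy(paths):
    """Shannon entropy (bits) of a client's requested paths.
    Random fuzzing yields many unique paths, hence high entropy."""
    counts = Counter(paths)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_stochastic(paths, error_rate, entropy_threshold=4.0, error_threshold=0.5):
    """Flag a session whose path entropy and error rate both
    exceed thresholds (thresholds here are purely illustrative)."""
    return url_entropy(paths) > entropy_threshold and error_rate > error_threshold

# A normal user revisits a handful of pages with few errors...
normal = ["/home", "/login", "/home", "/cart", "/cart"]
# ...while a fuzzer sprays unique, mostly failing requests.
fuzzer = [f"/item?id={i}%27--" for i in range(40)]

print(looks_stochastic(normal, error_rate=0.05))  # False
print(looks_stochastic(fuzzer, error_rate=0.95))  # True
```

Real web application and API protections use much richer behavioral models, but the principle is the same: random search is statistically loud.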

While generative AI, in its present state, has limited use for threat actors besides increasing their productivity, other AI technologies might get a renewed boost from all the attention on recent advancements in the field of AI.

SIW: What cyber technologies or techniques can the cybersecurity community use to remain a step ahead of state-backed AI threats?

PG: We are in an AI arms race. You can think about it as the modern version of the nuclear arms race during the Cold War era. Military research and development have access to deep budgets in addition to the means and knowledge to advance new technology at a pace that could outstrip the cybersecurity community.

Even if governments keep to their promise of being ethical in developing new applications and technologies, there is always that rogue player who goes one step too far, forcing the other players to keep up. The importance of a global AI watchdog promoting ethical use of the technology cannot be overstated.

Despite the race to innovate, there are security basics that should be on every company’s radar. Companies can control their threat surface by continuously identifying, assessing, and mitigating vulnerabilities across their digital and physical assets, networks, and human elements.

While it might not be possible to catch every zero-day in the first layer of defense, adequate logging and behavioral detection across the infrastructure should be able to find and alert on suspicious or anomalous activity. AI can be leveraged to get a grip on the vast amounts of event logs and identify only what matters.
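As a minimal illustration of surfacing "only what matters" in vast event logs (a hypothetical sketch, not a description of any specific product; production systems use far richer models), one can score per-host event volumes against the fleet baseline and alert only on statistical outliers:

```python
import statistics

def anomalous_hosts(event_counts, z_threshold=3.0):
    """Return hosts whose log-event volume deviates from the
    fleet mean by more than z_threshold standard deviations."""
    counts = list(event_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [host for host, c in event_counts.items()
            if abs(c - mean) / stdev > z_threshold]

# 50 hosts logging at a normal baseline...
fleet = {f"host{i}": 100 + (i % 7) for i in range(50)}
# ...and one box flooding the log pipeline with suspicious activity.
fleet["host99"] = 12_000
print(anomalous_hosts(fleet))  # ['host99']
```

The value is in the reduction: analysts review one flagged host instead of fifty-one log streams, which is the kind of triage AI-assisted tooling automates at scale.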

SIW: How does the balance between AI-driven innovation and cybersecurity defense impact the competitiveness of companies and the overall state of technology?

PG: Companies could refrain from investing in certain technologies, but that would only make them less competitive than military research organizations and state-backed research facilities. In the end, it would create an imbalance in knowledge and technology, which would not be beneficial for the cybersecurity community and only cause it to fall behind.

Security has always been a technology race. It’s a race to keep up with and secure the innovations needed to drive the business. At the same time, you need to keep up with new tactics and techniques leveraged by cybercriminals. This technological challenge that the cybersecurity community is facing today is not much different than it was in the past, though on a different level and scale. I’m confident the security community will continue to push forward with innovations in detection and mitigation as new technologies and new problems emerge. Honestly, I don’t think there is another option.

SIW: Do you believe restricting AI innovation is important in building a safer and more secure cyber ecosystem?

PG: I do not believe in achieving security by hampering technological innovations. Organizations must be able to defend themselves against threats from foreign and adversarial nations.

In cyber, there are no borders that can be protected, unless the worldwide internet evolves into a splinternet, but that is a different topic. The strength of a nation to withstand cyberattacks will in part be determined by the strength of its organizations to effectively defend against those attacks.

About the Author

Steve Lasky | Editorial Director, Editor-in-Chief/Security Technology Executive

Steve Lasky is a 34-year veteran of the security industry and an award-winning journalist. He is the editorial director of the Endeavor Business Media Security Group, which includes the magazines Security Technology Executive, Security Business, and Locksmith Ledger International, and the top-rated website SecurityInfoWatch.com. He is also the host of the SecurityDNA podcast series. Steve can be reached at [email protected]