How over-reliance on machine learning threatens corporate cybersecurity foundations
The cybersecurity industry is in a turbulent period. As artificial intelligence (AI) becomes increasingly powerful, the threat landscape has been transformed entirely: hackers, bristling with AI tools, constantly probe organizations for weaknesses.
In the face of these AI-wielding hackers, cybersecurity professionals have sought to fight fire with fire, using machine learning to detect, neutralize and prevent these threats. This strategy has enabled often under-funded and under-skilled corporate cybersecurity teams to plug the gaps and hold the line.
However, there’s a creeping threat in this new AI-first cybersecurity era — one that risks slowly exposing companies to malicious actors without them even realizing it. As corporate cybersecurity experts scramble to integrate AI as comprehensively as possible, there’s a genuine risk that the teaching and learning of foundational cybersecurity skills will fall by the wayside, leaving a generation of under-skilled cybersecurity professionals.
The recent explosion of generative AI tools has been Christmas come early for hackers around the world. You no longer need to be an expert coder or software developer to pump out phishing campaigns, deepfakes and malware at a rate unheard of only a few years ago. The sheer volume of threats that AI-equipped hackers can now produce threatens to overwhelm corporate cybersecurity completely.
AI vs. Hackers: The New Cybersecurity Arms Race
As with any arms race, though, innovation on one side breeds an adaptive response from the other. Cybersecurity professionals quickly recognized the oncoming tsunami of phishing, deepfakes and malware, and rapidly rolled out AI-powered defensive tools, such as AI content detectors, watermarking and machine learning-based threat prediction.
This has, for now, protected most companies. There have been, of course, several high-profile breaches. But this has been a trickle, rather than a torrent. The flood defenses are holding — for now, at least.
However, the foundation of these defenses is not the AI tools themselves — it's the underlying cybersecurity skills and principles that guide their use. And it’s these foundational skills that are being eroded by over-reliance on, and blind faith in, AI’s ability to face down these new threats.
A root cause of this internal cybersecurity brain drain is one of the greatest promises of AI — “efficiencies.” Corporate decision makers across the world are looking to reorganize their businesses around AI’s capabilities. While this is no bad thing in and of itself, when applied to cybersecurity it inevitably means that fewer hard resources are directed where they need to be, and more skilled professionals with years of experience and knowledge are sacrificed at the altar of AI-driven efficiency.
The Hidden Cost of AI Reliance: A Decline in Cyber Expertise
Today’s security analysts, for example, rely on AI tools to flag suspicious login attempts. Without a foundational understanding of how the underlying algorithms work, however, they can easily fail to recognize false positives or, worse, miss sophisticated threats masquerading as normal activity. Likewise, incident response teams — a critical part of any organization’s cyber defense — might defer to AI-driven recommendations without fully comprehending the wider context in which an attack takes place, leading to incomplete remediation and persistent vulnerabilities.
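The false-positive/false-negative trade-off above can be made concrete with a deliberately simplified sketch. This is a toy heuristic, not any real product’s detection logic; every field name, weight and threshold here is invented for illustration.

```python
# Toy illustration (not any vendor's tool): a naive anomaly score for login
# events, showing how a fixed threshold trades false positives against
# false negatives. All signals and weights are invented assumptions.

def login_anomaly_score(event):
    """Sum hand-weighted risk signals for a single login event."""
    score = 0.0
    if event.get("new_device"):
        score += 0.4
    if event.get("foreign_ip"):
        score += 0.3
    if event.get("off_hours"):
        score += 0.2
    if event.get("failed_attempts", 0) >= 3:
        score += 0.5
    return score

def classify(event, threshold=0.6):
    """Flag the event as suspicious if its score exceeds the threshold."""
    return login_anomaly_score(event) > threshold

# A travelling employee on a new laptop trips the detector...
traveller = {"new_device": True, "foreign_ip": True, "off_hours": False}
# ...while a patient attacker mimicking normal activity slips through.
stealthy = {"failed_attempts": 2}

print(classify(traveller))  # True  — benign, but flagged (false positive)
print(classify(stealthy))   # False — malicious, but missed (false negative)
```

An analyst who understands why the score is built this way can spot both failure modes; one who treats the tool as a black box cannot.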
So corporate cybersecurity is left in a double bind: the more that we depend on AI, the less equipped we leave ourselves to handle its inevitable failures. The solution seems counterintuitive and anachronistic: in an age of AI, the path to organizational resilience is to double down on the human.
Firstly, organizations must reinvest in cybersecurity training. For far, far too long, cyber training throughout the organization has meant a couple of annual compliance procedures where employees are shown what a phishing email looks like.
This is not enough. Whole organizations — from the shop floor, right up to the CEO — need to be taught robust, hands-on cybersecurity skills. Obviously, this includes spotting phishing attempts, but it also means understanding the basics of network security, malware and incident response best practices.
Management at major corporations has been too happy to delegate cybersecurity to the CIO and a siloed team of experts tucked away in a dark, dusty office somewhere. The whole company needs to step up and think critically about what they encounter in the cyber realm.
Ultimately, this leadership must come from the top, with the CEO setting the tone by emphasizing that cybersecurity is a shared responsibility across the organization. C-suite executives must actively commit to upskilling themselves in this critical area, rather than leaving it solely to the IT team.
Secondly, C-suite executives will have to roll out an active strategy to cultivate specialist cybersecurity expertise at their firms. They need to start hiring professionals with deep, domain-specific cybersecurity knowledge that AI can’t replicate — skills like threat hunting, penetration testing and reverse-engineering malware are all critical to organizational resiliency but poorly performed by AI. Executives need to implement a comprehensive recruitment and retention strategy to hunt out these experts, bring them under their roofs, and then keep them there.
Leveraging AI to Enhance, Not Replace, Human Expertise
Finally, corporate cybersecurity strategy needs to be based on the principle that AI enhances, rather than replaces, human capacities. AI can and should be deployed to speed up repetitive tasks like log analysis. However, there should always be skilled professionals in the loop to review AI-generated insights and to ensure that decisions are made based on all the information available.
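The human-in-the-loop pattern described above can be sketched in a few lines. The “machine” stage here is a stand-in keyword heuristic rather than a real ML model, and the log lines are invented; the point is the routing, in which nothing the machine flags is acted on without an analyst seeing it.

```python
# A minimal sketch of human-in-the-loop log triage: an automated scorer
# pre-sorts raw log lines, but flagged lines are queued for a human analyst
# rather than acted on automatically. The keyword scorer is a placeholder
# for whatever ML model an organization actually deploys.

SUSPICIOUS_KEYWORDS = ("failed password", "privilege escalation", "powershell -enc")

def machine_triage(log_line):
    """Cheap automated pass: count suspicious keywords in a log line."""
    line = log_line.lower()
    return sum(kw in line for kw in SUSPICIOUS_KEYWORDS)

def triage(log_lines):
    """Route logs: zero-score lines are archived, the rest await human review."""
    review_queue, archive = [], []
    for line in log_lines:
        (review_queue if machine_triage(line) > 0 else archive).append(line)
    return review_queue, archive

logs = [
    "Jan 12 03:11 sshd: Failed password for root from 203.0.113.7",
    "Jan 12 03:12 cron: job backup.sh completed",
]
queue, archived = triage(logs)
print(len(queue), len(archived))  # prints "1 1": sshd line held for review
```

The design choice is the asymmetry: the machine handles volume, while judgment — deciding whether the flagged line is an attack — stays with a skilled human.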
There’s an optimum middle ground, where AI at the fingertips of skilled professionals enhances and complements cyber defensive capabilities. Currently, the pendulum has swung too far in one direction, and executives are happy to neglect conventional cyber skills in the blind faith that AI will solve everything.
AI is not a panacea. It is a hugely powerful tool with the capacity to greatly shore up a company’s cyber defenses. But it also has the potential to blind executives, sucking them into an AI black hole from which little else escapes. Corporate leaders need to wake up and ensure their company falls into the former camp rather than the latter — or they’ll likely find their organizations are the next to suffer a high-profile breach.