Over the past 10 to 15 years, cybersecurity has gone from a back-office conversation among IT staff to a business imperative on the minds of top executives. And the closer the cyber conversation got to the boardroom, the higher the security team's stress levels climbed.
Today, cloud, remote work and AI are exacerbating this stress. Meanwhile, ransomware and other cyberattacks are soaring, pushing cyber professionals to the brink of exhaustion. Practitioners are working longer hours and growing increasingly worried about failure, as mega-breaches play out so publicly in the media.
Five years ago, two-thirds of CISOs were considering a job change or leaving the industry entirely. Today, nearly two-thirds (62%) of polled IT security leaders say they've experienced burnout at least once, and 44% report multiple instances.
Let's zero in on one of those stressors: AI, which has dominated business and security conversations for nearly two years and is truly a double-edged sword.
Security teams are meeting new mandates to shoehorn AI more directly into their day-to-day roles, mostly in an effort to counter surging threats, including more advanced, automated cyberattacks. However, without the proper organizational approach and a long-term plan, AI will actually worsen security burnout.
AI Adoption Means More Work for IT
In the face of greater AI adoption and 3.5 million unfilled cyber jobs, the responsibilities of IT staffers are only growing. Not only do they still have traditional IT and security tasks to cover, including round-the-clock alarms and fire drills, they're now also responsible for rolling out AI deployments and training. And as adversaries use AI to launch more attacks, IT departments may have to push other priorities down the list.
This added security risk, coupled with increasingly limited resources, will demand an unprecedented reprioritization of practitioners' time. Beyond significant upfront investment, AI implementation requires system integration, training on organizational data, and ongoing maintenance, along with specialized knowledge and, potentially, considerable staff retraining. It could effectively double- or even triple-task current teams.
No matter how you slice it, AI can add layers of management that slow processes or create new challenges, at a time when 84% of polled cyber professionals cite stress primarily related to workload and project volume. These factors may breed more burnout, and even more job openings, as security pros decide they simply cannot take the stress any longer.
But it isn’t all negative.
First, Cover the Basics
Like any technology, AI comes with its share of complexities. But its cybersecurity benefits can be numerous: automating routine tasks, enhancing threat detection, triggering automated responses to common incidents, and simulating attack scenarios. Ideally, AI tools free up practitioners to focus on high-priority issues, easing burnout rather than feeding it.
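To make the first of those benefits concrete, here is a minimal sketch of routine-task automation: auto-closing alerts a team has already vetted as noise so analysts only see what matters. Every name in it (Alert, KNOWN_BENIGN, triage) is a hypothetical placeholder for illustration, not a specific product's API.

```python
# Minimal sketch: auto-triaging routine alerts so analysts see only what matters.
# All names here are hypothetical placeholders, not a vendor's API.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int       # 1 (informational) .. 5 (critical)
    signature: str

# Signatures the team has already vetted as benign noise (hypothetical).
KNOWN_BENIGN = {"dns-prefetch", "printer-broadcast"}

def triage(alerts: list[Alert]) -> list[Alert]:
    """Auto-close known-benign, low-severity alerts; escalate everything else."""
    escalated = []
    for alert in alerts:
        if alert.severity <= 2 and alert.signature in KNOWN_BENIGN:
            continue  # auto-close: routine noise, no analyst time spent
        escalated.append(alert)
    return escalated

if __name__ == "__main__":
    inbox = [
        Alert("ids", 1, "dns-prefetch"),
        Alert("edr", 4, "credential-dumping"),
    ]
    for alert in triage(inbox):
        print(f"Escalate: {alert.signature} (severity {alert.severity})")
```

Even a rule this simple reclaims analyst hours, and the auto-close criteria stay under the team's control rather than a black box's.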
However, maximizing AI depends on two critical elements: existing security strength and tactical deployment. Let's consider the first. To spare teams from failure or burnout down the line, they should not immediately over-index on AI security features unless they're adequately prepared and have met specific criteria: an effective vulnerability management program, secure endpoints, data encryption, strong identity and access management (IAM), and a workable incident response plan.
Without that foundation, AI tools worsen tool sprawl and stress instead of solving problems and delivering ROI.
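One pragmatic way to enforce that sequencing is a simple readiness gate that blocks an AI rollout until the basics above are in place. The sketch below is illustrative only; each boolean stands in for a check that would, in practice, query a real system (vulnerability scanner, EDR console, IAM audit), and all names are hypothetical.

```python
# Minimal sketch: a pre-adoption readiness gate mirroring the criteria above.
# Each flag is a hypothetical stand-in for a check against a real system.

FOUNDATIONS = {
    "vulnerability_management": True,   # scanning and patching cadence in place
    "endpoint_security": True,          # EDR deployed and monitored
    "data_encryption": True,            # at rest and in transit
    "iam": False,                       # e.g., MFA rollout still incomplete
    "incident_response_plan": True,     # documented and exercised
}

def ready_for_ai(foundations: dict[str, bool]) -> bool:
    """Return True only when every foundational control is in place."""
    gaps = [name for name, met in foundations.items() if not met]
    if gaps:
        print("Hold AI rollout; close these gaps first:", ", ".join(gaps))
        return False
    return True

ready_for_ai(FOUNDATIONS)
```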
Roll Out AI Strategically
For those teams ready to adopt the technology, the key to finding efficiencies and avoiding headaches will be, simply put, careful decision-making. A few key points:
- Don't get ahead of yourself. AI use cases can be simple: automating lower-risk workflows to incrementally free up analysts' time (see the sketch after this list). They don't need to be moonshots that completely alter operations within the security operations center (SOC). Big-lift projects can put immense pressure on staff at a time when they're already stretched thin.
- AI use demands precise communication at the highest levels. Intended AI use (for security purposes or otherwise) and the planned rollout should be clearly communicated across the organization; no one should be left in the dark about how the technology will be used. As such, organizations should define an AI governance process that provides sufficient guardrails, with clear and codified policies. Any new expectations for security staff should be realistic, clear, and ultimately aligned with business goals.
- Consider embedding a leader with specific AI expertise. This individual, senior- or even C-level, can help drive innovation, ensure transparency, and ultimately optimize security workflows, reporting directly to the CISO or CTO. At the very least, organizations should identify an internal owner for AI processes. This person, whether a new hire or someone with institutional knowledge, can help identify the risks posed by newer entrants into the AI market and advise caution, making the CISO and C-suite aware that these vendors will likely not have robust security baked in.
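As promised in the first point above, here is a minimal sketch of the kind of lower-risk workflow worth automating first: enriching user-reported phishing emails before an analyst opens the ticket. The blocklist entries, names and pattern are hypothetical illustrations, not a vendor's API; a real pipeline would pull from the mail gateway and a threat-intelligence feed.

```python
# Minimal sketch: enriching reported phishing emails before analyst review.
# Blocklist entries and names are hypothetical illustrations.
import re

BLOCKLIST = {"login-veriffy.example.com", "secure-update.example.net"}
URL_RE = re.compile(r"https?://([\w.-]+)")

def enrich_report(email_body: str) -> dict:
    """Extract domains from a reported email and flag known-bad ones,
    so the analyst opens the ticket with context already attached."""
    domains = set(URL_RE.findall(email_body))
    return {
        "domains": sorted(domains),
        "known_bad": sorted(domains & BLOCKLIST),
        "needs_review": bool(domains - BLOCKLIST),
    }

print(enrich_report("Click https://login-veriffy.example.com/reset now!"))
```

A workflow like this never takes an irreversible action on its own; it simply removes the rote lookup work, which is exactly the incremental win the first point argues for.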
A Commitment to Success
With greater company-wide visibility and advocacy – and a dose of pragmatism – security teams can be set up for success. Used properly, AI allows weary teams to reapply time savings to other critical functions, enhancing job satisfaction.
While it may seem like a tech solution to a tech-driven problem, responsible AI use can reduce the relentless demands on cyber teams and make their professional lives a bit more sustainable and enjoyable.
The trick, of course, is giving teams the tools and support they need to handle the AI onslaught properly, or watch attrition rates climb even higher.