We’ve all experienced how fast technology services like social media, streaming, and cloud platforms can take off when they strike a chord with consumers or businesses. The latest technology to see a surge in popularity and adoption is artificial intelligence (AI), specifically the subset of generative AI.
Although generative AI is by no means new to the scene, the release of tools like ChatGPT and DALL-E has caused a significant uptick in usage and media attention over the past few years. The volume of new AI tools and services, and the sheer speed at which they’re coming online, is staggering, with no slowdown in sight.
Users and businesses are quickly finding ways to incorporate AI into their everyday lives and their work. Yet this rapid expansion brings risks, especially to businesses in the form of data loss or misuse. The question becomes: how can organizations embrace these innovative tools without sacrificing security?
Today’s Security Backdrop
Securing the use of AI in the workplace, especially “Shadow AI” services that haven’t yet been sanctioned by IT departments – or even made it onto the radar of security teams – is even more difficult when contextualized by today’s cybersecurity landscape.
First, generative AI is evolving so rapidly that security measures must innovate just to keep pace. Additionally, more employees are working in hybrid or remote arrangements and logging in from any number of devices to access cloud-based business collaboration environments.
This vastly expands the playing field, or attack surface, upon which employees can commit errors leading to data loss, carry out insider threats, or fall victim to an opportunistic hacker. After all, humans remain the weakest link in cybersecurity since it’s difficult for security teams to predict or control every action or behavior, whether intentional or accidental, that might lead to a breach.
Shadow AI Blind Spots
Just as the unauthorized use of hardware or software by employees within organizations (also known as Shadow IT) remains a glaring blind spot for security leaders across all industries, the unauthorized use of AI creates similar issues. Specifically, there’s considerable concern about how trendy tools like ChatGPT, Google Bard, and others might threaten data security.
For example, employees using these tools for benign purposes like boosting efficiency, assisting with research, or generating thought-starter content could inadvertently expose sensitive data. While AI can be put to nefarious use by employees, data breaches or violations more often stem from employees’ lack of awareness of the data they’re sharing with these services, especially services that are unknown to or unsanctioned by the organization.
So how are organizations reacting to these challenges? Businesses in certain industries, especially those that handle more confidential or sensitive data, may decide that AI of any kind isn’t appropriate for the nature of their work. Others may be more comfortable diving headfirst into AI to take full advantage of its capabilities and unlock efficiencies.
Whichever the case, the fundamental truth is that AI isn’t going anywhere, so all organizations, regardless of their risk tolerance for AI in the workplace, need to factor these services into their data security programs.
Minimizing Risks Strategically
With the volume of new AI tools being released, attempting to create policies for every individual application that arises is somewhat futile. Not only is it unlikely that security or IT teams will be aware of every new AI platform that employees may be experimenting with, but this strategy is akin to playing “whack-a-mole” – as soon as you lock down one application, a new one will pop up.
For this reason, organizations should establish broader policies and focus on enhancing their activity control capabilities. This way, even if unvetted or risky AI services are being used in the shadows by employees, there are strong data protections in place to mitigate threats regardless.
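To make this concrete, the sketch below illustrates what a category-level activity control could look like: rather than maintaining a rule for every individual AI application, outbound destinations are mapped to a broad “generative AI” category and a single policy decision covers the whole category. The domain names, category labels, and policy actions here are hypothetical placeholders for illustration, not any particular product’s implementation.

```python
# Illustrative sketch of a category-based activity control for AI services.
# The domains, categories, and actions below are hypothetical examples.

GENAI_DOMAINS = {"chat.example-ai.com", "api.example-llm.net"}  # placeholder domains
SANCTIONED = {"api.example-llm.net"}  # services vetted and approved by IT

def classify(destination: str) -> str:
    """Map an outbound destination to a broad category."""
    return "generative_ai" if destination in GENAI_DOMAINS else "other"

def policy_decision(destination: str) -> str:
    """Apply one policy to the whole category instead of per-app rules."""
    if classify(destination) == "generative_ai" and destination not in SANCTIONED:
        return "block_and_coach"  # block, then point the user to approved alternatives
    return "allow"

if __name__ == "__main__":
    for dest in ("chat.example-ai.com", "api.example-llm.net", "intranet.corp.example"):
        print(f"{dest} -> {policy_decision(dest)}")
```

The value of this design is that a brand-new AI service only needs to fall into the broad category for the existing policy to apply, rather than requiring a dedicated rule of its own.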
While some organizations will be even more conservative and choose to block all AI outright, creating guardrails and compensating security controls often yields the best result for all parties involved.
First, security teams can coach employees on how to prevent misuse of these tools or data leakage. Second, they can implement technologies like Data Loss Prevention (DLP) and User and Entity Behavior Analytics (UEBA), among many others.
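As a simplified illustration of the DLP side of that pairing, the sketch below checks text bound for a generative AI service against a few hypothetical sensitive-data patterns before it leaves the organization. The patterns and blocking logic are deliberately minimal assumptions for demonstration; production DLP engines rely on far richer classification, exact-data matching, and contextual analysis.

```python
import re

# Hypothetical, deliberately minimal DLP-style patterns; real DLP engines
# combine many detection techniques beyond simple regular expressions.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def allow_submission(prompt: str) -> bool:
    """Block the prompt from reaching an external AI service if anything matches."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    allow_submission("Summarize this contract for customer 123-45-6789")  # blocked
    allow_submission("Draft a blog post about cloud security trends")      # allowed
```

UEBA complements this kind of content inspection by flagging unusual patterns of behavior – for example, an employee suddenly sending large volumes of data to an unfamiliar AI service – rather than examining individual prompts.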
In today’s cybersecurity environment, data lives everywhere and is increasingly challenging to protect. Generative AI tools, while transforming the way we work, are also adding to this security burden. The organizations that invest in stronger data protection capabilities now – giving them greater visibility and control of their data across the web, cloud, and devices – will be much better positioned to embrace AI than those who don’t.
By spending the time to create policies and controls that minimize risks from any unsanctioned service, rather than tackling each one on a case-by-case basis, organizations will have more peace of mind that their data is safe both now and in the future.
Rodman Ramezanian, Global Cloud Threat Lead at Skyhigh Security
Rodman Ramezanian has more than 11 years of extensive cybersecurity experience, specializing in Adversarial Threat Intelligence, Cyber Crime, Data Protection, and Cloud Security. He is an Australian Signals Directorate (ASD)-endorsed IRAP Assessor – currently holding CISSP, CCSP, CISA, CDPSE, Microsoft Azure, and MITRE ATT&CK CTI certifications.