AI could revolutionize cybersecurity, but do the benefits outweigh the fears?

July 7, 2023
As the sophistication of artificial intelligence grows to combat complex cyber-threats, so too do the concerns of those who believe this rapid advancement might come at a cost to their own privacy and personal data.

Security threats have become increasingly frequent and vastly more sophisticated in recent years. To counter this trend, artificial intelligence has advanced at an unprecedented pace.

However, as AI continues to rise to meet these dangers, so too do the concerns of those keeping a close eye on its development.

The advent of highly advanced tools like GPT-4 and ElevenLabs, along with the unrivaled speed with which they have progressed, has many experts on edge. With several AI tools capable of acting as virtual assistants, writing articles and even generating art, a major fear is that a large number of people will be phased out of the workforce in favor of a more cost-effective and less labor-intensive solution.

AI in the Workplace

“There are concerns about bots taking jobs,” says Matt Tengwall, General Manager of Fraud & Security Solutions at Verint Systems, who attended the recent Verint Engage 2023 event in Las Vegas. “It is my hope that bots augment the workforce, as intended, rather than eliminate any jobs. AI should help professionals work better and faster without changing the way they do things.”

This workforce augmentation comes in the form of bots designed to help security professionals more accurately and swiftly identify problems in several ways.

A virtual AI assistant can, for example, generate an entire data forecast in moments, providing valuable insight into specific trends. Or a team of bots can train one another using large language models (LLMs), identifying blind spots without relying on their programmers to constantly update their code.

With these benefits, however, come concerns that the country might become too reliant on AI technology to fill in the gaps. With AI-based programs taking care of virtual assistance, training and even video analytics, an organization’s over-reliance on these programs could inhibit proper employee training on crisis response or leave security teams floundering during a breach. Organizations must strike a balance between delegating tasks to their AI programs and ensuring that their employees remain up to date on procedures, planning and response.

“We want our bots and employees to have synergy – to work better together than they would by themselves,” Tengwall adds. “It’s not about complete replacement.”

The Relationship Between AI and Security

When an organization’s video surveillance data is safely stored in the cloud, sophisticated AI can sift through countless hours of footage to identify threats and threatening actors, saving an immense amount of time and manpower.

Additionally, the insertion of AI into real-time monitoring systems to distinguish between real security breaches and false positives allows law enforcement to react to actual crises in a timely manner. As a result, security risks can be mitigated more quickly and efficiently.
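As a rough illustration of how that kind of alert triage might work, the sketch below forwards a camera alarm to responders only when several independent signals agree. It is not drawn from Verint or any specific vendor; the field names and thresholds are assumptions for illustration.

```python
# Illustrative sketch only: a toy triage step that escalates an alarm
# only when multiple independent signals corroborate it, cutting false positives.
# All fields and thresholds are assumptions, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class Alert:
    motion_confidence: float   # 0.0-1.0 score from the video analytics model
    person_detected: bool      # did an object detector find a person?
    after_hours: bool          # did the event occur outside business hours?

def should_escalate(alert: Alert) -> bool:
    """Escalate to live responders only when the evidence is corroborated."""
    score = 0
    if alert.motion_confidence > 0.8:
        score += 1
    if alert.person_detected:
        score += 1
    if alert.after_hours:
        score += 1
    return score >= 2  # single weak signals are treated as likely false alarms

print(should_escalate(Alert(0.95, True, True)))    # True  -> dispatch
print(should_escalate(Alert(0.85, False, False)))  # False -> logged only
```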

Many AI-based platforms are now also capable of real-time facial recognition. By simultaneously searching through face databases during an event, they can identify specific problematic individuals remaining on the premises. This allows for increased scalability and accuracy in identifying threatening actors.

This is a powerful benefit in crisis situations, during which swiftness of response is crucial to saving lives and property, say advocates of AI. It also frees law enforcement to focus on combating real threats, potentially improving police response times, which have lengthened significantly in recent years.

The ability to filter out false alarms also gives law enforcement a greater window to apprehend threatening actors, as most criminals remain on the scene for only a few minutes during and after a crime is committed. A growing number of observers, however, believe that AI will simply create new vulnerabilities.

Malicious attackers have previously manipulated AI-based malware, used LLMs to write malicious code, and utilized tools like ChatGPT to draft phishing emails, among other attack vectors. The sheer amount of data that AI can process without any human oversight also makes it vulnerable to breaches or leaks, whether they are intentional or not.

Others have noted that using AI-based software without adequate oversight may cause the programs to develop biases toward specific sets of data or even individual traits, resulting in AI that unintentionally discriminates against people with certain physical characteristics or fixates on a single observed trend.

To combat this, organizations can foster a transparent relationship between their AI programs and security professionals, preventing the programs from making unsupervised decisions and ensuring they are regularly monitored and audited.

Additionally, AI can be trained using LLMs to respond better to various cybersecurity threats, including automatically isolating malware-infected assets, instantly flagging a suspicious file, login or IP address, and detecting insider threats.
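As a rough illustration of the kind of anomaly-based detection described above, the sketch below uses a generic machine-learning library to flag unusual logins for escalation. The features, sample data and response step are assumptions for illustration, not any vendor’s actual product.

```python
# Minimal sketch of anomaly-based login triage; feature choices, sample data,
# and thresholds are illustrative assumptions, not a production system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), failed attempts in last hour, MB downloaded]
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [13, 1, 10], [9, 0, 18], [16, 0, 9], [10, 0, 14],
])

# Train on historical "normal" activity; departures from it are flagged as suspicious.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_logins)

new_events = np.array([
    [10, 0, 11],    # looks routine
    [3, 12, 900],   # 3 a.m., many failed attempts, very large download
])

for event, label in zip(new_events, model.predict(new_events)):
    if label == -1:  # -1 means the model considers this event an outlier
        print(f"ALERT: suspicious login {event.tolist()} - escalate or isolate the account")
    else:
        print(f"OK: {event.tolist()}")
```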

Is AI Good for Privacy?

As more AI-based software and algorithms collect and process data, a looming fear is the compromise of personal privacy.

Consumers are used to seeing their personal preferences reflected in online advertising, with many organizations using customer data to target ads to their specific interests. This is a reality that tends to make customers uncomfortable, as many do not believe they consented to their data being collected and used in this manner.

Others are worried about trusting major institutions that use these technologies. Data breaches at banks are now commonplace, and the exposure of personal data can carry serious privacy and security consequences.

Identity theft, hacking of personal accounts, and financial fraud are all risks that customers take when trusting these institutions with their personal data, and they will be much less likely to do so if the organization has a history of breaches.

“Security and feelings of personal privacy can have a major impact on customer engagement with an organization,” says Tengwall. “If a customer doesn’t believe that their data is safe, there’s no trust continuing forward.”

Organizations can take steps to combat this apprehension by establishing transparency in their programs. Letting consumers know exactly what is being done with their collected data and how it is being protected from malicious actors helps these organizations retain the trust of those who count on them to keep their information safe.

Artificial intelligence can be a powerful security tool, if used correctly. By remaining transparent and allowing AI to augment – but not replace – their security professionals and procedures, organizations can utilize their AI-based software to its fullest extent without sacrificing the privacy and trust of their customers.

About the Author

Samantha Schober | Associate Editor

Samantha Schober is associate editor of SecurityInfoWatch.com.