In February 2016, the hacktivist group Anonymous published a YouTube video detailing twenty Distributed Denial of Service (DDoS) attack tools that were available to the public. And thus, DDoS attacks were democratized: anyone with a grievance and an internet connection could launch an attack, with virtually no barrier to entry.
As if those tools weren’t accessible enough, DDoS attack services came next, complete with tiered pricing plans, SLAs, and user-friendly interfaces.
What has happened since? The size, frequency, and complexity of DDoS attacks have exploded. According to Netscout Systems data, there was an 807% increase in DDoS attacks from 2016 to 2022.
Enter the Era of Generative Pre-Trained Transformers (GPT)
Fast forward, and we could be at a similar point in the industrialization of cybercrime.
As with DDoS attacks almost a decade ago, attack services are emerging that make it simple to launch sophisticated attacks at scale; in 2023 there were more than 13 million distinct attacks, an average of roughly 36,000 per day. Malicious actors have begun exploiting the capabilities of Generative Pre-Trained Transformers (GPTs) to create and operate threat tools.
GPTs, a subset of Artificial Intelligence (AI), are creating new problems for security practitioners, who now face a battle against evolving and surging phishing threats.
What are they? Generative pre-trained transformers are a kind of large language model (LLM) and a prominent framework for generative AI. Drilling down further, GPTs are artificial neural networks built on the transformer architecture and used for natural language processing tasks. They generate human-like text and handle tasks ranging from answering questions and summarizing documents to machine translation, such as accurately translating English to French.
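To make the concept concrete, here is a minimal sketch of what a GPT-style model does in practice, using the open-source Hugging Face transformers library and the publicly available gpt2 checkpoint; the library and model choices are illustrative assumptions, not something this article prescribes:

```python
# Minimal sketch: text generation with a GPT-style model.
# Library (Hugging Face transformers) and model (gpt2) are illustrative choices.
from transformers import pipeline

# Load a small, publicly available generative model.
generator = pipeline("text-generation", model="gpt2")

# Given a prompt, the model continues it with plausible, human-like text.
prompt = "Generative pre-trained transformers are"
output = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(output[0]["generated_text"])
```

The same ease of use that makes this a few lines of code for a developer also makes it a few lines of code for an attacker.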
Threat actors are becoming more technically adept and are using GPTs to create human-like text, images, music, and more, which has given rise to GPT-powered tools built for nefarious purposes. Malicious GPT tools include Fraud GPT (a subscription-based platform that generates malicious content for fraudulent purposes), Hacker GPT (a tool that operates similarly to ChatGPT but serves as an AI assistant tailored specifically for hackers), Worm GPT (a rogue version of ChatGPT that lacks crucial guardrails and ethical guidelines), and deepfake voice tools (used by malicious actors to deceive individuals over the phone).
Ethical and Security Challenges
The evolution of AI-powered fraud and GPT-powered attacks presents ethical and security challenges. The first challenge is that AI has lowered the barrier to entry for amateur cybercriminals. The second is the ability to fully automate attacks at a much larger scale than ever before. And the third is the ability for attackers to train models on their own data and malware. Deepfake voice scams are one example of this kind of training: earlier this year, scammers generated deepfakes of children’s voices from TikTok videos, contacted parents while posing as the child, and then demanded ransom payments.
In reality, GPTs allow attacks to happen with greater realism, precision, and scale.
Here are five examples:
1) Phishing: Sophisticated, realistic phishing emails or messages mimic legitimate communications, making it more difficult for recipients to identify malicious intent, especially when the messages appear to come from business contacts such as executives, vendors, or colleagues. Phishing is a multi-channel problem; web filtering and email gateways alone cannot solve it. Guarding against phishing requires a holistic, comprehensive approach that ensures zero blind spots (a simple heuristic sketch appears after this list).
2) Fake Content Generation: This is when hackers produce large volumes of deceptive content, such as fake news, reviews, videos, or social media posts, to manipulate public opinion or defraud individuals. The Anti-Phishing Working Group (APWG) reported that social media platforms accounted for 42.8% of all phishing attacks in the last quarter of 2023.
3) Social Engineering: A technique in which attackers craft believable narratives or dialogues in real time to manipulate victims over text-based communication platforms. The method exploits human error to lure unsuspecting users into exposing data, spreading malware infections, or giving access to restricted systems. 81% of reporting businesses have seen increased phishing attacks in the past year, and phishing is expected to remain the top social engineering threat to businesses throughout 2024, surpassing threats like business email compromise, vishing, smishing, and baiting (LastPass 2024 Survey).
4) Code Generation for Malware: This involves the creation or modification of malware without deep technical knowledge on the attacker’s part, followed by quick distribution or sale through Dark Web channels. Criminals can create new malware variants quickly and automatically, giving them the ability to launch attacks with unique characteristics but similar functionality.
5) Automating Attack Processes: Attack automation, such as identifying vulnerabilities or generating scripts for exploitation, is a significant challenge for security operations teams. An automated attack utilizes scripts and programs to exploit vulnerabilities and gain access to a computer system without the user's knowledge or permission.
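Returning to the phishing point in item 1: below is a minimal, hypothetical sketch of the kind of rule-based check an email gateway might apply as one layer of a broader defense. The keyword list, weights, and threshold are invented purely for illustration and are not recommendations from this article.

```python
import re

# Hypothetical heuristic scoring of an email for common phishing signals.
# Keywords, weights, and the threshold are illustrative assumptions only;
# real gateways combine many more signals (sender reputation, URL analysis,
# attachment scanning, and so on).
URGENCY_TERMS = ["urgent", "immediately", "verify your account", "password expired"]
SUSPICIOUS_TLDS = (".zip", ".top", ".xyz")

def phishing_score(subject: str, body: str, sender_domain: str) -> int:
    score = 0
    text = f"{subject} {body}".lower()
    # Urgency language is a common phishing cue.
    score += sum(2 for term in URGENCY_TERMS if term in text)
    # The presence of a raw URL in the body slightly raises the score.
    if re.search(r"https?://\S+", text):
        score += 1
    # Unusual top-level domains on the sender address add risk.
    if sender_domain.endswith(SUSPICIOUS_TLDS):
        score += 3
    return score

if __name__ == "__main__":
    s = phishing_score(
        subject="URGENT: verify your account",
        body="Click http://login.example.xyz immediately or lose access.",
        sender_domain="login.example.xyz",
    )
    print("phishing score:", s, "-> flag for review" if s >= 4 else "-> pass")
```

Heuristics like these are easy for convincingly written, GPT-generated phishing to evade, which is exactly why a multi-layered, holistic approach matters.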
GPT Fraud Mitigation Strategies Have Never Been More Important
In the new age of GPT-powered fraud, organizations must be aware of new trends and be ready to mitigate the attacks. The key question is, “How do we learn about new scams as quickly as possible, so that detection models can catch up and keep up, and organizations can recognize the early warning signs?”
Security teams must remember that the quality and diversity of the training data used by AI systems are critical; any biases in this data can cause the AI to produce unreliable findings. Automated systems can also produce false positives or false negatives, resulting in either unnecessary inconvenience for legitimate users or missed fraud.
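As a quick illustration of that trade-off, here is a small sketch that computes false positive and false negative rates for a hypothetical fraud detector; all of the counts are invented for illustration only:

```python
# Hypothetical confusion-matrix counts for a fraud/phishing detector.
# All numbers are invented for illustration only.
true_positives = 90    # fraudulent messages correctly flagged
false_negatives = 10   # fraudulent messages missed (fraud slips through)
true_negatives = 950   # legitimate messages correctly passed
false_positives = 50   # legitimate messages wrongly flagged (user friction)

false_positive_rate = false_positives / (false_positives + true_negatives)
false_negative_rate = false_negatives / (false_negatives + true_positives)

print(f"False positive rate: {false_positive_rate:.1%}")  # 5.0%
print(f"False negative rate: {false_negative_rate:.1%}")  # 10.0%
```

Tuning a system toward fewer missed frauds typically raises the false positive rate, and vice versa, which is why both numbers need to be tracked together.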
Large Language Models can be used to fight GPT-powered fraud. Mitigation strategies include:
- Continuous training that allows the AI model to respond to developing fraud patterns.
- Using a multi-layered approach to complex AI-powered scams, one that combines enhanced detection and prevention.
- Deploying LLM-based threat detection while recognizing that LLMs can be used for both good and bad. LLMs such as BERT and GPT can evaluate vast amounts of natural language data and detect trends and abnormalities that could signal a security risk (a minimal sketch follows this list).
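For instance, here is a minimal sketch of LLM-assisted screening of a suspicious message, using the Hugging Face transformers zero-shot classification pipeline with the publicly available facebook/bart-large-mnli model; the model choice, candidate labels, and threshold are illustrative assumptions rather than a production design:

```python
# Minimal sketch: LLM-assisted screening of a suspicious message.
# Model, candidate labels, and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

message = (
    "Your invoice is overdue. Wire payment today to the new account below "
    "or your service will be suspended."
)

# Ask the model which label best describes the message.
result = classifier(
    message,
    candidate_labels=["phishing or fraud attempt", "legitimate business email"],
)

top_label, top_score = result["labels"][0], result["scores"][0]
if top_label == "phishing or fraud attempt" and top_score > 0.7:
    print(f"Flag for analyst review (confidence {top_score:.2f})")
else:
    print(f"No strong fraud signal (top label: {top_label}, {top_score:.2f})")
```

In practice, a model like this would be only one layer, combined with the continuous retraining and multi-layered controls listed above.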
GPT-attack mitigation at the technology implementation level is, of course, critical. However, collaboration across the cybersecurity community to proactively address and defeat GPT-generated cyber threats should be an equally critical focus. The strategies of fraudsters and cybercriminals are adaptive and ever-changing, and this is a universal problem.