AI-Powered Tax Scams Surge Amid Growing Sophistication of Cyber Threats

April 11, 2025
Experts warn that artificial intelligence (AI) is fueling a new wave of tax-themed cyberattacks, urging security professionals to adapt defenses against increasingly deceptive and technically advanced threats targeting businesses and trusted platforms.

As the 2025 tax season reaches its peak, security professionals are witnessing a sharp rise in AI-driven cyberattacks that exploit the stress, urgency and information sensitivity surrounding Tax Day. According to security experts across multiple sectors, threat actors are weaponizing generative AI, deepfake technologies and advanced phishing tactics to target businesses and organizations — not just consumers.

Devin Ertel, CISO of Menlo Security, underscores the psychological pressure of tax season, which cybercriminals exploit to increase their success rate. “Cybercriminals are fully aware of the stress and anxiety that surrounds tax season, and every year they take full advantage,” Ertel explains.

Ertel notes that this year's attacks are not only more frequent but also more deceptive, with criminals impersonating GenAI platforms to lure users into divulging sensitive financial data. Menlo researchers tracked more than 600 incidents of this GenAI-impersonation fraud in 2024 alone.

These AI-driven scams span a broad array of tactics, including impersonation of tax professionals and IRS officials through email, websites, and now, even video and voice messages generated using deepfake technology.

AI-Generated Deception at Scale

Casey Ellis, founder of Bugcrowd, states that generative AI is enabling “hyper-personalized scams,” making it increasingly difficult to differentiate between legitimate and fraudulent messages. “Generative AI and deepfake technologies are being weaponized to create highly convincing phishing emails, voice calls and even video messages that impersonate trusted entities like the IRS or tax preparers,” he says.

One emerging tactic is the use of AI-generated voice phishing, or vishing, where scammers use deepfake audio to convincingly mimic tax professionals or government officials. Ellis warns that even seasoned professionals can be duped by the authenticity of these calls, emphasizing the need for independent verification and behavioral analysis tools.

Today’s AI-powered tax scams extend far beyond traditional email phishing. Chad Cragle, CISO at Deepwatch, explains that attackers are reviving dormant but once-trusted domains to bypass security filters, engaging in typosquatting with domain names that closely resemble those of reputable tax services, and leveraging SEO poisoning to drive traffic to counterfeit websites.

“Scammers are impersonating tax preparers to trick victims into providing sensitive financial details,” Cragle says. “They’re even using malware-laden tax documents shared through cloud platforms like Google Drive and OneDrive, and building trust over LinkedIn before launching attacks.”

According to Cragle, these tactics are increasingly multi-stage and exploit both technical vulnerabilities and human trust to infiltrate business systems.
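The typosquatting tactic Cragle describes can be screened for with simple string-similarity checks. The sketch below is a minimal illustration, not any vendor's detection method; the `TRUSTED_DOMAINS` allowlist and the 0.85 threshold are assumptions chosen for the example.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of legitimate tax-service domains (illustrative only)
TRUSTED_DOMAINS = ["irs.gov", "turbotax.com", "hrblock.com"]

def looks_like_typosquat(domain: str, threshold: float = 0.85) -> bool:
    """Flag a domain that closely resembles, but does not match, a trusted one."""
    domain = domain.lower().strip()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: legitimate
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return True   # near-match: likely typosquat
    return False

print(looks_like_typosquat("lrs.gov"))       # 'l' substituted for 'i' → True
print(looks_like_typosquat("turb0tax.com"))  # zero substituted for 'o' → True
print(looks_like_typosquat("irs.gov"))       # exact trusted domain → False
```

Production defenses layer this kind of lexical check with domain-age lookups, certificate transparency monitoring and reputation feeds, since attackers also register lookalikes that are lexically distant.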

While some tax-related scams are consumer-facing, security professionals within organizations must take steps to protect their infrastructure and personnel. J Stephen Kowski, field CTO at SlashNext, notes that cybercriminals are abusing trusted cloud platforms and notification systems to deliver malicious links, making traditional detection methods less effective. “AI tools are making it easier for scammers to create convincing impersonations that bypass traditional security measures,” he says.

Kowski advises organizations to implement layered security approaches, including independent validation protocols, behavioral content analysis and live scanning technology. He also cautions against reacting to messages that create urgency — often a red flag of deception.
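One ingredient of the behavioral content analysis Kowski recommends is scoring inbound messages for the urgency cues he flags as a red flag of deception. The heuristic below is a bare-bones sketch under assumed keyword patterns, far simpler than what commercial engines actually use.

```python
import re

# Illustrative urgency cues; real behavioral-analysis engines use far richer signals
URGENCY_PATTERNS = [
    r"\bimmediately\b",
    r"\burgent\b",
    r"\bwithin 24 hours\b",
    r"\bfinal notice\b",
    r"\bpenalt(?:y|ies)\b",
]

def urgency_score(message: str) -> int:
    """Count how many urgency cues appear in a message (case-insensitive)."""
    text = message.lower()
    return sum(1 for pattern in URGENCY_PATTERNS if re.search(pattern, text))

msg = "FINAL NOTICE: Verify your SSN immediately or face penalties."
print(urgency_score(msg))  # three cues hit -> worth routing to human review
```

A score above some tuned threshold would route the message for verification rather than block it outright, keeping false positives on legitimate deadline reminders manageable.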

The AI Supply Chain Risk

The growing use of GenAI tools in enterprise environments introduces additional exposure. Satyam Sinha, CEO of Acuvity, emphasizes that organizations must account for the risks associated with uploading sensitive documents to AI tools, especially when used on corporate networks.

“Everyone should be aware of the risks involved with sharing the content, especially on work devices,” he says, citing potential data leakage, model training risks and jurisdictional issues.

Sinha calls for a “ground-up security mindset” as organizations increasingly integrate AI into daily operations. He advocates for policies that govern usage and visibility into how data is processed and stored by AI platforms.
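One way to operationalize the usage policies Sinha advocates is to scan text for tax identifiers before it ever leaves a work device for an AI platform. This is a minimal sketch, not Acuvity's product; the patterns cover US SSN and EIN formats only, and the function name is hypothetical.

```python
import re

# Illustrative patterns for sensitive US tax identifiers (SSN and EIN formats)
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),
}

def redact_before_upload(text: str) -> tuple[str, list[str]]:
    """Redact tax identifiers and report what was found, before text leaves the device."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, findings

doc = "Employee SSN 123-45-6789, employer EIN 12-3456789."
clean, found = redact_before_upload(doc)
print(clean)   # identifiers replaced with redaction markers
print(found)   # labels of what was detected
```

Redacting locally, before the API call, addresses both the data-leakage and model-training risks Sinha cites, since the provider never receives the raw identifiers.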

Strengthening Defenses Against Credential Abuse

Credential stuffing also remains a persistent threat. Patrick Tiquet, vice president of security & architecture at Keeper Security, warns that attackers continue to exploit reused passwords from historical breaches to infiltrate platforms containing tax-related data. He recommends businesses enforce strong, unique credentials and multi-factor authentication while applying least-privilege access models internally.

Tiquet also flags deepfake videos and AI-generated content as tools used to impersonate tax advisors and solicit confidential information. “Look for subtle mismatches in tone, unnatural speech patterns or slight inconsistencies in the video,” he says.

The convergence of AI and social engineering has accelerated the evolution of cyber threats during tax season, but experts agree these tactics are here to stay — far beyond April 15.

“AI-driven phishing, SEO poisoning and multi-stage malware will continue evolving, fueling financial fraud and social engineering year-round,” Cragle warns.

About the Author

Rodney Bosch | Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for several major security publications. Reach him at [email protected].