Introduction

Artificial intelligence is no longer a buzzword reserved for Silicon Valley pitch decks. Over the past year, it’s been quietly and sometimes dramatically transforming the criminal underground. From deepfake CEOs ordering million-dollar transfers to scam call centers powered by synthetic voices, generative AI (genAI) is being tested, adopted, and in some cases operationalized by threat actors at scale.
But beneath the headlines and breathless predictions lies a more complex reality. Our team spent months monitoring closed forums, scammer chats, and active campaigns to answer a simple question: Is genAI truly changing the game for cybercriminals?
What we found is both reassuring and concerning. Fully autonomous AI-driven cybercrime isn’t here yet. But hybrid human-AI operations are already reshaping how scams are run, phishing is crafted, and malicious campaigns are managed.

Key discoveries

  • AI‑powered impersonation is enabling multimillion‑dollar executive and financial fraud, making traditional trust‑based verification increasingly unreliable.
  • Live deepfakes are emerging as a niche, with underground markets recruiting “AI video actors” and selling turnkey deepfake solutions for fraudulent operations.
  • AI‑powered scam call centers are operational today, combining synthetic voices, inbound AI responders, and LLM‑driven coaching, but still relying on human scammers for critical stages.
  • Fine-tuned self‑hosted “Dark LLMs” are proliferating, offering uncensored, crime‑optimized AI models that produce phishing kits, scam scripts, and malicious code without safeguards.
  • AI‑powered spam tools generate hyper‑personalized phishing at scale, bypassing filters, improving deliverability, and accelerating campaign turnover.
  • Malicious toolkits increasingly embed AI for reconnaissance, exploitation, and code obfuscation – lowering barriers for less‑skilled attackers but stopping short of full autonomy.
  • AI drastically increases the speed, scale, and personalization of cybercrime, compressing defenders’ detection and response windows.
  • Hybrid human‑AI operations are already reshaping the threat landscape, even though fully autonomous AI‑driven cybercrime has yet to appear in the wild.
  • Defender visibility into AI‑driven crime is still limited, as much of the tooling is self‑hosted, closed‑access, and designed to evade tracking.

Who may find this blog interesting:

  • Cybersecurity analysts and corporate security teams
  • Cybersecurity C-suite
  • Threat intelligence specialists
  • Cyber investigators
  • Computer Emergency Response Teams (CERT)
  • Law enforcement

Key AI-driven use cases employed by cybercriminals

Our research identified five major areas where AI is actively weaponized by cybercriminals today. These use cases are based on real-world attacks, live tools, and underground market offerings, highlighting how AI is changing the threat landscape.

1. Impersonation Fraud – live deepfakes on the rise

Among all AI-driven threat vectors, deception stands out as the most advanced and immediately impactful. Cybercriminals have elevated social engineering attacks to unprecedented levels of realism and scale, with AI playing a crucial role. In Q2 2025 alone, damages from deepfake incidents reached $350 million, according to a report by Resemble.ai.

Main attack vectors include:

  • Executive Impersonation: Live video deepfakes impersonate CEOs, CFOs, or trusted clients during real-time calls, pressuring victims into wire transfers or data disclosures, often combined with Business Email Compromise (BEC).
  • Romance and Investment Scams: AI-generated fake personas build trust and manipulate victims over extended periods.
  • KYC Bypass: Deepfake videos and images trick identity verification processes during account creation or financial onboarding.

Deepfake and lip-sync services have been around for several years – from fake Elon Musk crypto pitches to politicians explaining fake tax laws. The scene has evolved significantly: three years ago, underground forums merely discussed deepfake methods and advised on the most effective public face-swap and lip-sync services; today an entire cybercrime niche has emerged, with sellers offering deepfake video creation priced from $5 to $50 depending on the quality and complexity of the task.

Figure 1. A screenshot of a deepfake service advertisement (original and translated into English)

And the evolution never stops. The most concerning development is live deepfakes – real-time video streams simulating legitimate individuals speaking and reacting naturally in meetings. Victims believe they are on authentic calls with superiors and transfer large sums of money accordingly. Over the past two years, multiple high-profile cases involving millions in fraudulent transfers have surfaced, such as the Arup case, where a single video meeting featuring a deepfaked CFO led to losses of $25 million.

This shift challenges defenders deeply. Classical single-layered fraud detection strategies struggle against sophisticated deepfake scenarios and real-time manipulation – and so do people, especially when trust is established visually and audibly in live settings. Detection should instead rely on multi-layered, AI-enhanced anti-fraud solutions that combine advanced device, application, and behavioral monitoring with device fingerprinting and anomaly detection.
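To make the multi-layered idea concrete, below is a minimal, illustrative Python sketch of how independent signals – device fingerprint drift, behavioral biometrics, session indicators, and payment anomalies – might be fused into a single risk score. All field names, weights, and thresholds are hypothetical placeholders chosen for the example, not a description of any specific product.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Hypothetical per-session signals produced by separate detection layers."""
    device_fingerprint_changed: bool   # device/browser fingerprint differs from the known profile
    behavior_anomaly_score: float      # 0.0-1.0, e.g. typing cadence / mouse dynamics deviation
    new_geolocation: bool              # login from a location never seen for this account
    remote_control_detected: bool      # signs of remote-access or virtual-camera software
    transfer_amount_zscore: float      # how unusual the requested transfer is vs. account history

def risk_score(s: SessionSignals) -> float:
    """Combine independent layers into one score; weights are illustrative only."""
    score = 0.0
    score += 0.25 * s.device_fingerprint_changed
    score += 0.30 * s.behavior_anomaly_score
    score += 0.15 * s.new_geolocation
    score += 0.20 * s.remote_control_detected
    score += 0.10 * min(abs(s.transfer_amount_zscore) / 3.0, 1.0)
    return score

session = SessionSignals(
    device_fingerprint_changed=True,
    behavior_anomaly_score=0.7,
    new_geolocation=False,
    remote_control_detected=True,
    transfer_amount_zscore=4.2,
)

if risk_score(session) >= 0.5:            # threshold is a placeholder
    print("Escalate: require out-of-band verification before the transfer")
```

The specific weights matter far less than the principle: no single layer is trusted on its own, which is exactly the property that single-layered defenses lack against live deepfakes.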

What’s driving this surge?

Two notable trends emerged in our research:

Criminal specialization in Impersonation-as-a-Service: alongside vendors providing Deepfake-as-a-Service, underground forums show rising recruitment for “AI video actors,” “deepfake presenters,” and “virtual call agents,” highlighting growing demand for deepfake fraud among scammers and a high level of organization.

Figure 2. Ads in Telegram looking for “AI agents” attributed to scam centers.

Dedicated deepfake SaaS offering: organized cybercriminal groups no longer rely solely on public face-swapping tools. Instead, they procure specialized, purpose-built solutions optimized for fraud, offering high-quality, low-latency real-time face-swapping without ethical or usage restrictions. These vendors typically operate in closed Chinese-speaking Telegram channels, supplying fraud networks primarily across Southeast Asia.

Figure 3. Demonstration of a DeepFake solution advertised specifically for live deepfake scams

Together with broader technological and socio-economic shifts, these trends have transformed live deepfakes from curiosities into scalable, viable services, severely challenging existing methods of establishing trust at both the technological and the human level.

2. AI-Assisted Scam Call Centers

Traditionally reliant on human labor, scam call centers are now integrating generative AI to scale and optimize their operations. Criminal developers offer AI-powered call center platforms (including SIP, scripts, and management systems) designed explicitly for fraud.

Figure 4. An example of an AI-assisted call center setup offer on an underground forum

At present, AI mostly supports two core functions:

  • Text-to-Speech (TTS) and synthetic voices to deliver prerecorded scam messages or initiate contact.
  • Inbound AI voice responders that handle initial victim queries before escalating to human scammers.

A notable example is an underground vendor offering a SIP-based call center platform integrated with a custom large language model (LLM). This system analyzes live scam conversations and suggests persuasive replies to operators in real time, provides voice modulation capabilities, and tracks operator performance metrics.

Figure 5. Advertisement of an “All-in-one” scam call center instrument including LLM-guided responses and conversation analysis.

Rather than fully replacing scammers, AI acts as a real-time coach, optimizing social engineering tactics and enabling more organized, data-driven fraud operations.

While fully automated AI agents aren’t widespread yet, these hybrid setups raise the operational bar significantly, making detection and disruption more challenging even though humans still sit at the other end of the line.

3. Unrestricted GenAI Chatbots – the rise of self-hosted Dark LLMs

As large language models became widely available on mainstream platforms, the criminal underground responded swiftly with their own uncensored versions, known as Dark LLMs.

Early experiments like WormGPT were rudimentary and short-lived. Today, however, Dark LLMs are more stable, capable, and commercialized. They assist in various cybercriminal activities, including:

  • Fraud and scam content generation for romance, investment, or impersonation scams.
  • Crafting phishing kits, fake websites, and social engineering scripts.
  • Malware and exploit development support, including code snippets and obfuscation.
  • Initial access assistance with vulnerability reconnaissance and exploit chains.

At least three active vendors offer Dark LLMs via self-hosted platforms, some boasting models exceeding 80 billion parameters, with subscriptions ranging from $30 to $200 per month depending on features and API access. Their customer base likely exceeds 1,000 users.

Figure 6. Dark LLMs interface screenshots.

Key trends in the Dark LLM ecosystem include:

  • A preference for self-hosting over public API use, limiting exposure and tracking.
  • Jailbroken models with zero ethical safeguards, providing unrestricted assistance for illicit activities.
  • Use-case-specific fine-tuning on datasets focused on penetration testing, scam linguistics, or malicious scripting.
  • Flexible “as-a-service” monetization mirroring legitimate SaaS models.
  • AI chatbot interfaces in messengers (Telegram or Discord bots).
  • Developer-centric features such as private APIs for embedding AI into malware builders or phishing toolkits.

Dark LLMs are becoming embedded infrastructure for cybercriminals, lowering barriers, streamlining operations, and expanding reach for low-skill actors. Their growing sophistication might not yet demand completely new defensive strategies, but it definitely raises new questions about detecting, tracking, attributing, and disrupting activities involving AI-generated malicious content and LLM-assisted workflows.

4. AI-powered mailers and spam tools: smarter, scalable social engineering

Generative AI integration into traditional spam infrastructure marks a significant shift in phishing, scam, and malware delivery campaigns.
Underground developers actively offer tools embedding genAI functionality into spam and bulk mailers, designed to:

  • Personalize phishing lures at scale using scraped or inputted target data.
  • Bypass content-based security filters by dynamically generating human-like language.
  • Automate message variation, reducing duplication and increasing deliverability.

One major project is advancing toward a fully autonomous spam agent capable of managing end-to-end workflows, including generating personalized messages, managing mailing lists, monitoring open/click rates, and adjusting tactics dynamically.

Figure 7. AI-powered spam mailer interface with ads from the developer describing the AI features

Though still in early stages, such agentic spam tools foreshadow fully automated social engineering operations with minimal human oversight.

Impact on the threat landscape:

  • AI-crafted emails are harder to flag and more convincing.
  • Greater personalization adapts context and style to individual victims.
  • Lower barriers allow attackers without linguistic or technical expertise to launch campaigns.
  • Faster lure generation and distribution accelerates campaign turnover.
  • Enables scalable fraud with less manual effort.

Defenders must prepare for more convincing phishing reaching inboxes at unprecedented volume and speed; users, for far more personalized and harder-to-recognize lures.
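The message-variation features these mailers advertise are aimed at defeating exact-match deduplication on the defender side. The sketch below is a simplified illustration, not a production filter: it shows how fuzzy similarity over word shingles can still group AI-varied lures into a single campaign for triage. The sample texts, shingle size, and threshold are assumptions made for the example.

```python
import re

def shingles(text: str, n: int = 2) -> set[str]:
    """Lower-case word n-grams ('shingles') used for fuzzy comparison."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    """Set overlap: 1.0 for identical shingle sets, 0.0 for disjoint ones."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Two AI-"varied" lures: their exact hashes differ, but the wording overlaps heavily.
lure_a = "Hi Anna, your parcel is held at customs. Pay the 2 EUR fee today to avoid return."
lure_b = "Hello Anna, your parcel is being held at customs. Pay the 2 EUR fee now to avoid return."

similarity = jaccard(shingles(lure_a), shingles(lure_b))
print(f"shingle similarity: {similarity:.2f}")

if similarity > 0.5:      # placeholder threshold
    print("Likely the same campaign template – group for bulk triage")
```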

5. AI-enhanced criminal tooling – integration without autonomy (yet)

AI is increasingly embedded within malicious tool development. Beyond using chatbots for one-off code generation, cybercriminals now integrate AI functionality directly into toolkits, often abusing public chatbot APIs.

These integrations support:

  • Initial access tasks like reconnaissance and vulnerability scanning.
  • Exploitation automation and evasion techniques.
  • Tactical assistance, such as code generation, obfuscation, and social engineering script creation.

Figure 8. Advertisement of a cryptocurrency wallet seed phrase parser (stealer) with integrated AI functionality (used for image recognition)

AI improves usability and lowers technical barriers, but it remains an enhancement layer rather than a transformation of offensive tooling. As agentic AI matures, defenders should anticipate tools capable of real-time adaptation. No cases of fully autonomous “agentized” malware operating independently have been observed in the wild, although there have been reported cases of threat actors embedding AI capabilities (LLM access through public AI API services) into malware for better reconnaissance and command execution.

Where AI hits hard and where it doesn’t (yet)

AI’s impact varies dramatically across cybercrime domains:

  • Fraud benefits most, with generative AI enabling scalable, persuasive lures in impersonation and romance scams. However, these operations still rely heavily on human labor, often involving call centers in regions such as Southeast Asia and Africa where workers may be subject to exploitative conditions.
  • Malware development has gained from AI-assisted code generation and obfuscation, lowering entry barriers. Yet, autonomous malware remains a concept.
  • Phishing has improved via AI personalization and spam automation, but still depends on human execution during critical engagement stages.

How AI adoption affects defenders now

Key defensive challenges include:

  • Malicious campaigns’ speed and scale are significantly increasing – AI allows criminals to launch convincing campaigns faster and at a volume that challenges traditional detection and response cycles.
  • Social engineering is becoming (even) harder to spot – live deepfakes, synthetic voices, and hyper‑personalized lures are eroding the effectiveness of user-dependent verifications and security measures.
  • Threat landscapes are shifting rapidly – AI‑enabled tools in the criminal underground emerge frequently, requiring continuous intelligence collection and analysis to stay current with what cybercriminals are using today and how.
  • Fraud detection faces new evasion tactics – AI enables criminals to mimic legitimate behaviour patterns, making fraud detection thresholds less reliable.
  • Operational security needs real‑time adaptation – Defenders must think about automated anomaly detection, behavioural analytics, and faster intelligence sharing to keep pace.
  • Attribution is getting more complicated – AI-generated tools and content, as well as AI-assisted operations, make threat actor and campaign attribution harder, and the deeper genAI adoption goes into cybercrime, the harder attribution will become.

What may come next in AI-driven cybercrime

We are in AI’s early innings within cybercrime, but the trajectory is clear, pointing toward automation and sophistication. Likely near-future developments include:

  • Semi-autonomous malicious AI agents orchestrating attacks end-to-end.
  • AI-powered adaptive malware evolving tactics in real time.
  • Seamless voice-to-voice impersonation scam capabilities.

AI adoption in cybercrime is shifting the balance – not instantly, but persistently. Criminals leverage AI not as a magic bullet but as a powerful force multiplier, improving human performance and lowering skill requirements. The next frontier is hybrid intelligence combining human creativity with machine efficiency.

Security teams must anticipate a future where attackers operate AI-augmented campaigns at scale, continuously evolving and evading. Staying ahead requires blending technology, threat intelligence, and proactive defense with a new mindset: humans plus AI versus humans plus AI.

Group-IB Threat Intelligence Portal: AI in cybercrime

Group-IB customers can access our Threat Intelligence and Fraud Protection portals for more information about AI in cybercrime and the latest related reports.

Recommendations

Even though it is still in its early stages, the adoption of generative AI by cybercriminals is no longer speculative – it’s already altering the threat landscape. Security leaders should consider the following priorities:

  1. Move beyond single-layered defenses. AI-powered phishing and live deepfakes can bypass detection based on static patterns. Layered defenses that combine behavioural analytics, biometric validation, session analysis, and anomaly detection will be essential. Group-IB Fraud Protection uses explainable AI with behavioral analytics and advanced device monitoring to detect anomalies and sophisticated AI-driven fraud.
  2. Expand visibility into underground ecosystems. The emergence of Dark LLMs and crime-optimized AI models demonstrates how quickly tooling evolves in closed communities. Group-IB Threat Intelligence provides ongoing monitoring of these spaces and early warning of new techniques and services.
  3. Reframe awareness training. Employees can no longer be taught to “look for typos.” Training should emphasize context-based red flags, escalation pathways, and skepticism towards requests that rely heavily on urgency and authority.
  4. Adopt AI defensively. Just as attackers use AI to accelerate campaigns, defenders should apply it to compress detection and response windows and identify abnormal behavior at scale (a minimal sketch follows this list).
  5. Strengthen investigation and response capabilities. Hybrid human-AI attacks will challenge attribution and incident response. Teams need the ability to combine forensic expertise with automated intelligence to understand, contain, and recover from AI-driven threats.
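As one illustration of recommendation 4, the sketch below applies unsupervised anomaly detection (scikit-learn’s IsolationForest) to synthetic login telemetry to surface abnormal sessions in bulk. The feature set, contamination rate, and sample values are illustrative assumptions, not a recommended configuration.

```python
# A minimal sketch of "AI for defenders": unsupervised anomaly detection over
# login telemetry. Features and parameters below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic telemetry: [login_hour, bytes_transferred_mb, failed_logins, new_device_flag]
normal = np.column_stack([
    rng.normal(10, 2, 2000),      # daytime logins
    rng.normal(5, 2, 2000),       # modest data transfer
    rng.poisson(0.2, 2000),       # rare failed logins
    rng.binomial(1, 0.02, 2000),  # known devices almost always
])
suspicious = np.array([[3, 250, 6, 1]])   # 3 a.m., huge transfer, many failures, new device

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# predict() returns -1 for anomalies, 1 for inliers
print(model.predict(suspicious))          # expected: [-1]
print(model.predict(normal[:3]))          # mostly [1 1 1]
```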

Group-IB supports this shift by providing Threat Intelligence, Digital Risk Protection, Fraud Protection, Digital Forensics & Incident Response, and High-Tech Crime Investigations that illuminate the AI-driven underground and help organizations prepare for emerging attacker workflows.