Introduction
The speed, nature, and intent of cybercrime are evolving faster than defenders can keep up with. The shift toward AI-assisted attacks was widely anticipated, but its extent has been underestimated. The cybersecurity landscape is becoming hyperactive: AI, evolving adversary ambitions, geopolitical shifts, and changing business dynamics all combine to drive this acceleration.
At Group-IB, our cybersecurity experts, led by CEO Dmitry Volkov, make strategic cyber forecasts based on annual proprietary research, data analytics, insights from industry-leading events, and collaboration with industry veterans. In the cybersecurity forecast for 2026 and beyond, Dmitry presents a vision of the future, one dictated by autonomous attacks, AI-powered malware strains, agentic extortion, AiTM attacks, and AI-driven exploitation of crypto and stablecoin security gaps, among other threats.
Leaders will not only have to adapt to this new reality but also succeed within it. This is where the need for collaborative and consistent security measures across the industry will be reinforced.
Want to know the key cyber threats shaping the year ahead and beyond? Get the insights directly from Group-IB’s CEO, Dmitry Volkov, and stay defense-ready in the fight against next-generation cybercrime.
1. The silent spread: AI-driven worm epidemic
Malware infections, over the years, have taken new and persistent forms, forcing defenders to continuously adapt their security systems. Until recently, however, most malware relied on manual execution at key stages, and that reality is changing.
With the integration of AI, a concerning category of self-propagating malware is fast emerging. These future variants may emulate worm-like behavior, spreading on their own and potentially turning every compromised device into an infection spreader.
Traditional self-propagating malware has existed for a while. Incidents like WannaCry (which exploited a Windows vulnerability and caused a large-scale outbreak within hours), NotPetya (which doubled as ransomware and a cyberwarfare tool), and Mirai (which exploited vulnerable IoT devices to launch DDoS attacks) caused billions of dollars in damage within days because threat actors were able to automate the steps needed for rapid propagation.
In 2026, this shift for the worse looks all but certain. With AI's power, these malware strains will spread faster, adapt to select targets, exploit specific weaknesses, and evade detection more effectively. Autonomous AI agents will increasingly be capable of managing the entire kill chain: vulnerability discovery, exploitation, lateral movement, and orchestration at scale. Threat actors designing AI-driven self-propagating malware could trigger the first truly AI-driven worm epidemic.
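For defenders, one early signal of worm-like propagation is fan-out: a single host suddenly contacting many internal peers on administration ports. The sketch below is a minimal, hypothetical illustration of that idea; the flow-record format, port list, and threshold are assumptions for the example, not a Group-IB detection rule.

```python
from collections import defaultdict

# Ports commonly abused for lateral movement (illustrative list)
ADMIN_PORTS = {135, 139, 445, 3389, 5985}
FANOUT_THRESHOLD = 50  # distinct destinations per window; tune to your environment


def fanout_alerts(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) seen within one time window."""
    peers = defaultdict(set)
    for src, dst, port in flows:
        if port in ADMIN_PORTS:
            peers[src].add(dst)
    # Hosts reaching an unusually large number of peers look worm-like
    return [src for src, dsts in peers.items() if len(dsts) >= FANOUT_THRESHOLD]


# Example: one host sweeping SMB across a /24 stands out immediately
flows = [("10.0.0.5", f"10.0.1.{i}", 445) for i in range(80)]
print(fanout_alerts(flows))  # ['10.0.0.5']
```

The point is not the specific rule but the speed requirement: if propagation is automated, detection of the spreading pattern has to be automated too.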
2. Agentic extortion: A new form of ransomware?
Ransomware is not just a technological means of disruption but a psychological one. We have seen it move beyond mass exfiltration to targeted single, double, and even triple extortion, where pressure extends to a victim's customers and partners. Ransomware-as-a-Service (RaaS) sustains a whole dark ecosystem of tools, developers, and affiliates, all working together as a structured business model aimed at maximizing impact while minimizing the time and effort needed for disruption.
In 2026 and beyond, AI-led innovation will continue to change how we look at ransomware. Ransomware groups will gain an additional boost as they begin adopting AI agents to accelerate their attacks once they gain a foothold in a victim's network. These AI agents will likely become part of RaaS offerings, giving even low-skilled affiliates access to advanced automated capabilities.
Even today, we see significant investment in automation: rapid encryption of servers and virtual environments, automatic destruction of backups, streamlined lateral movement, and the disabling of security solutions like Endpoint Detection and Response (EDR). AI-driven agents will enhance this further by allowing ransomware operators to scale their operations and increase the speed and efficiency of attacks, reducing the time defenders have to react.
3. Adversary Artificial Intelligence-In-The-Middle attacks
We’re all operating in a digital world where identity authentication is the foundational means of security. 2FA, MFA, biometrics, and passwordless systems are all designed to validate the right user behind their online activity. But adversaries are actively lurking and hijacking identities to cause and extend damage; one way of doing it is through Adversary-in-the-Middle (AiTM) attacks.
Adversary-in-the-Middle (AiTM) frameworks are becoming popular among cybercriminals. They don’t just steal user information to escalate attacks; they also exploit the continued verified access users hold on any device, application, or platform. Today, these AiTM attacks still require significant manual effort to manage compromised sessions, maintain persistence, and bypass authentication. In 2026, attackers will embed AI into these frameworks to automate session hijacking and credential harvesting at scale, making AI-managed AiTM operations more adaptive than current defenses and rendering many verification systems ineffective.
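To see why AiTM defeats MFA, note that the attacker relays the victim's real login and then reuses the resulting session token, which stays valid after authentication. One defensive angle is to flag a session that reappears from a different network or device than the one that completed MFA. The sketch below is a minimal, hypothetical illustration; the field names and log format are assumptions, not a description of any specific product.

```python
# Track where each session was first established (e.g., at MFA completion),
# then alert when the same session ID shows up from a different ASN or device.
session_origin = {}  # session_id -> (asn, device_fingerprint)
alerts = []


def observe(event):
    """event: dict with session_id, asn, device_fp, taken from web/auth logs."""
    sid = event["session_id"]
    origin = session_origin.setdefault(sid, (event["asn"], event["device_fp"]))
    if (event["asn"], event["device_fp"]) != origin:
        alerts.append({"session_id": sid, "reason": "session reused from new ASN/device"})


# Example: a session established from the victim's ISP, then replayed via a proxy host
observe({"session_id": "abc", "asn": "AS1234", "device_fp": "fp-1"})
observe({"session_id": "abc", "asn": "AS9999", "device_fp": "fp-2"})
print(alerts)
```

In practice this signal would be combined with token binding, short session lifetimes, and phishing-resistant authenticators rather than used on its own.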
4. Crypto and stablecoins: A billion-dollar business opportunity or vulnerability?
Call it financial modernization or a survival tactic: as more traditional banks embrace crypto rails and stablecoins to meet the demand for faster (value moving on-chain between institutions or customers almost instantly), uninterrupted, transparent, and reliable transactions, certain risks piggyback on this innovation.
Trillions of dollars move through fiat- and crypto-based laundering activities, much of it undetected within global banking infrastructure. And the numbers will, unequivocally, continue to climb.
New and ingenious fraud schemes will be a downstream consequence of banks tokenizing assets and embracing crypto ecosystems. This will push fraudsters to invest even more in fraudulent activity and infrastructure, especially for automation, layering, and obfuscation. Abuse of DeFi protocols, smart-contract exploit kits, and AI bots that launder money and make fraud harder to trace will become more common.
Apart from the identity and authentication fraud that already persists in the financial vertical, stablecoins and crypto will increasingly be used to power cybercrime economies in the coming years.
5. The API wild west
Modern businesses are breaking boundaries to unlock new levels of growth and efficiency. Key drivers of this “borderless” acceleration are cloud and API ecosystems. APIs are built for automation by design, and their machine-controlled interfaces make it easier than ever to scale and orchestrate operations.
As these systems become machine-managed, efficiency is one by-product, but AI-driven threats slip in alongside it, especially in cloud infrastructures.
In 2026 and beyond, AI-driven attacks will target the automated layer of such cloud environments. How? Because the cloud is code-controlled (permissions, storage, network settings, policies, and more are defined through APIs), its configuration is machine-readable, meaning AI can understand and even modify it through those same APIs.
Attackers are likely to exploit the automation logic to cause large-scale disruption, expose control surfaces, or tamper with configurations.
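As a simple illustration of how machine-readable that control plane is, the read-only sketch below (assuming AWS, the boto3 SDK, and suitable credentials) lists security groups open to the internet. The same API surface an attacker's automation could script is equally available to defenders for continuous auditing.

```python
import boto3

# Enumerate security groups and flag rules exposed to the whole internet.
# Read-only calls; requires ec2:DescribeSecurityGroups permissions.
ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']} exposes port {rule.get('FromPort', 'all')} to the internet")
```

A loop like this is trivial to run continuously, which is exactly why configuration drift in code-controlled environments is both easy to introduce and easy to audit.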
APIs will give both sides, attackers and defenders, the ability to scale operations limitlessly in the cloud. It all depends on who has the computing power and resource bandwidth to leverage it and tip the scales in their favour.
6. Rise in phone scams: Your fear is the power they have over you
Scammers don’t just rely on technology to manipulate users; they wear them down psychologically. The common assumption among digital users is that when “someone” asks them to perform a risky action, they won’t give in. Yet, in high-pressure situations, fear, urgency, or authority overrides logic.
These scams are only going to get more convincing, more intrusive, and more coercive, and awareness alone often isn’t enough to resist them.
In some countries, fraudsters no longer bother asking for card details, OTPs, or requesting small transactions, because these tactics have become less effective. Instead, their manipulation skills have evolved to the point where they can convince victims to take out loans or even sell valuable property such as cars, apartments, or houses. Unfortunately, this trend is likely to become more widespread over time, as simple, low-effort fraud becomes less profitable for criminals and they shift toward high-pressure, psychologically sophisticated schemes.
7. Secure-by-sovereignty, exposed-by-design
As regions embrace sovereignty, they’re adopting data localization measures (storing, sharing, and processing data within regional borders). The regionalization of data will inadvertently slow down global collaboration and threat intelligence sharing against wide-scale, coordinated attack schemes. Attackers operate globally, while defenders’ detection and response remain constrained by regional visibility.
So, how do businesses and regions understand the global scale and intent of modern attack schemes when visibility is local? Group-IB, long aware of this challenge, has established its presence in fighting cybercrime through a neural network of Digital Crime Resistance Centers (DCRCs), enabling regions to thwart local attacks using the global knowledge base of active and emerging threats, TTPs, threat actor profiles, and underground operations.
Group-IB’s global defense model does not involve transferring sensitive customer data; instead, it leverages threat indicators to enrich intelligence and turn it into actionable insights against adversaries, all while respecting data localization and privacy laws.
8. Invisible backdoors in AI-assisted code
AI is transforming coding workflows. As code generation improves, so does the trust developers place in these systems, leading many teams to forgo rigorous review and allowing AI-generated code to reach production environments more quickly.
This over-reliance has also heightened the risk of supply-chain attacks, where adversaries insert hard-to-detect backdoors into legitimate software and popular libraries used by developers. With the rise of AI coding tools, nation-state actors may attempt to influence or manipulate AI code-writing assistants to embed backdoors and vulnerabilities at scale.
How do adversaries view this combination of widespread adoption and reduced scrutiny? As a scalable opportunity to compromise systems and development pipelines, and one they are unlikely to ignore.
2026 trends on the defensive side…
9. Unified SOC against AI-driven cybercrime
Security Operations Centers (SOCs) are also investing in AI to detect and respond to AI-driven attacks at competitive speed. This shift means that internal collaboration to close security gaps must become agile, almost real-time as well.
Traditional SOCs operate on disparate tools that generate massive volumes of data, each viewed through a team’s own slice of the responsibility hierarchy, which often creates response silos. This must now shift toward collaborative response built on continuous visibility, intelligence sharing, and coordinated action.
Security, IT, fraud, and risk teams will need to share threat intelligence in real time across both cyber and fraud domains to keep pace with rapidly evolving, automated threats. The concept of collaborative SOCs is slowly evolving into Cyber Fraud Fusion Centers. These hybrid, cross-functional teams considerably improve risk management through shared data, dashboards for connected visibility and detection, centralized automated models that normalize and correlate data, and real-time feedback loops that let teams act instantly and harden defenses across all domains at the first signs of a threat or infection.
In this way, security within these SOCs will be redefined by new keywords: continuous, real-time, and collaborative. These operating models and inter-team functions may be the only strong response to emerging AI threats, which operate in minutes, not hours or days.
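As a toy illustration of the correlation step (the event fields are hypothetical and this is not a description of any particular fusion platform), a cyber alert and a fraud signal can be joined on a shared account identifier so both teams work a single incident rather than two disconnected tickets.

```python
# Minimal sketch: fuse cyber-side and fraud-side events on a shared account ID.
cyber_alerts = [
    {"account_id": "u-1001", "type": "credentials_phished", "ts": "2026-01-10T09:02Z"},
]
fraud_signals = [
    {"account_id": "u-1001", "type": "anomalous_transfer", "amount": 18000, "ts": "2026-01-10T09:40Z"},
    {"account_id": "u-2002", "type": "anomalous_transfer", "amount": 90, "ts": "2026-01-10T10:00Z"},
]

by_account = {a["account_id"]: a for a in cyber_alerts}
fused = [
    {"account_id": f["account_id"], "cyber": by_account[f["account_id"]], "fraud": f}
    for f in fraud_signals
    if f["account_id"] in by_account
]
print(fused)  # one fused incident for u-1001: phished credentials followed by a large transfer
```

Real fusion centers normalize many more entities (devices, sessions, payment instruments) and stream the joins continuously, but the principle is the same: shared identifiers turn two partial views into one attack story.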
10. XAI: Bringing accountability to the emerging era of AI autonomy
We’re officially entering an era where AI isn’t just an assistant; we’re giving it the power to decide and act. AI systems will increasingly make autonomous security decisions, but without oversight this becomes detrimental, leading to bias, misclassifications, privacy concerns, and ethical failures.
Therefore, Explainable AI (XAI) will move from optional to non-negotiable for organizations planning serious security AI deployments, not as an afterthought once systems are built, but applied across:
- The data it’s trained on (to prevent poisoning or manipulation)
- Its decision logic (to prevent bias or adversarial exploitation)
- The output validation process (so analysts can see why the model acted the way it did)
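As a minimal illustration of output-side explainability (a toy model with made-up features, not a recommended production approach), an interpretable classifier such as a decision tree can expose the exact thresholds behind each verdict so an analyst can audit why an event was blocked.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features for a login event: [failed_logins_last_hour, new_device, geo_velocity_kmh]
feature_names = ["failed_logins_last_hour", "new_device", "geo_velocity_kmh"]
X = np.array([[0, 0, 5], [8, 1, 900], [1, 0, 20], [12, 1, 1500]])
y = np.array([0, 1, 0, 1])  # 0 = allow, 1 = block

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

event = np.array([[9, 1, 1100]])
decision = clf.predict(event)[0]

# Walk the decision path so an analyst can see which thresholds drove the verdict
node_indicator = clf.decision_path(event)
for node_id in node_indicator.indices:
    feat = clf.tree_.feature[node_id]
    if feat >= 0:  # negative values mark leaf nodes
        print(f"{feature_names[feat]} = {event[0, feat]} "
              f"(threshold {clf.tree_.threshold[node_id]:.1f})")
print("decision:", "block" if decision == 1 else "allow")
```

Complex models need heavier machinery (surrogate models, attribution methods), but the bar is the same: every automated verdict should come with a trace a human can check.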
And here’s the ground reality: despite businesses being eager to onboard AI capabilities, most are not ready to trust AI fully.
The black-box problem, additional vulnerabilities, lack of transparency, compliance risks, and more have made AI’s rapid adoption a major risk. As a result, organizations may begin demanding that vendors disable AI-driven features unless they are explainable, auditable, and accountable.
This shifts the core security question from “Where are you using AI?” to “What are you allowing AI to decide, and can you explain it?”
Explainability will determine whether your systems run on ambiguous, manipulated logic and bias, or are regulated and transparent. Algorithmic decisions need to be accompanied by explainability, or they become another risk enabler instead of a defender.
11. The cyber-fraud divide continues to blur
Security in silos is now proving to be counterproductive. The approach cannot keep up with adversaries’ shifting objectives: from single-outcome attacks to multi-risk, multi-vector operations.
Today, cyber techniques are used to access systems through compromised identities, data, devices, accounts, and infrastructure, which are then monetized by fraud. Visibility across the full criminal kill chain is essential — which is why the boundary between “cybersecurity” and “antifraud” will continue to blur.
The convergence of cybersecurity and fraud prevention reshapes how organizations evaluate and respond to risk: from point-in-time inspections and rule-based controls to real-time detection that combines behavioral monitoring, anomaly detection, and historical analytics into a multi-layered defense.
This transformation is leading to the shift toward Cyber-Fraud Fusion: a converged vision that helps your security teams, CISOs, and leadership see not just isolated threats within the network but complete attack schemes, paths, and the full range of techniques used by modern cybercriminals.
If you want to learn about cyber-fraud fusion as an operating model and how it can protect your business from end-to-end attack chains, connect with Group-IB experts and explore our Fusion Center.
These risks aren’t distant; they’re already here: Are you ready?
As businesses grow more digital, automated, and decentralized, attack surfaces are expanding and becoming increasingly complex. We’re now seeing adversaries exploit this expansion to converge, accelerate, and amplify core threats, and our human-paced, reactive security stance is no longer sufficient. It’s time to leave the past approach behind and embrace the future, a future defined by fusion, predictive prevention, and tailored intelligence and defenses.
Build a strong response against upcoming autonomous, multi-vectored threats with Group-IB.
- Shift towards fusion by redefining your strategy into a Cyber-Fraud Fusion model for real-time and collective visibility, detection, and action against threats.
- In the era of AI vs. AI, make sure your side can defend against current as well as emerging tactics. With Group-IB AI-driven technologies, embedded conveniently into a unified risk platform, constantly upgrade your intelligence and security ecosystem against external and internal infrastructure weaknesses, brand identity threats, and adversary TTPs.
- Group-IB’s GLOCAL approach (global presence, local expertise) gives you access to the industry’s largest and most current Threat Intelligence (TI) base, tailored in real time to your local cyber landscape. Our global neural network of Digital Crime Resistance Centers (DCRCs) constantly enriches our defense systems to hunt threats across regions and translate them into indicators specific to your business, all while respecting data localization and privacy laws.
- Receive actionable insights and support from our local cyber defenders in every major region, ensuring accuracy and nimbleness in every threat assessment and response plan.
- Group-IB’s exclusive partnerships with law enforcement agencies worldwide help turn strategic threat intelligence into collective defense against the criminal organizations targeting regions, industries, and your business.
For more information, reach out to our experts to strengthen your defenses for what’s coming. Have a cyber-safe and secure 2026!