Like every other technological domain, the cyber threat landscape of 2025 could not escape the gravitational pull of AI. In 2026, however, artificial intelligence is expected to move from buzzword to daily reality for CISOs and security teams. AI is creating new frontlines: it accelerates and sharpens attacks, enhances defensive and response capabilities, and increasingly becomes a target in its own right, from models to autonomous agents connected directly to organizational systems.
Yet beneath the noise, the core principles remain familiar. There are assets, there are perimeters - physical, logical, and operational - and there are identities, permissions, and network movement that must be controlled. If AI is the technological engine, the cyber threat map of 2026 will be shaped just as much by geopolitics, war and terrorism, a digital arms race between nation-states, and the financial incentives that continue to fuel industrial-scale cybercrime.
To chart this landscape, we brought together leading cyber experts from top Israeli and global security companies operating across complementary domains, from identity and cloud security to digital assets, AI security, and industrial infrastructure protection, in an effort to connect the dots into a single picture of the threats and opportunities ahead in 2026.
AI as an accelerator and a defensive backbone
If one word defines nearly every conversation about 2026, it is speed; close behind it comes a shift in the balance of power.
"The integration of artificial intelligence is redefining cyber risk, but the real opportunity lies on the defensive side," says Wendy Whitmore, Head of Intelligence Security at Palo Alto Networks. Attackers are already operating at a new pace, using AI agents that accelerate attacks and blur the line between human and machine. As a result, defenders must respond "at the same speed, with proactive, intelligent, preventive security."
According to Whitmore, this requires "a shift from slow, manual response processes to capabilities that detect, decide, and stop threats in real time."
She describes 2026 as a potential inflection point at which defense may, for the first time, overtake offense, if organizations change their mindset. "No more security incidents handled through committees and ticket queues," she stresses, but rather systems that generate a continuous operational advantage. At the same time, Whitmore warns of a looming trust crisis around data: poisoning that seeps into models and AI applications, compounded by a growing layer of legal liability. In her view, the gap between the pace of AI adoption and the pace of securing it is likely to trigger landmark lawsuits. "This is a paradigm shift," she concludes, "one that forces organizations to rebuild trust around AI if they want to continue innovating safely."
That vision, however, collides with real-world constraints: budgets, talent, and adoption speed.
"If 2025 was the year we fell in love with AI, 2026 will be the year we pay the price for hackers learning to use it faster and more effectively than we did," warns Ryan Knisley, Chief Product Officer at Axonius, a cyber unicorn specializing in asset management and security. While large enterprises form committees to approve new defensive tools, he says, attackers are already using AI to breach systems, steal data, and disrupt infrastructure, unrestrained. Knisley also points to mounting budget pressure, predicting sharp cuts to cyber spending that will force security teams to "do far more with far fewer resources." The conclusion, in his view, is unavoidable: without automation that detects and neutralizes threats, organizations will enter 2026 more exposed precisely as attacks become faster and more industrialized.
The thin line between automation and loss of control
To bring clarity to the role of AI in cybersecurity, Niv Braun, CEO and co-founder of Noma Security, suggests splitting the discussion into two worlds. "On one side, there's the use of AI for attacks. That's not a new threat, just a new tool. On the other side, there's securing AI systems themselves." There, he identifies three layers: the AI supply chain, natural-language risks such as prompt injection and manipulation, and the agents themselves. "An agent is an application that can take actions," he explains. "That dramatically expands the blast radius." An agent authorized to move money may not represent a fundamentally new attack vector, but the potential damage escalates sharply.
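Braun's "blast radius" point is concrete enough to sketch in code. The snippet below is a minimal, hypothetical policy gate, not any vendor's product: the agent can propose any tool call, but calls above an assumed risk threshold (money movement, data deletion) are blocked until a human approves them out of band. All names and the scoring scheme are illustrative.

```python
# Minimal sketch of gating high-impact agent actions. Hypothetical
# names and risk scores; not a real framework's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    tool: str   # e.g. "transfer_funds" (illustrative tool name)
    args: dict
    risk: int   # illustrative per-tool risk score

APPROVAL_THRESHOLD = 7  # assumed policy knob, not a real default

def execute(call: ToolCall,
            run_tool: Callable[[ToolCall], object],
            request_human_approval: Callable[[ToolCall], bool]):
    """Run a proposed tool call only if policy allows it."""
    if call.risk >= APPROVAL_THRESHOLD:
        # High blast radius (money movement, data deletion, etc.):
        # block until a human explicitly approves out of band.
        if not request_human_approval(call):
            raise PermissionError(f"{call.tool} denied by approver")
    return run_tool(call)
```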
Dror Kashti, CEO of Sweet Security, which specializes in cloud security, ties this directly to infrastructure. "The combination of cloud and AI is an enormous force multiplier," he says. "It unlocks productivity, but at the same time dramatically expands the attack surface, especially through AI agents." Kashti expects agents to become tools for attackers as well in 2026, with poisoning emerging as a major risk. "Agent poisoning is extremely dangerous," he warns, "because these agents are connected to so many organizational tools." In other words, no longer a point vulnerability, but an entity deeply embedded in permissions and data flows, capable of multiplying damage.
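Agent poisoning of the kind Kashti describes often enters through the data an agent pulls from its connected tools. One common mitigation, sketched below without assuming any specific agent framework, is to treat all tool output as untrusted: flag instruction-like content and wrap the rest as inert data before it reaches the agent's context. The patterns and tags here are illustrative only.

```python
# Minimal sketch: treat data returned by connected tools as untrusted
# before appending it to an agent's context.
import re

# Hypothetical patterns suggesting injected instructions rather than data.
SUSPICIOUS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"send .* to (http|ftp)s?://", re.I),
]

def sanitize_tool_output(text: str) -> str:
    """Flag likely injections; otherwise wrap the output as inert data."""
    for pattern in SUSPICIOUS:
        if pattern.search(text):
            raise ValueError("possible context poisoning detected")
    # Mark the span so downstream prompts can treat it as data, not commands.
    return f"<untrusted_tool_output>\n{text}\n</untrusted_tool_output>"
```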
Asaf Yaakobi, VP of Cloud at Sela, adds: "In 2025, cloud security had to reinvent itself. We realized that the real challenge is no longer just preventing server breaches, but protecting the most valuable asset: the models and the data that feed them." Over the past year, he notes, attacks have become significantly more complex and AI-driven, forcing organizations to move from manual defense to autonomous, cloud-native security systems. "Cloud security has become an integral part of DevOps and development processes, not a gatekeeper at the end of the pipeline."
"Looking to 2026, the dominant trend will be AI-native security," Yaakobi predicts. "We'll see a shift toward architectures that detect anomalies in real time within cloud data flows and neutralize them before damage occurs. In a world where identity is the new perimeter and the cloud is the primary growth engine, security must be transparent, fast, and smarter than the attacker."
Avichai Natan, Head of AI, Data, and Research at CyberArk, brings the discussion back to fundamentals. "Cyber is still cyber. At the end of the day, the hacker wants access to the organization, through a human weakness or an organizational one." What has changed is the path. The standout trend, he says, is the rise of highly sophisticated fraud powered by personal data and social engineering, enabled by generative AI. "AI is excellent at finding personal information and at generating personalized content, a convincing email, a tailored message, a realistic image, even forged handwriting." The barrier to entry has nearly vanished: what once required expertise is now an off-the-shelf capability.
The incentive, Natan emphasizes, is economic. "AI dramatically lowers the cost of building attack infrastructure," turning even amateur hackers into far more capable adversaries. At the same time, he cautions against security overload. "The ability to defend exists; the question is how to apply it correctly." Here, AI can work in the defender's favor by enabling deeper behavioral analysis and true anomaly detection. The weakest point, in his view, is the world of agents. "Agents are black boxes, which makes them prime targets." The key, he argues, is identity security, for both humans and AI entities, and the ability to detect gaps between granted permissions and actual behavior.
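Natan's last point, detecting the gap between granted permissions and actual behavior, reduces to a simple set comparison once permission grants and action logs share a vocabulary. The sketch below assumes a hypothetical scope-naming scheme; it illustrates the idea, not CyberArk's implementation.

```python
# Minimal sketch: compare what an identity (human or AI) is allowed
# to do with what it actually does, and flag the gap.
def permission_gap(granted: set[str], observed_actions: list[str]) -> dict:
    """Return unused grants (over-privilege) and out-of-scope actions."""
    used = set(observed_actions)
    return {
        "unused_grants": granted - used,   # candidates for revocation
        "out_of_scope": used - granted,    # should never happen; investigate
    }

# Example: an AI agent granted broad scopes but only ever reading tickets.
gap = permission_gap(
    granted={"tickets:read", "tickets:write", "payments:execute"},
    observed_actions=["tickets:read", "tickets:read"],
)
print(gap["unused_grants"])  # {'tickets:write', 'payments:execute'}
```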
Beni Lakonishok, CEO of Zero Networks, which specializes in automated network microsegmentation, urges a calmer perspective. "Contrary to popular belief, I don't think AI fundamentally changes the threats." The only real shift, he argues, is productivity: "AI allows individual, inexperienced hackers and small groups to write malware and build attack tools faster. It raises efficiency across the board, including for attackers, but it doesn't change the basic rules."
Lakonishok also points to the tendency to turn AI into a marketing slogan. "Everyone claims they're doing AI," he says, noting that when he tells customers he offers solutions "without AI bullshit," they often applaud. Even when it comes to agents, his conclusion is blunt: "At the end of the day, an agent is just code." Control movement, control permissions, and limit lateral spread; that's what matters.
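The logic Lakonishok describes can be stated in a few lines. The sketch below is a toy illustration of deny-by-default microsegmentation, not Zero Networks' implementation: only explicitly allow-listed (source segment, destination segment, port) flows pass, so a compromised host, or agent, cannot move laterally outside its lane.

```python
# Minimal sketch of deny-by-default segmentation. Segments and ports
# are illustrative.
ALLOWED_FLOWS = {
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def flow_permitted(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default deny: only explicitly allow-listed flows may pass."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

assert flow_permitted("app", "db", 5432)
assert not flow_permitted("web", "db", 5432)  # lateral movement blocked
```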
When trust becomes a technical variable
The front where trust itself becomes a technical issue is social engineering. Lior Lamesh, CEO of GK8, which develops digital asset security platforms for financial institutions, describes a structural shift: "2026 marks the transition from lone hackers to AI-powered crime factories." In his view, "the next major threat is a breach of human consciousness." Instead of classic phishing, he predicts live video calls with real-time deepfakes. The person on the screen, he explains, "doesn't just look and sound real, it can conduct complex conversations and adapt in real time to persuade someone to perform a sensitive action." The result is industrial-scale fraud that exploits the human weak point precisely when the deception becomes nearly perfect.
"We can no longer rely on what we see or hear," Lamesh concludes. Defense must move to a technological layer that enforces independent verification for every critical action.
A digital arms race and civilian infrastructure under fire
Up to this point, the story appears to revolve around AI, agents, identities, and fraud. But to understand 2026, it's essential to remember that cyber conflict doesn't occur only inside data centers; it unfolds in an increasingly unstable world where states, wars, and terrorism shift battle lines and bring new players onto the field.
Shai Nahum, CEO of Cyght, offers a clear map of nation-state actors in global cyberspace. "The leading players are China, Russia, North Korea, and Iran," he says, recalling how Russia used a supply-chain attack on SolarWinds to reach 18,000 customers worldwide while remaining under the radar for nearly a year. "In Israel, a scenario like that would be a doomsday event," Nahum warns.
Iran, he adds, has been highly active in Israel, particularly since October 7. "The goals are technology theft, infiltration of internal targets, and cyber terror, psychological warfare." Groups such as Handala leak personal documents and addresses of employees in defense industries to encourage physical harm, and publish information about prominent political figures, such as former Prime Minister Naftali Bennett or Tzachi Braverman, a close associate of Prime Minister Benjamin Netanyahu, to destabilize Israeli society. "This means everyone needs to be aware of these risks, not just large companies and government organizations."
Amir Preminger, CTO of Claroty, which focuses on protecting industrial and operational environments, describes "an interesting moment globally" in which many actors seek to claim cyber capabilities. "Cyber has no borders… it takes very little to create impact," he says, emphasizing that even small actions can generate psychological effects and public deterrence. As a result, he expects more small players targeting exposed, internet-connected assets.
His main concern lies at the junction between IT networks and operational systems. "That's the weak point that can expose organizations like hospitals." As the arms race accelerates, Preminger expects more states to develop offensive and espionage capabilities, alongside "a growing number of third-party groups operating under state sponsorship." In wartime, he notes, cyber often targets civilian infrastructure, requiring coordination, public communication, and sometimes legislation, not just technology.
Shaul Philos, CEO of CYBERcom at the EMET Group, which provides managed cyber services, describes 2025 as a turning point when cyber became "a material business and operational risk." "The core problem is lack of visibility," he says, stressing that AI-driven attacks are "silent and intangible until damage is done." There is no way to counter offensive AI without defensive AI, he argues, and without systems that operate at machine speed rather than human pace. Philos points to a shift from point solutions to unified security platforms, and toward managed services such as MDR, driven by talent shortages and the need for SOCs capable of broad, cross-organizational defense.
Uzi Baruch, Managing Partner at Real Numbers, which advises cyber companies from a strategic and financial perspective, expands the lens further. He describes a structural convergence between defense, technology, and civilian infrastructure, reflecting the understanding that cyber has become national infrastructure. Demand for tactical and infrastructural solutions, he says, is now a strategic necessity, creating an advantage for Israeli companies able to innovate quickly and adapt to real-world conditions, while also blurring the line between defense and civilian markets.
Hod Bin Noon, co-founder of the cybersecurity company MIND, highlights another domain where AI will reshape cyber: regulation. "Information security must become unified, flexible, and above all context-aware, able to understand not just what is happening, but why, and in what environment. One of the clearest expressions of this is regulation. In 2026, AI regulation will no longer be a future scenario but a daily reality, even if it arrives in imperfect and inconsistent forms. Regulators won't dictate which technologies organizations must use, but they will demand simple, clear answers: Who made a decision? Based on what data? And which data was exposed or affected along the way?"

Organizations that continue to rely on opaque, siloed security systems, he warns, will quickly discover how difficult, sometimes impossible, it is to provide those answers in real time, especially as regulation itself continues to evolve. "That leads to the bottom line for the coming years: automation is not a luxury, it's a prerequisite," Bin Noon adds. "The boundaries between human and machine, between access and action, and between content and context will continue to blur. Security that doesn't understand risk continuously, in real time, and in context simply won't keep up."
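The regulator's three questions Bin Noon lists, who decided, based on what data, and which data was touched, map naturally onto an audit record. The sketch below uses a hypothetical schema to show how a single append-only log entry could answer all three; any real deployment would differ.

```python
# Minimal sketch of an AI decision audit record. Hypothetical schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    decided_by: str            # human ID or agent/model identifier
    decision: str              # what was decided
    inputs: tuple[str, ...]    # data sources the decision was based on
    touched: tuple[str, ...]   # data exposed or modified along the way
    at: datetime

def log_decision(decided_by, decision, inputs, touched) -> DecisionRecord:
    record = DecisionRecord(decided_by, decision, tuple(inputs),
                            tuple(touched), datetime.now(timezone.utc))
    # In practice this would go to an append-only, tamper-evident store.
    print(record)
    return record
```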
When all the threads are woven together, a simple conclusion emerges: 2026 will not be "the year of AI" in the narrow sense; it will be the year of governance. Whether the challenge is identity and fraud, agents and supply chains, or a digital arms race spilling into civilian infrastructure, everything converges on the same principle: less magic, more control. Organizations that enter 2026 with real visibility, strict permission management, continuous monitoring, and verification mechanisms that reduce reliance on humans will not only mitigate risk; they will be able to adopt AI in a way that serves the organization rather than opening yet another door for attackers.