IT Brief US - Technology news for CIOs & IT decision-makers

AI now powers most dangerous cyber threats, warns SANS

Wed, 29th Apr 2026

SANS Institute has warned that artificial intelligence now underpins the most dangerous cyber threats it tracks, outlining its concerns in new commentary on attack trends and workforce pressures.

Researchers at the cybersecurity training body say the distinction between AI-enabled threats and traditional attack methods has largely disappeared. Generative models, automation and data-driven targeting are now routine features of serious incidents rather than experimental add-ons.

Ed Skoudis, president of SANS Institute, said the assessment draws on input from SANS authors, instructors, mentors and students who track emerging attack vectors. It points to a growing reliance by threat actors on AI for reconnaissance, exploitation and post-compromise operations.

"We would be lying if we stood on that stage and pretended there was still a meaningful line between AI-driven attacks and everything else. That line is gone. AI is now embedded in every serious threat we monitor, amplifying impact, accelerating execution and widening the gap between attacker capability and defender readiness. This year's presentation is a wake-up call: AI isn't just a tool anymore. It's the engine driving the next era of cyberattacks," said Skoudis.

SANS analysts have observed attackers using AI to refine phishing lures, generate convincing deepfake content and tune malware more quickly. They have also seen AI models used to probe software and cloud environments for weaknesses at speeds and scales beyond what manual techniques can achieve.

The shift raises concerns about the readiness of corporate and public sector defenders. Security operations teams must now respond to incidents that evolve faster than human analysts can triage alerts and correlate signals.

New data from the SANS 2026 Workforce Report underscores the pressure on organisations trying to protect themselves in this environment. The research points to a growing need for staff who understand both security and AI, as well as regulatory and compliance requirements.

The study also highlights an intensifying battle for talent. Cybersecurity leaders are competing for specialists who can manage AI-driven detection tools, validate model outputs and address privacy and governance risks.

Rob T. Lee, chief AI officer and chief of research at SANS Institute, said the current threat landscape marks a break from previous years, with AI no longer a niche focus confined to advanced persistent threat groups or experimental red teams.

"We've never sounded an alarm like this before. For the first time in history, AI isn't an emerging theme. It's the common denominator across the most dangerous attack techniques we track. Attackers are using AI to move faster than human defenders can think, and the organisations that fail to adapt aren't just at a disadvantage. They're unprepared for the world they're already living in," said Lee.

As AI becomes embedded across phishing, malware, reconnaissance and exploitation, SANS argues organisations can no longer treat artificial intelligence as a future cyber concern. Instead, it now defines the operational reality of modern security, forcing businesses and governments to strengthen workforce capabilities, governance and defensive strategies at pace.