Morphisec adds AI defence to anti-ransomware suite
Morphisec has launched Adaptive AI Defence, a new layer in its anti-ransomware suite designed to identify unauthorised AI tools, block compromised AI agents and stop ransomware attacks linked to autonomous software before they run.
The New York cybersecurity company is addressing a fast-growing concern for large organisations as staff adopt AI coding assistants and general-purpose chatbots with little direct oversight from technology teams. Tools such as GitHub Copilot, Claude Code, Cursor and ChatGPT are increasingly used in day-to-day work, creating what security vendors describe as a new attack surface across laptops, servers and cloud workloads.
The product is designed to find so-called shadow AI on endpoints and workloads, govern the use of approved and unapproved tools, and monitor runtime behaviour for signs that an AI agent has been manipulated or is acting outside normal patterns.
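The discovery step described here can be thought of as an inventory pass: compare what is actually running on an endpoint against a policy of sanctioned tools. The sketch below is purely illustrative; the process names, signature list and policy are hypothetical examples, not Morphisec's actual detection logic.

```python
# Illustrative sketch of shadow-AI discovery on an endpoint: compare
# observed process names against an approved-tools policy. The tool
# names and policy here are hypothetical, not vendor signatures.

APPROVED_AI_TOOLS = {"github-copilot"}  # sanctioned by the IT team
KNOWN_AI_SIGNATURES = {
    "github-copilot", "cursor", "claude-code", "chatgpt-desktop",
}

def classify_processes(running):
    """Split observed process names into approved AI tools and shadow AI."""
    observed_ai = {p for p in running if p in KNOWN_AI_SIGNATURES}
    approved = observed_ai & APPROVED_AI_TOOLS
    shadow = observed_ai - APPROVED_AI_TOOLS
    return sorted(approved), sorted(shadow)

if __name__ == "__main__":
    snapshot = ["bash", "github-copilot", "cursor", "python3"]
    approved, shadow = classify_processes(snapshot)
    print("approved:", approved)  # approved: ['github-copilot']
    print("shadow:", shadow)      # shadow: ['cursor']
```

In practice a real product would draw its inventory from kernel-level telemetry rather than a simple name match, but the governance idea is the same: anything AI-shaped that is not on the allowlist gets surfaced for review.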
That focus reflects a broader shift in cyber risk. As AI systems gain access to source code, files, external application programming interfaces and internal business systems, they can operate with trusted credentials and broad permissions. A malicious or compromised agent may therefore appear legitimate to conventional security products built to detect known indicators of compromise rather than suspicious use of approved tools.
Attack speed
Morphisec argues that AI-assisted and autonomous attacks are shrinking the time available to detect and contain threats. Traditional detection-led systems, it says, were not built to respond when malicious actions unfold in seconds rather than hours.
Its answer is a prevention-led model. Adaptive AI Defence is intended to stop unauthorised AI agents from being installed or executed, detect behavioural drift during runtime and intervene when activity suggests data exfiltration, abnormal API usage or other signs of misuse.
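"Behavioural drift" of the kind described can be illustrated with a toy statistical check: flag an agent whose activity rate moves far outside its own historical baseline. The threshold, data and metric below are assumptions chosen for clarity, not the product's method.

```python
# Illustrative sketch of runtime behavioural-drift detection: flag an
# AI agent whose per-minute API call rate deviates sharply from its
# own baseline. Threshold and data are hypothetical, not vendor logic.
from statistics import mean, stdev

def drift_detected(baseline, current, z_threshold=3.0):
    """Return True if `current` sits more than z_threshold standard
    deviations above the mean of the baseline call rates."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > z_threshold

if __name__ == "__main__":
    baseline_rates = [12, 15, 11, 14, 13, 12, 16]  # calls/minute, normal use
    print(drift_detected(baseline_rates, 14))   # False: within normal range
    print(drift_detected(baseline_rates, 400))  # True: possible exfiltration
```

A genuine system would track many signals at once (files touched, destinations contacted, credential use), but the principle is the one the article describes: intervene when behaviour departs from the established pattern rather than waiting for a known signature.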
The product also integrates with existing endpoint detection and response, or EDR, and extended detection and response, or XDR, systems. That means customers are not being asked to replace existing monitoring tools, but to add a layer intended to close gaps where approved applications or in-memory attacks may evade detection.
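Integrations of this sort typically forward detections to the existing EDR or XDR pipeline as structured events. The JSON schema below is a hypothetical illustration of that hand-off, not a documented Morphisec or EDR API.

```python
# Hypothetical illustration of forwarding a detection to an existing
# EDR/XDR pipeline as a structured JSON event. Field names are
# assumptions for illustration, not a documented vendor schema.
import json
from datetime import datetime, timezone

def build_detection_event(host, agent, reason):
    """Package a blocked-AI-agent detection for downstream tooling."""
    return {
        "source": "ai-defence-layer",   # the added prevention layer
        "host": host,
        "agent": agent,                 # AI tool involved
        "reason": reason,               # why it was flagged
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": "blocked",            # prevention-led: stop first, then report
    }

if __name__ == "__main__":
    event = build_detection_event("laptop-042", "cursor", "unapproved install")
    print(json.dumps(event, indent=2))
```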
Michael Gorelik, chief technology officer at Morphisec, said the company sees AI as changing both the speed and nature of ransomware threats.
"AI has collapsed the attack timeline from hours to seconds," Gorelik said.
He added: "Adaptive AI Defence gives security teams instant clarity into what happened, how to remediate it, and how to harden the endpoint before it happens again. As AI itself becomes a new attack surface, we're going further: detecting compromised AI agents and shadow AI before they become the next ransomware delivery mechanism."
Suite expansion
The launch broadens Morphisec's Anti-Ransomware Assurance Suite, which consists of five connected layers, according to the company. Alongside the new AI-focused feature, the package includes exposure management, infiltration protection, impact protection and recovery tools intended to restore operations after an incident.
The suite is designed for endpoints, virtual machines and cloud environments. In practice, that places the product in a crowded ransomware defence market, where vendors are trying to distinguish themselves through combinations of prevention, detection, backup, recovery and identity controls.
Morphisec's position is that unauthorised AI use inside companies now requires direct visibility and policy control, much like software installations, browser extensions and access rights.
Kobi Katzir, head of product management at Morphisec, said the issue is not only that AI tools can help attackers move faster, but that the tools themselves may become the route into an organisation.
"The threat isn't just that AI makes attacks faster; it's that the AI tools that organizations trust are becoming the attack," Katzir said.
He added: "A compromised AI agent with legitimate credentials and broad file access doesn't look like a threat until it's too late. Adaptive AI Defence closes that gap, preemptively and permanently."
Market backdrop
The announcement comes as corporate security teams face pressure to set rules for the use of generative AI products by employees, contractors and software developers. In many organisations, adoption has moved faster than internal policy, leaving gaps in visibility over which tools are installed, what data they can access and which external services they connect to.
That has prompted security suppliers to frame AI not only as a productivity tool but also as a governance issue, particularly where agents can act on behalf of users. The concern is that if an attacker can hijack a trusted agent, they may be able to move through systems, access files and trigger destructive actions without tripping conventional alarms.
Ron Reinfeld, chief executive officer at Morphisec, said the company believes that model requires a different approach from established security workflows.
"Detection is outdated. Prevention is peace of mind," Reinfeld said.
He added: "With Adaptive AI Defence, we're solving the next great cybersecurity challenge: AI-driven ransomware and autonomous threats that evolve faster than humans can respond. Morphisec's approach stops them before they execute, providing organizations with true cyber resilience in the AI era."
Adaptive AI Defence is now available as part of Morphisec's Anti-Ransomware Assurance Suite.