
Agentic AI ransomware is on its way


Agentic AI-enabled ransomware is not here yet, but likely will be very soon. I am talking this year or by 2026.

Here is why. 

What is agentic AI?

First, it helps to define what agentic AI is. To do that, we have to start by defining what Artificial Intelligence (AI) is…and doing that is a bit like trying to nail the proverbial Jello to a wall. Everyone has a different definition, but here is mine:

AI is a system or service that is able to perform tasks that simulate "human intelligence" when learning, reasoning and decision-making.

Contrast that with classic IF-THEN statements that "hard-code" what a program can do. AI Large Language Models (LLMs) "consume" large amounts of data and use algorithms and goals to produce outputs. The outputs can be changed by consuming more or different information. Traditional programs have all the information they will ever "consume" and predefined decisions at the moment they are coded and published. AI can change its decisions and results based on new inputs. AI can make previously undefined decisions.
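
To make that contrast concrete, here is a tiny, purely illustrative Python sketch (mine, not from any real product). The first function's decisions are fixed the moment it is written; the second changes its answers as it "consumes" new examples, loosely mirroring how an AI model's outputs shift with new inputs.

# Illustrative sketch only: hard-coded IF-THEN logic vs. a decision that adapts to new data.

def hardcoded_spam_check(subject: str) -> bool:
    # Classic IF-THEN logic: the rules are fixed when the program ships.
    return "lottery" in subject.lower() or "wire transfer" in subject.lower()

class LearnedSpamCheck:
    # Toy "learning" check: its decision shifts as it consumes new examples.
    def __init__(self) -> None:
        self.suspicious_words: dict[str, int] = {}

    def learn(self, subject: str, was_spam: bool) -> None:
        # Consume a new example and adjust word scores up or down.
        for word in subject.lower().split():
            self.suspicious_words[word] = self.suspicious_words.get(word, 0) + (1 if was_spam else -1)

    def check(self, subject: str) -> bool:
        # The verdict is derived from whatever data has been consumed so far.
        score = sum(self.suspicious_words.get(w, 0) for w in subject.lower().split())
        return score > 0

checker = LearnedSpamCheck()
checker.learn("urgent wire transfer needed", was_spam=True)
print(checker.check("wire transfer request"))  # True: a decision no one hard-coded in advance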

Generative AI is great at creating "synthetic" audio and video of fake or real people saying and doing things they never actually said or did. There are thousands of services that let anyone take someone's picture and six to 60 seconds of their voice and easily create an audio or video of that person saying or doing anything. There are AIs that let anyone create a fake persona, or emulate a real person, capable of holding a realistic, meaningful conversation without the other party easily detecting that the "person" they are interacting with is not human.

Agentic describes software or a service that uses separate, stand-alone but cooperating "modules" (agents) to meet a common goal. There is usually an "orchestrator agent" that directs the other agents to work toward that goal.

A real-world analogy is how most people build houses and buildings. Although one person might be able to do everything necessary to build a house or building by themselves, almost everyone hires a general construction manager (i.e., the orchestrator agent) who hires all the other specialists (e.g., construction, cement, electrical, plumbing, roofing, etc.), who probably perform their individual tasks faster and better, to create a better overall product. Agentic AI is AI that uses individual cooperating agents to accomplish goals better and faster.

[Graphic: a generic diagram of a mock agentic AI]
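
To show the orchestrator/specialist split in code, here is a minimal, hypothetical Python sketch. The agent names and tasks are made up purely for illustration; in a real agentic AI, each agent would typically wrap an LLM or a tool rather than a plain function.

# Illustrative sketch only: an orchestrator delegates sub-tasks to specialist agents
# and chains their results toward one common goal.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    run: Callable[[str], str]  # each specialist handles one kind of sub-task

def research_agent(task: str) -> str:
    return f"research notes for: {task}"

def writer_agent(task: str) -> str:
    return f"draft text based on: {task}"

def reviewer_agent(task: str) -> str:
    return f"review comments on: {task}"

def orchestrator(goal: str, agents: list[Agent]) -> str:
    # The orchestrator decides the order of work and hands each agent
    # the output of the previous step, like a general contractor directing trades.
    result = goal
    for agent in agents:
        result = agent.run(result)
        print(f"[{agent.name}] -> {result}")
    return result

orchestrator(
    "summarise this quarter's incident reports",
    [Agent("researcher", research_agent),
     Agent("writer", writer_agent),
     Agent("reviewer", reviewer_agent)],
)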

A ton of new software and services are being developed using agentic AI. A lot of older software and services are being rewritten/replaced with agentic AI versions. You do not need agentic AI to create better, faster, and more useful features, but agentic AI is certainly helping many companies and coders to do just that. Agentic AI's ability to automatically make different decisions and produce different outcomes based on new inputs gives it an automatic advantage over traditional software and services. So much so that most companies are only coming out with new developments using agentic AI (i.e., AI first!).  

AI-enabled attacks

Last year, a lot of my talks included the following sentence: "AI is coming, but how you are likely to be compromised this year will probably not involve AI." This is no longer true.

AI-enabled attacks are already happening. We already have a ton of social engineering and hacking being accomplished with AI. Seventy-five percent (75%) or more of today's phishing kits now include AI as a feature. AI can already perform social engineering attacks better than a human can.

[Graphic: a mockup of agentic AI malware]

Many ethical hackers and bug hunters are already using AI-enabled "bots" to find vulnerabilities. "Hackbots" are already responsible for a large percentage of newly discovered vulnerabilities and many zero days. The future of hacking is using AI-enabled tools.

Agentic AI ransomware is coming

Agentic AI malware is coming, including agentic AI ransomware. Agentic AI ransomware is a collection of AI bots that perform all the steps needed for a successful ransomware attack, but faster and better. The AI-enabled agents look for potential targets, trying to find any missing patches they can exploit. Or they deploy AI-created deepfakes specialised for the victim, built from what the AI learned by researching the victim across every online source it can reach. Any social media postings, any work-related postings, any data it can find on the victim in public and non-public sources will be scoured to craft the perfect social engineering scam for that particular victim.

[Graphic: a mock-up of agentic AI ransomware]

In the future, it will be easier than ever for anyone to become a scam victim.

The AI-enabled agentic malware will gain initial access, analyse the environment, determine how to maximise malicious hacker profits, and implement the attacks. Not just one attack, but a series of escalating attacks. Perhaps it starts by commandeering computers for crypto mining. Then it exfiltrates passwords it can use for later attacks. Then it copies data it can use for ransom bargaining or for later profits. Finally, if it chooses, it can initiate the encryption routine and demand the ransom. Everything from beginning to end, including the payment and decryption, is handled by the agents.

When it is finished, the AI will analyse how it did versus what it could have done better and update itself. Hackers sit back, spending their already laundered bitcoin. 

How do we know agentic AI malware and ransomware are coming?

Mostly because we have already watched AI being used in hacking for a few years now, and the near-term future is best predicted by past behaviour. The good actors invented AI (in 1955), and it really took off in November 2022 when OpenAI revealed ChatGPT.

Since then, every improvement in AI has been created by the good actors and then tested by cybersecurity defence firms for how the latest AI development or improvement might be used by the bad actors to do bad things. The good actors were the first to test using AI to create more realistic phishing attacks. The good actors were the first to "jailbreak" AI into doing harmful things. The good actors were the first to create realistic deepfake attacks. And now the good actors are the first to use agentic AI for both defence and ethical hacking.

History shows that the bad actors follow about six to 12 months behind what the good actors invent and discover. It takes that long for the bad actors to learn what the good actors developed and then figure out not only how to use it maliciously, but also how to fold it into existing hacker tools and kits so a broad range of hackers can use it.

Today, we have tools that allow anyone to craft realistic deepfake phishing messages. Today, we have real-time deepfake tools that can socially engineer better than humans. Today, we have agentic AI being used by nearly every cyber defence firm. Pretty soon we will start to see the first early versions of malware and ransomware using agentic AI. It is uncontainable. It is coming.

What is the defence?

So, what can you do to prepare and defend?

Well, you can start using agentic AI in your cybersecurity defences. Most cybersecurity firms are using agentic AI to create better and faster tools. Take advantage of it. Do not let the bad actors be the only ones using agentic AI to do new things. Make sure the tools and services you use are not only AI-enabled, but also aware of AI-enabled attacks.

Teach your end users about AI-enabled phishing and AI-enabled deepfakes. Just as we used to have to teach users not to automatically trust any email sent to them, just as we had to warn users about malicious SMS messages, now we have to tell them to have a healthy level of scepticism about any digital audio or video they receive. What looks real may not be real. 

If the message is unexpected and trying to motivate you to do something you have never done before, research the request using known trustworthy methods before acting on it. The world is changing, and AI is going to make existing cyberattacks better than ever before. We have to improve our defences the same way.

The future will be good AI bots (e.g., threat hunters, patchers, defences, etc.) versus bad AI bots (e.g., ransomware, password stealers, etc.), and the best algorithms will win. Make sure you are taking advantage of agentic AI defences when you can.
