Gaining control: The human role in AI-driven automation
A new Pew Research survey of American adults found that half of respondents are more concerned than excited about the growing use of AI in daily life. Many network infrastructure owners I speak with feel the same way about using AI-driven automation to manage networks. The network is the heartbeat of the organization, so it must always be up and running. Worrying whether new AI-based capabilities in their fintech platform, manufacturing environment, or hospital IT system can deliver at the required pace and accuracy keeps them up at night.
Concern is mounting, and it's warranted. Most organizations aren't starting from a blank slate, ready to try something new. Mature businesses have processes and audit cycles in place, with technology as an enabler. Introducing AIOps into the mix to seemingly "solve problems," with little transparency about what the AI is doing and how, creates uncertainty. NetOps teams become fearful of failures that could disrupt business continuity or create compliance challenges.
It's not surprising that the tide is shifting from "We want solutions that use AI" to "Can we turn AI off?"
We need to flip the script and ask, "How can I use AI and still have control?"
Expect Control
Using AI shouldn't mean lowering the bar on control. Humans should still expect control, and NetOps teams shouldn't need to become expert prompt engineers to get it. Sure, the better the prompt, the more reliable the outcome. But when a prompt isn't tightly scoped, AI makes assumptions and runs with them, and the outputs can be invalid and potentially harmful.
AI is currently most effective in roles where a limited number of errors can be tolerated, not in deterministic operations where an error could mean a network outage. To prevent bad outputs, AI-driven systems must be built to keep control in the hands of the people who own the infrastructure, business systems, and processes, and to enable their success.
Imagine an AI implementation that presents the assumptions the AI is making, recommends actions, and asks for permission to proceed. Network administrators contribute their experience to the prompt, establishing guardrails and caveats that encode processes and prevent failures. These might include knowledge of other network tasks that always run during a specific window and could cause performance issues if conducted concurrently, or device upgrades that could trigger an outage and need further testing by the team before being implemented.
A system that learns from operator experience and incorporates it puts humans back in control to validate or change assumptions. When AI is treated as a team of interns guided by trusted senior engineers, it becomes a valuable tool for enhancing network cyber resilience. The AI-driven task does exactly what the human would do, only faster and more consistently, because it never tires.
A Lesson in Trustworthy AI
The struggle to keep up with common vulnerabilities and exposures (CVEs) is well documented, and so are the consequences. The recent breach of critical infrastructure by a state-sponsored hacker group that exploited both recently disclosed vulnerabilities and unpatched older flaws in enterprise network gear is a notable example. Fortunately, vulnerability management is an area where trustworthy AI, enhanced by human-centered control, excels.
Vulnerability data is accessible from multiple sources, including the National Vulnerability Database (NVD) maintained by the National Institute of Standards and Technology (NIST), the Cybersecurity and Infrastructure Security Agency (CISA) Known Exploited Vulnerabilities (KEV) catalog, and device vendor advisories that detail affected versions, impact, workarounds, and available patches or updates.
AI can provide contextual information about CVE severity through a data feed that integrates information from CISA, the NVD, and vendor websites. Rather than visiting multiple sites to stay current on the latest vulnerabilities, practitioners can use AI-driven network automation tools as a single, consistent source of truth for vulnerabilities relevant to the organization's network infrastructure, down to the make, model, and OS version of devices on the network. Because the data comes from reliable, publicly available sources, they can trust the output. AI solves in minutes a complex problem that would otherwise take days or weeks of manual work.
AI can also generate a list of vulnerability remediation tasks prioritized by real and immediate impact on the organization, context that would otherwise take many human hours and careful reasoning across many different data points to assemble. Network engineers can base their mitigation strategy on potential impact without spending time identifying which vulnerabilities apply to their devices, reducing the greatest amount of risk in the shortest timeframe. With additional guardrails in place, they can take the next step and use AI to automate remediation with confidence.
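The filter-then-prioritize step described above can be sketched against CISA's KEV catalog, which is published as a JSON feed. This is a minimal illustration, not a production tool: the sample records and device inventory below are made up, and the field names (cveID, vendorProject, product, knownRansomwareCampaignUse) follow the published KEV schema.

```python
# Sample records shaped like entries in CISA's KEV catalog (illustrative only).
kev_sample = [
    {"cveID": "CVE-2024-0001", "vendorProject": "ExampleNet",
     "product": "EdgeOS", "knownRansomwareCampaignUse": "Known"},
    {"cveID": "CVE-2024-0002", "vendorProject": "OtherVendor",
     "product": "CoreSwitch", "knownRansomwareCampaignUse": "Unknown"},
    {"cveID": "CVE-2023-9999", "vendorProject": "ExampleNet",
     "product": "EdgeOS", "knownRansomwareCampaignUse": "Unknown"},
]

# Device inventory: (vendor, product) pairs actually deployed on this network.
inventory = {("ExampleNet", "EdgeOS")}

def prioritize(kev, inventory):
    """Keep only CVEs affecting deployed gear; rank known-ransomware CVEs first."""
    relevant = [v for v in kev
                if (v["vendorProject"], v["product"]) in inventory]
    # Sort key is False (0) for known ransomware use, so those sort first.
    return sorted(relevant,
                  key=lambda v: v["knownRansomwareCampaignUse"] != "Known")

for vuln in prioritize(kev_sample, inventory):
    print(vuln["cveID"], vuln["knownRansomwareCampaignUse"])
```

In practice the ranking would weigh more signals (CVSS score, exposure, patch availability), but the shape of the win is the same: irrelevant CVEs drop out before a human ever reads the list.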
Move Forward with Confidence
So, back to the question we should be asking: How can I use AI and still have control? The answer is to say no to a free-for-all environment. Instead, raise the bar and demand systems that offer transparency and incorporate checks and balances to establish guardrails. Human-centered control enhances the performance and reliability of outputs, so AI-driven automation can be used responsibly. Network infrastructure owners gain confidence, sleep better, and see real value. Now that's something to get excited about.