
Shadow AI use surges among cybersecurity teams, heightening risks


Shadow AI use among cybersecurity professionals is becoming a significant concern as organisations struggle to keep pace with the rapid adoption of generative AI tools.

A survey by Mindgard, an AI testing company, polled more than 500 cybersecurity professionals and found widespread use of AI applications such as ChatGPT and GitHub Copilot without formal oversight or organisational approval.

The survey, which included practitioners ranging from Chief Information Security Officers (CISOs) and Security Operations Centre (SOC) leads to hands-on security staff, revealed that Shadow AI is not limited to business users, but is prevalent within cybersecurity teams themselves. Shadow AI broadly refers to the unauthorised or unmonitored use of AI resources, much like shadow IT but with higher associated risks due to the nature of data being processed by these tools.

Unmonitored AI usage

Findings show that 86% of cybersecurity professionals have adopted AI in their workflows, while nearly a quarter (24%) admit to using personal accounts or browser extensions not sanctioned by their employers. Additionally, 76% suspect their colleagues are using AI tools outside approved channels for tasks such as writing detection rules, generating training materials, or reviewing code.

This shadow use of AI creates specific risks, including exposure of sensitive or confidential business information. Approximately 30% of participants said their teams enter internal documentation and emails into generative AI tools, and a similar proportion admitted to using customer or confidential business data for AI-driven analysis. One in five respondents admitted to inputting sensitive information, and 12% were unaware of precisely what data was being submitted.

Monitoring and oversight procedures are failing to keep up with the scale of AI adoption inside organisations. Only 32% of respondents indicated their company employs dedicated systems to monitor AI use. Another 24% rely on informal methods such as surveys or managerial reviews, while 14% confirmed that their organisations undertake no monitoring at all. This gap, according to the report, leaves entities vulnerable to data leaks, privacy breaches, and regulatory non-compliance.

Lack of clear responsibility

The survey also highlights widespread uncertainty regarding who is responsible for managing AI-related risks within organisations. While 38% of respondents pointed to security teams as taking the lead, 39% stated there is no designated owner for AI risk. Smaller proportions assigned responsibility to data science (17%), executive leadership (16%), and legal or compliance teams (15%).

"AI is already embedded in enterprise workflows, including within cybersecurity, and it's accelerating faster than most organizations can govern it. Shadow AI isn't a future risk. It's happening now, often without leadership awareness, policy controls, or accountability. Gaining visibility is a critical first step, but it's not enough. Organizations need clear ownership, enforced policies, and coordinated governance across security, legal, compliance, and executive teams. Establishing a dedicated AI governance function is not a nice-to-have. It is a requirement for safely scaling AI and realizing its full potential."

This statement from Peter Garraghan, CEO and Co-founder of Mindgard, underlines concerns about the absence of comprehensive governance structures around AI applications within corporate environments.

Broader industry context

The survey included cybersecurity professionals from a diverse mix of organisations, such as enterprises, small businesses, managed security service providers (MSSPs), and government agencies. More than 60% of respondents occupy management positions, offering insight from both strategic and operational perspectives.

The use of AI in security settings is intended to enhance operational efficiency and accelerate responses to threats. However, the report suggests that without centralised controls, policy enforcement, and designated risk ownership, the benefits of AI could be outweighed by increased risks of sensitive data exposure and regulatory non-compliance.

Mindgard's findings point to the urgent need for coordinated approaches across technical, executive, legal, and compliance divisions to build effective AI governance models and align policies with organisational risk tolerance.
