IT Brief US - Technology news for CIOs & IT decision-makers

AI at scale outpaces data trust, warns MIND study

Wed, 8th Apr 2026

MIND has released research, produced with the CISO Executive Network, on the role of data trust in AI initiative success.

The findings highlight a gap between the spread of enterprise AI and the controls used to secure and govern the data behind it. The research found that 90% of organisations are running enterprise generative AI at scale, while 65% of Chief Information Security Officers lack confidence in their data security controls.

It also found that only 20% of AI initiatives meet their intended key performance indicators, suggesting many companies have moved AI into day-to-day use before putting in place the data foundations needed to manage risk and support performance.

Survey Findings

The study was based on a survey of 124 CISOs, supplemented by 20 in-depth interviews with members of the same group. It identified a consistent pattern: organisations have adopted AI policies but struggle to enforce them at machine speed.

In many businesses, data estates remain unclassified and ungoverned. Existing security frameworks were designed around human behaviour rather than autonomous systems, leaving security teams under pressure as AI adoption rises.

Nearly two-thirds of CISOs reported low confidence in their ability to prevent unsafe AI data access. At the same time, pressure from the business to increase AI adoption continues to grow.

MIND defines data trust as the degree of confidence that systems, including AI systems, use data safely and appropriately. The report argues that higher levels of trust allow organisations to move faster, while lower trust can slow projects, stall deployments or introduce risks that outweigh the benefits.

This framing places data security closer to operational and commercial decision-making, rather than treating it solely as a compliance or control function. Organisations with stronger data foundations are better placed to advance AI programmes, while weaker foundations increase the risk of stalled initiatives, regulatory exposure and business disruption.

"AI has moved beyond experimentation. It is operating at scale, often without the data foundations required to support it," said Eran Barak, Co-Founder and Chief Executive Officer of MIND. "What we're seeing is a structural gap between speed and control. Data trust closes that gap. It allows organizations to innovate without introducing unseen risk, and to scale AI with confidence rather than hesitation."

CISO Concerns

The research comes as security leaders face growing pressure to support AI deployment across large organisations while managing concerns over data exposure, governance and accountability. The findings suggest many CISOs see AI less as a standalone technology issue and more as a test of whether existing data management and security practices can cope with automation at scale.

The CISO Executive Network said conversations with its members reflected that concern. For senior security executives, the issue is not simply whether AI can be adopted, but whether it can be adopted safely enough to deliver business value without creating new forms of operational and regulatory risk.

"The conversations we're having with our member CISOs are consistent," said Bill Sieglein, Founder and Chief Operating Officer of the CISO Executive Network. "They know AI will drive competitive advantage, but they worry about the risks. Data trust has become one of the important deciding factors between those who move forward safely and those who struggle."

The report presents AI as a stress test for current security arrangements. In this view, companies that already understand where sensitive data sits, how it is classified, and who or what can access it are more likely to avoid disruption as AI use expands.

By contrast, organisations with fragmented or poorly governed data environments may find that AI magnifies existing weaknesses. The research links this to lower confidence among CISOs and to the low proportion of initiatives that meet performance targets.

MIND's broader view is that data security should be treated as an enabler of AI use rather than a barrier. If organisations can understand and act on data risk in real time, they are better placed to scale AI systems without losing control of sensitive information.

The CISO Executive Network is a membership organisation for senior information security executives in the United States, with more than 2,000 members across more than 30 regional chapters. Its involvement in the study gives the findings a direct link to the concerns of security leaders working inside large and mid-sized organisations.