AI reshapes data privacy as firms shift to real-time defence
Cybersecurity and data infrastructure leaders are warning that traditional, compliance-led approaches to privacy are breaking down under the strain of rapid cloud adoption and large-scale deployment of artificial intelligence, as organisations reassess their data practices during Data Privacy Week.
They point to expanding attack surfaces, poor visibility into sensitive information and fragmented access controls as key concerns for security and privacy teams in 2026.
Moving data
As organisations push more workloads into cloud platforms and integrate software-as-a-service and AI tools, executives say the problem has shifted from knowing where sensitive data sits to managing how it is used in real time.
Yair Cohen, Co-Founder and VP Product at Sentra, said many organisations still treat privacy as a point-in-time exercise even as data flows become more dynamic.
"Data Privacy Day is a good reminder that finding sensitive data is only the first step. In modern environments, data is constantly moving across cloud platforms, SaaS applications and AI workflows. Privacy breaks down not because organizations don't care, but because access decisions are often disconnected from how data is actually being used.
"What matters now is governing access in real time. Organisations need to understand who or what can touch sensitive data, whether that access is still appropriate, and how it changes as systems evolve and data moves. As AI and automation become part of everyday operations, privacy cannot be enforced once a year during an audit. It has to be maintained continuously.
"The organizations that succeed will treat data privacy as an ongoing responsibility, built into how data is accessed and used and whether the security posture of that data is continually maintained, not as a compliance checkbox that gets revisited after something goes wrong," said Cohen.
Security teams are increasingly embedding monitoring and access governance into daily operations, moving away from annual audits and certification cycles that cannot keep pace with constantly shifting data flows.
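To illustrate the kind of continuous check this implies, the sketch below flags access grants to restricted data that have gone unused beyond an idle threshold, the sort of job a governance pipeline might run on every sync of its access inventory rather than once a year. The data model and 90-day threshold are illustrative assumptions, not a description of any vendor's product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class AccessGrant:
    principal: str            # user or service identity
    dataset: str
    sensitivity: str          # e.g. "public", "internal", "restricted"
    granted_at: datetime
    last_used: Optional[datetime]  # None if the grant has never been exercised

def stale_grants(grants, now, max_idle_days=90):
    """Flag grants to restricted data that have gone unused.

    A continuous-governance job would run this each time the access
    inventory refreshes, surfacing access that is no longer appropriate
    instead of waiting for an annual audit to catch it.
    """
    flagged = []
    for g in grants:
        if g.sensitivity != "restricted":
            continue
        # Fall back to the grant date when the access has never been used.
        last_activity = g.last_used or g.granted_at
        if now - last_activity > timedelta(days=max_idle_days):
            flagged.append(g)
    return flagged
```

In practice the inventory would come from cloud IAM and SaaS admin APIs, and flagged grants would feed a revocation or re-approval workflow rather than a simple list.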
Operational discipline
In highly regulated sectors, including defence supply chains, practitioners say privacy failures typically result from a series of accumulated weaknesses rather than a single misstep.
Cyrus Robinson, VP of Security Operations at C3 Integrated Solutions, said many incidents start with basic gaps in data hygiene and visibility.
"In highly regulated sectors like the defense industrial base, data privacy issues usually arise when organizations lose track of where sensitive data resides, how it moves, and who has access to it. In our incident response and security operations work, problems rarely come from a single breakdown. More often, they stem from familiar gaps such as misclassified data, overly broad access, or environments where visibility hasn't kept pace with cloud adoption and third‐party integrations.
"Data Privacy Week highlights that protecting sensitive information requires consistent, day‐to‐day operational discipline. Attackers are increasingly leaning on MFA bypass techniques, credential theft, and zero-day vulnerabilities in edge systems to get to the data organizations assume is protected.
"The teams that are most effective treat privacy as an ongoing, ever-evolving set of operational responsibilities. They build alignment around how data is handled, which controls, monitoring, and response approaches fit their environment, and how to maintain a unified picture across security and compliance teams. When everyone operates from a shared understanding, organizations are far better positioned to reduce exposure and act decisively when an incident occurs," said Robinson.
Specialists say this alignment across compliance, security operations and IT functions is becoming critical as regulators in multiple jurisdictions increase scrutiny of how businesses safeguard customer and employee information.
AI attack surface
The rapid deployment of generative AI and machine learning across business units is also reshaping privacy risk models. Security leaders are focusing on data exposure through model training pipelines, embeddings and AI-driven automation.
Dr. Srinivas Mukkamala, CEO at Securin, said organisations often underestimate how different layers of an AI system combine to create new paths to sensitive information.
"AI doesn't leak data by accident; it leaks data through exposure. Every training pipeline that ingests sensitive records, every embedding that preserves latent meaning, every API that over-trusts prompts and every autonomous agent that chains actions together expands the AI attack surface. Adversaries don't need a single catastrophic flaw. They exploit weakness chaining - a permissive data source, an overexposed model interface, a misconfigured access control - and suddenly sensitive data can be inferred, exfiltrated or reconstructed without tripping traditional controls.
"On Data Privacy Day, the message is straightforward. If you don't understand your AI attack surface - and how CVEs, CWEs, misconfigurations and design assumptions combine across it - you don't control your data. Privacy survives only when AI systems are defended the way attackers see them: end to end, continuously validated and tested against real exploit paths rather than theoretical assurances," said Mukkamala.
Security teams are beginning to apply familiar concepts from vulnerability management and application security testing to AI pipelines. They are mapping data flows through training, inference and integration layers and testing for potential data leakage routes.
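A minimal version of that kind of check is a scan of records bound for a training pipeline, flagging anything that should be redacted or excluded before it reaches a model. The two regex patterns below are deliberately simplistic, hypothetical stand-ins; production classifiers rely on much richer detection (named-entity models, checksum validation, contextual rules).

```python
import re

# Illustrative patterns only; real deployments need far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_training_records(records):
    """Return (record_index, pii_type) pairs for records containing
    apparent sensitive data, so they can be redacted or dropped before
    ingestion into a training pipeline."""
    findings = []
    for i, text in enumerate(records):
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append((i, label))
    return findings
```

The same gate can sit in front of embedding jobs and retrieval indexes, which is where the "latent meaning" leakage Mukkamala describes tends to originate.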
Offensive security
Some organisations are turning to offensive security methods and crowdsourced testing in response to more complex threats and the speed of AI-enabled attacks.
Bertijn Eldering, Associate Sales Engineer at HackerOne, said security leaders are expanding their toolkits as criminals and researchers alike incorporate AI.
"Data is one of the most valuable - and targeted - assets in today's digital economy. Yet many organizations still struggle to protect it effectively. As cyber threats grow in complexity, especially with AI in the mix, we're seeing a clear shift: security leaders are embracing offensive security strategies like bug bounty and AI Red Teaming engagements to stay ahead of risk.
"AI is both a challenge and a catalyst. As organizations integrate AI into products and operations, their attack surfaces expand. At the same time, AI is enhancing the precision and speed of security researchers. In fact, 65% of researchers are already using AI in their workflows, helping uncover threats traditional tools might miss. At the same time, we see that these tools will empower bad actors to up their game as well. Security is becoming a race where you need to do everything you can to stay ahead.
"That's the power of combining human ingenuity with AI capabilities - together, they create a more resilient, proactive defense. When paired with a crowdsourced approach, this strategy doesn't just respond to threats; it continuously surfaces the exposures that matter most, helping you stay ahead in the race," said Eldering.
Industry practitioners expect privacy and security programmes to place greater emphasis on continuous validation, attack simulation and closer scrutiny of third-party and AI-related data flows over the coming year.