
One in four UK & US firms hit by AI data poisoning attacks

Tue, 23rd Sep 2025

Research has found that one in four organisations in the UK and US has experienced incidents of AI data poisoning in the past year.

The IO State of Information Security Report, which surveyed 3,001 cybersecurity and information security managers across the United Kingdom and United States, highlighted that 26% of organisations have been affected by attacks involving the corruption of training data for artificial intelligence (AI) systems. These attacks enable hackers to introduce hidden backdoors, intentionally degrade system performance or manipulate outputs for personal advantage.
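Mechanically, the simplest form of such poisoning is label flipping: an attacker corrupts a fraction of the labels in a model's training data so the deployed model misbehaves. The toy Python sketch below, a hypothetical illustration using scikit-learn rather than anything taken from the report, shows how even modest poisoning rates degrade a classifier's accuracy on clean test data:

# Toy illustration of training-data poisoning via label flipping.
# Hypothetical sketch using scikit-learn; not drawn from the IO report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(poison_fraction):
    """Flip the labels of a random fraction of training rows, then evaluate."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip labels 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} poisoned -> accuracy {accuracy_after_poisoning(frac):.3f}")

In a real backdoor attack, the corruption would be targeted rather than random, so performance drops only on attacker-chosen inputs while the system appears healthy elsewhere.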

AI data poisoning represents a shift in the cybersecurity landscape, with malicious actors now weaponising AI against organisations. Such attacks have the potential to undermine critical systems such as fraud detection and cyber defence measures, risking operational integrity and exposing businesses and the public to further threats.

The report also identified an increase in deepfake- and cloning-related incidents, with 20% of surveyed organisations reporting such experiences within the last twelve months. Additionally, deepfake impersonation during virtual meetings was cited by 28% of respondents as a growing risk in the year ahead.

According to those surveyed, AI-generated misinformation and disinformation top the list of emerging threats, with 42% of security professionals concerned about their impact on fraud and organisational reputation. Generative AI-driven phishing is another rising concern, mentioned by 38% of respondents. The use of unauthorised AI tools, sometimes referred to as "shadow AI", is also presenting challenges, with 37% of organisations admitting that employees are utilising AI systems without approval or formal guidance.

Shadow AI and security risks

Shadow IT, which encompasses the use of unapproved software or services within an organisation, currently affects 40% of respondents' workplaces. The report highlights that generative AI is intensifying this issue, particularly when AI is deployed without human supervision. Among those facing challenges in information security, 40% cited AI systems autonomously completing tasks without compliance checks as a key concern.
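To make the concern concrete, the hypothetical Python sketch below shows the kind of policy gate that addresses both findings: it blocks unapproved (shadow) AI tools outright and holds sensitive autonomous actions for human sign-off. All tool and action names here are invented for illustration, not taken from the report:

# Hypothetical compliance gate for autonomous AI actions; every tool name
# and policy below is invented for illustration, not from the IO report.
APPROVED_TOOLS = {"corp-llm"}          # tools vetted by the security team
REQUIRES_HUMAN_REVIEW = {"send_email", "modify_records"}

def run_ai_task(tool: str, action: str, payload: str) -> str:
    if tool not in APPROVED_TOOLS:
        return f"BLOCKED: '{tool}' is unapproved (shadow AI)"
    if action in REQUIRES_HUMAN_REVIEW:
        return f"QUEUED: '{action}' awaits human sign-off"
    return f"ALLOWED: '{tool}' may perform '{action}' on {payload!r}"

print(run_ai_task("personal-chatbot", "summarise", "q3-report"))
print(run_ai_task("corp-llm", "send_email", "client-list"))
print(run_ai_task("corp-llm", "summarise", "q3-report"))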

"AI has always been a double-edged sword. While it offers enormous promise, the risks are evolving just as fast as the technology itself. Too many organisations rushed in and are now paying the price. Data poisoning attacks, for example, don't just undermine technical systems; they threaten the integrity of the services we rely on. Add shadow AI to the mix, and it's clear we need stronger governance to protect both businesses and the public."

This was the assessment of Chris Newton-Smith, Chief Executive Officer of IO, who pointed to both the pace of adoption and insufficient oversight as drivers of increased risk.

The report details that 54% of organisations acknowledged deploying AI technology too rapidly, and now have difficulties scaling it back or implementing necessary security controls. Securing AI and machine learning technologies remains an escalating priority: 39% indicated this as a top current challenge, a significant increase from 9% the year prior. Additionally, 52% of those surveyed stated that AI and machine learning were hindering their security efforts.

Steps towards strengthening security

While concerns have grown, adoption of AI and related technologies for security purposes has also increased, with 79% of UK and US organisations now using AI, machine learning, or blockchain in their security frameworks, up from 27% last year. The survey found that 96% intend to invest in generative AI-powered threat detection and defence tools over the next year, 94% are planning to adopt deepfake detection systems, and 95% will focus on AI governance and enforcement of security policies.

The report highlights guidance from the UK's National Cyber Security Centre warning that AI is likely to make cyberattacks more effective over the coming two years. Newton-Smith emphasised the importance of adopting best practice frameworks, such as ISO 42001, to ensure companies can innovate responsibly and bolster resilience:

"The UK's National Cyber Security Centre has already warned that AI will almost certainly make cyberattacks more effective over the next two years, and our research shows businesses need to act now. Many are already strengthening resilience, and by adopting frameworks like ISO 42001, organisations can innovate responsibly, protect customers, recover faster, and clearly communicate their defences if an attack occurs."

According to the report, businesses are prioritising investment in structured AI governance, detection technologies and clear policy enforcement as they seek to address both the new and existing risks associated with rapid AI adoption.
