IT Brief US - Technology news for CIOs & IT decision-makers

AI medical devices widen healthcare cyber risk gap

Wed, 29th Apr 2026

RunSafe Security has published its 2026 Medical Device Cybersecurity Index, which warns that AI-enabled medical devices are creating a new layer of cybersecurity risk in healthcare.

The findings are based on a survey of more than 550 healthcare decision-makers in the United States, the United Kingdom and Germany. The study looks at how hospitals and other healthcare organisations are adopting AI-enabled or AI-assisted medical devices while security practices and governance remain unsettled.

According to the report, the spread of AI in medical equipment is expanding the attack surface beyond conventional software flaws. It highlights risks such as model manipulation, adversarial inputs and data integrity issues, which introduce failure modes that existing security frameworks were not designed to address.

The research suggests healthcare organisations are already deploying AI-linked devices even though some acknowledge they do not fully understand or control the risks involved. That points to a gap between the pace of adoption and the maturity of security oversight in environments where clinical systems often rely on ageing infrastructure.

Governance gap

The report describes a pattern similar to earlier waves of technology adoption in healthcare, when cloud systems and connected devices were introduced faster than governance and monitoring standards could develop. It argues that AI systems are now being brought into clinical settings before procurement rules, security reviews and operational controls are fully defined.

Security teams are becoming more involved in AI-related purchasing and assessment decisions. Even so, standard models for evaluating the risks of AI-enabled systems are still evolving, leaving organisations to make judgments without widely adopted frameworks.

Legacy infrastructure adds to the challenge. Many AI-enabled systems are being layered onto environments that are unpatchable or unsupported, creating interconnected risk across clinical workflows and making traditional cyber defences harder to apply.

Defensive tools

Some healthcare organisations are turning to runtime protection and continuous monitoring where patching and older control methods are no longer enough. The report presents these measures as part of a broader shift towards securing systems in use rather than relying only on updates and perimeter protections.

The study comes as healthcare providers face continued scrutiny over cyber resilience, particularly where device security intersects with patient safety. Medical devices occupy a distinct place in hospital technology estates because they often remain in service for long periods and may depend on software or operating environments that cannot be easily changed.

This creates a difficult environment for AI deployment. New software models or AI-assisted functions may be introduced into equipment and workflows that were not originally designed with those risks in mind, while procurement, compliance and clinical teams must assess systems that behave differently from traditional software tools.

The report also suggests the challenge is not limited to a single type of threat. By pointing to model manipulation, adversarial inputs and integrity issues, it frames AI risk as a mix of software security, data trust and operational reliability rather than a narrow extension of existing vulnerability management.

RunSafe argues the implications extend beyond healthcare, as regulated industries are likely to face similar issues as they integrate AI into devices and operational systems. In healthcare, however, the consequences carry particular weight because disruptions can affect clinical workflows and medical decision-making.

Joe Saunders, Chief Executive Officer of RunSafe Security, said the findings raise several questions: why AI is breaking traditional security and governance models in healthcare, which emerging risks are unique to AI-enabled medical devices, how healthcare organisations should rethink security for AI-driven systems, and what this signals for the broader AI adoption curve across regulated industries.