AI adoption in healthcare surges but exposes critical risks
The mandatory rollout of ChatGPT across all divisions of the US Department of Health and Human Services has highlighted the accelerating adoption of artificial intelligence in healthcare and pharmaceuticals, while recent research suggests this surge is happening without adequate safeguards.
According to an S&P 500 study by Cybernews, the healthcare and pharmaceutical sectors are rapidly embracing AI to enhance diagnostics and drug discovery. Statista data underscores this trend: the AI healthcare market was valued at USD 11 billion in 2021 and is projected to reach USD 187 billion by 2030. This shift points towards significant transformations in the operations of medical providers, hospitals, pharmaceutical and biotechnology companies, and the broader healthcare industry.
Sector vulnerabilities
Despite the promise of efficiency and new discoveries, the speed of AI adoption is causing concern among experts. While 83% of doctors believe AI will ultimately be positive for healthcare, 70% have serious reservations about relying on AI for diagnosis, fearing that mistakes could have large-scale consequences. An error within a single medical algorithm can affect thousands of patients at once, potentially escalating a routine clinical mistake into a widespread healthcare issue.
Žilvinas Girėnas, Head of Product at nexos.ai, explains that this risk of systemic failure is particularly high due to data bias and the 'black box' design of many AI models. He states:
"If an algorithm is trained on data that does not represent certain populations well, it won't just be wrong - it will also reinforce health disparities. Many AI models act as 'black boxes.' This means they cannot show clinicians which specific data points, such as a certain lab result or a slight shadow on an X-ray, influenced their conclusion. This puts doctors in a difficult situation. They must either trust an output they cannot verify or miss a potentially life-saving insight."
Exploring the risks
The rapid uptake of AI has drawn attention from security specialists, who warn that healthcare's AI journey may be proceeding without a sufficiently stable security foundation. The Cybernews review of S&P 500 organisations found 149 possible security weaknesses among 44 healthcare and pharmaceutical companies using AI. The research listed healthcare as one of the top three sectors most exposed to AI-related vulnerabilities.
Healthcare's distinct challenge is that while other industries may lose revenue or suffer brand damage after an AI failure, in healthcare each vulnerability could place human well-being directly at risk. The Cybernews data pointed to specific threat categories: 28 cases of insecure AI outputs, 24 instances of data leaks, and 19 direct risks to patient safety stemming from errors capable of propagating through entire hospital systems.
Intellectual property is also at risk. Deals that involve AI-driven drug discovery can be worth up to USD 2.9 billion, with development cycles lasting more than a decade. An unsecured AI application exposing proprietary research could wipe out not only years of scientific progress but also billions in potential earnings.
Girėnas comments on the nature of potential threats:
"The biggest AI threat in healthcare isn't a dramatic cyberattack, but rather the quiet, hidden failures that can grow rapidly. These risks include data poisoning, where a tampered dataset subtly disrupts thousands of future diagnoses, and untested algorithms delivering bad recommendations across a whole hospital network."
He raises concerns about governance and accountability:
"Leaders must ask themselves who is responsible when AI is wrong rather than what happens if AI is wrong. Currently, there's a serious gap in accountability for algorithms, and without a system to monitor and manage how these tools are used, organisations are putting their patients and valuable information at serious risk."
Mitigating exposure
To balance rapid AI innovation with the need to minimise risk, healthcare and pharmaceutical companies are advised to create dedicated AI governance layers. Such governance should offer more than just access controls: it requires comprehensive oversight to ensure the safe and compliant use of AI technologies.
Girėnas identifies three foundational strategies to increase safety:
Firstly, organisations should maintain a centrally managed whitelist of AI models that have been vetted for clinical precision and safety. This ensures clinicians use models suited to critical clinical tasks and prevents general-purpose chatbots from being misused in patient care.
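As a minimal illustration only, such gating can amount to checking each request against the approved list before it ever reaches a model. The model names and task categories in the sketch below are hypothetical placeholders, not drawn from any specific vendor or product.

```python
# Hypothetical sketch: gate AI requests against a centrally managed whitelist.
# Model names and task categories are illustrative placeholders only.
APPROVED_MODELS = {
    "radiology-triage-v2": {"imaging"},            # vetted for imaging support only
    "clinical-notes-summariser": {"documentation"},
    "general-chat": {"admin"},                     # never cleared for patient care
}

def is_permitted(model_name: str, task: str) -> bool:
    """Allow a model only if it is whitelisted and cleared for this task type."""
    return task in APPROVED_MODELS.get(model_name, set())

print(is_permitted("general-chat", "imaging"))         # False: chatbot blocked from diagnostics
print(is_permitted("radiology-triage-v2", "imaging"))  # True: approved for its vetted task
```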
Secondly, protecting data must become a built-in feature of all AI systems. Organisations should deploy technology that automatically strips sensitive information, such as patient identities or confidential research, from any input before it passes through AI systems. Treating data security as a standard practice ensures ongoing compliance and privacy protection.
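One minimal way to make redaction a default step is sketched below, using simple regular expressions for a few illustrative identifier formats; the patterns are assumptions for demonstration, and a production system would rely on far more robust de-identification tooling.

```python
import re

# Hypothetical sketch: strip obvious identifiers from text before it is sent
# to any AI service. The patterns are illustrative and nowhere near a
# complete de-identification pipeline.
REDACTION_PATTERNS = [
    (re.compile(r"\bMRN[-\s]?\d{6,}\b", re.IGNORECASE), "[REDACTED-MRN]"),  # medical record numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),               # US SSN format
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Apply every redaction pattern before the prompt leaves the organisation."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Patient MRN 1234567, contact jane.doe@example.org"))
# -> "Patient [REDACTED-MRN], contact [REDACTED-EMAIL]"
```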
Thirdly, every AI-driven action must be traceable. Setting up systems that log every AI query and response, linking each transaction to a specific user and timestamp, is vital. This auditing capability supports incident reviews, reinforces patient safety, and meets the requirements of new legal standards regarding accountability.
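A skeletal example of that audit trail is shown below, assuming a simple append-only log keyed by user and timestamp; the field names, log location, and model-client function are placeholders rather than any particular product's API.

```python
import json
import time
import uuid

# Hypothetical sketch: wrap every model call so the query, response, user,
# and timestamp are written to an append-only audit log.
AUDIT_LOG_PATH = "ai_audit_log.jsonl"  # illustrative location

def audited_call(user_id: str, model_name: str, prompt: str, call_model) -> str:
    """Invoke the model and record the full transaction for later review."""
    response = call_model(model_name, prompt)  # call_model stands in for the real client
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user_id": user_id,
        "model": model_name,
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return response
```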