NHS AI Readiness and Cybersecurity Foundations: Why Security Matters Before Innovation

The NHS is racing to adopt artificial intelligence (AI). From faster diagnostics to smarter admin systems, AI promises to transform how care is delivered. But there's a problem: many NHS organisations are trying to build AI capabilities on digital foundations that aren't secure enough to support them.

The government wants the NHS to become "the most AI-enabled care system in the world" and has committed £10 billion to make it happen. But this ambition runs into a difficult reality. The NHS is rolling out AI tools while still using safety standards written in 2013—long before anyone imagined the AI systems we have today.

In healthcare, cybersecurity isn't just an IT problem. It's a patient safety issue.

What Happens When Security Fails

The NHS has already seen what poor cybersecurity costs. In 2017, the WannaCry ransomware attack hit over 80 NHS trusts, cancelled around 19,000 appointments, and cost approximately £92 million. Five hospitals had to turn away emergency patients. The NHS wasn't even the target—it was just running old, unpatched software that made it vulnerable.

More recently, in June 2024, a ransomware attack on Synnovis (a pathology services provider for London hospitals) caused major disruption. Over 10,000 outpatient appointments were cancelled. More than 1,700 procedures were postponed. Patient names and NHS numbers were leaked online. It took four months for services to return to normal.

Most seriously, delays caused by the attack have now been linked to patient deaths.

These aren't one-off events. Between 2019 and mid-2024, there were 215 ransomware attacks on UK healthcare organisations. In late 2024, four Merseyside hospitals were also targeted. The threat isn't going away.

Why AI Makes Security Even More Important

Every new AI tool creates new entry points for attackers. But AI also brings risks that traditional cybersecurity wasn't designed to handle.

Data Poisoning: Attackers can secretly corrupt the data used to train AI systems. This makes the AI learn the wrong things—potentially recommending incorrect treatments or missing serious conditions. Research shows that tampering with just a tiny fraction of training data (0.001%) can reduce an AI's accuracy by up to 30%.
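To make the mechanism concrete, here is a minimal sketch of label-flipping poisoning using synthetic data and scikit-learn. Everything in it is an illustrative assumption, and the flip fractions are far larger than the 0.001% in the research above, purely so the effect shows up on a small toy dataset.

```python
# Illustrative only: label-flipping poisoning on synthetic data.
# Dataset, model, and flip fractions are assumptions for this sketch,
# not a reproduction of the study cited above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Flip labels on a fraction of the training set, retrain, and
    return accuracy on clean test data."""
    rng = np.random.default_rng(0)
    y_bad = y_tr.copy()
    idx = rng.choice(len(y_bad), size=int(flip_fraction * len(y_bad)), replace=False)
    y_bad[idx] = 1 - y_bad[idx]  # the corrupted labels look like any others
    return LogisticRegression(max_iter=1000).fit(X_tr, y_bad).score(X_te, y_te)

for frac in (0.0, 0.05, 0.20):
    print(f"{frac:.0%} of labels flipped -> accuracy {accuracy_after_poisoning(frac):.3f}")
```

Running it shows accuracy falling as the poisoned fraction grows; a real attacker aims for the same degradation while keeping the corruption small enough to escape review.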

Model Manipulation: Unlike traditional software that always behaves the same way, AI systems can be subtly influenced to change their outputs. Attackers can tamper with models to alter predictions or extract sensitive information.
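A minimal sketch of the principle, assuming a simple linear classifier trained on synthetic data: because such a model's decision is a weighted sum of its inputs, anyone who knows the weights can nudge each feature slightly in the worst direction and flip the output. Attacks on real clinical models are more sophisticated, but the underlying weakness is the same.

```python
# Illustrative only: a minimal input perturbation against a linear
# classifier. Data and model are synthetic assumptions for this sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[:1]                              # one input the model classifies
w = model.coef_[0]                     # weights an attacker has learned
logit = model.decision_function(x)[0]

# Step every feature against the model's score, scaled just enough to
# cross the decision boundary.
step = 1.1 * abs(logit) / np.abs(w).sum()
x_adv = x - np.sign(logit) * step * np.sign(w)

print("original prediction: ", model.predict(x)[0])
print("perturbed prediction:", model.predict(x_adv)[0])
print("change per feature:  ", round(step, 4))
```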

Performance Drift: AI systems can become less accurate over time as real-world data differs from their training data. Without proper monitoring, a tool that worked well during testing might start making mistakes months later.
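Drift does not have to be discovered after the fact; it can be watched for with routine statistical checks. The sketch below shows one illustrative approach, assuming scipy is available: compare a live input feature's distribution against its training-time reference using a two-sample Kolmogorov-Smirnov test. The data and threshold are invented for the example.

```python
# Illustrative only: flag drift in one input feature by comparing its
# live distribution against the training-time reference. Data and the
# 0.01 threshold are invented for this sketch.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, size=5000)  # feature values at training time
live = rng.normal(0.4, 1.0, size=1000)       # live data has quietly shifted

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.1e}); review the model")
else:
    print("No significant drift in this feature")
```

In practice such checks would run on a schedule across many features, alongside monitoring of the model's own outputs.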

These aren't just theoretical risks. NHS England has already had to warn trusts about non-compliant AI transcription tools, and officials have ordered organisations to stop using AI systems that don't meet standards.

The Legacy IT Problem

The NHS can't deploy AI securely if its basic IT infrastructure isn't up to the job. And in many places, it isn't.

Estimates suggest that between 10% and 50% of NHS technology systems need modernising. Many trusts still use old PCs running outdated Windows versions. In some hospitals, computers take over 15 minutes just to start up. Wi-Fi coverage is patchy. Previous attempts at digital transformation have repeatedly failed because legacy systems simply couldn't cope.

This creates a vicious circle. Old systems can't be properly secured. They can't support AI. And they leave patient data vulnerable to attack. The WannaCry attack succeeded precisely because so many NHS computers were running unsupported software with known security flaws.

You can't build advanced AI capabilities on foundations that are already crumbling.

Building Secure Foundations for AI

For AI to work safely in the NHS, cybersecurity has to come first. Several key building blocks are being put in place.

The Cyber Assessment Framework (CAF): This is now the main standard for measuring NHS cybersecurity. Developed by the National Cyber Security Centre, it helps organisations identify risks and improve their defences. In September 2024, NHS England aligned its Data Security and Protection Toolkit with the CAF, pushing organisations towards continuous improvement rather than tick-box compliance.

Supply Chain Security: Many AI tools come from third-party suppliers, and attackers often target these companies to reach the NHS. In May 2025, NHS England asked supplier CEOs to sign a cybersecurity charter committing to 24/7 monitoring, multi-factor authentication, and open collaboration during incidents. New legislation will tighten requirements further.

Data Governance: AI needs high-quality, well-organised data—but that data must be stored securely and handled ethically. Approaches like the Federated Data Platform allow organisations to share insights without centralising sensitive information in one vulnerable location.
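The federated pattern itself is simple to illustrate. The sketch below is not Federated Data Platform code; it is a toy with made-up figures, showing the underlying idea that each site computes a summary locally and shares only that summary, so raw patient records never leave the site.

```python
# Illustrative only: the federated pattern with made-up waiting times.
# Raw records stay at each site; only summaries are pooled.
from statistics import fmean

site_a_waits = [12, 30, 7, 45]   # days; these records never leave site A
site_b_waits = [22, 9, 61]       # days; these records never leave site B

def local_summary(records):
    """All a site shares: how many records it has, and their mean."""
    return len(records), fmean(records)

summaries = [local_summary(site_a_waits), local_summary(site_b_waits)]
total = sum(n for n, _ in summaries)
pooled_mean = sum(n * mean for n, mean in summaries) / total
print(f"Pooled mean wait across {total} records: {pooled_mean:.1f} days")
```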

Human Oversight: AI should support clinical decisions, not replace human judgment. Staff need to review AI outputs, especially for diagnoses and treatment recommendations. This catches errors and maintains trust.
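In software terms, oversight can be enforced by design rather than left to habit. The sketch below is a hypothetical routing gate with invented thresholds and categories: high-risk or low-confidence AI suggestions are queued for clinician sign-off instead of being applied automatically.

```python
# Illustrative only: a hypothetical gate for routing AI outputs.
# Thresholds, categories, and queue names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class AISuggestion:
    patient_id: str
    finding: str
    confidence: float
    high_risk: bool  # e.g. a diagnosis or treatment recommendation

def route(s: AISuggestion) -> str:
    """Decide how an AI output enters the clinical workflow."""
    if s.high_risk or s.confidence < 0.90:
        return "queue_for_clinician_review"  # human sign-off required
    return "display_as_advisory"             # still labelled as AI-generated

print(route(AISuggestion("NHS-123", "possible fracture", 0.97, high_risk=True)))
```

The design choice matters more than the code: the default path is human review, and automation has to earn its way past it.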

What Needs to Happen

The NHS can realise AI's potential, but only by getting the basics right first.

Fix the infrastructure. The £10 billion committed to digital transformation must prioritise replacing systems that can't be secured. AI won't work reliably on outdated hardware and unsupported software.

Update safety standards. Current standards were written for traditional software that behaves predictably. AI is different: new standards must account for systems that learn, adapt, and can be manipulated.

Train staff. People across the NHS need to understand AI's limitations and how to spot when something isn't working properly. A culture of openness—where concerns can be raised without blame—is essential.

Build security in from the start. Every AI project should include cybersecurity assessment from day one, not as an afterthought once problems emerge.

The Bottom Line

For the NHS, secure AI is the only sustainable AI.

The promise is real: better diagnostics, more efficient services, improved patient outcomes. But that promise depends on getting the foundations right. The Synnovis attack showed that cybersecurity failures can cost lives. As AI becomes more central to clinical decisions, the stakes will only get higher.

Cybersecurity isn't a barrier to innovation—it's what makes innovation possible. By investing in secure foundations now, the NHS can build a digital future that improves patient care rather than putting it at risk.

The question isn't whether to pursue AI. It's whether we're willing to do the groundwork to make it safe.