What Happens Without Governance

The risks of ungoverned generative AI in healthcare are specific, real, and increasingly visible to regulators.

Patient Data Enters Third-Party AI Systems

When clinical staff paste patient information into ChatGPT, Copilot, or other AI tools, that data may be used for model training, stored on third-party infrastructure, or made accessible to the tool vendor. Under GDPR Article 28, this requires a Data Processing Agreement. Under DSPT, it requires documented controls. Most organisations have neither.

Hallucination in Clinical Contexts

Generative AI hallucinates — it produces confident, plausible, incorrect outputs. In administrative contexts this is an inconvenience. In clinical contexts, it's a patient safety risk. Without governance, there's no systematic way to ensure AI-generated clinical content is reviewed before use. DCB0129 requires this risk to be formally assessed.

EU AI Act GPAI Obligations (Article 53)

The EU AI Act's General Purpose AI (GPAI) provisions (Article 53) impose obligations on organisations deploying GPAI models in high-risk contexts. Healthcare is explicitly high-risk under Annex III. Organisations using GPT-4, Claude, or Gemini in clinical workflows may already be in scope — and the August 2026 deadline is approaching.

Shadow AI and Ungoverned Adoption

Staff find ways to use tools that make their work easier. Without a governance framework, AI adoption goes underground — ungoverned, unmonitored, and invisible to your security and compliance teams. Shadow AI creates data governance gaps that are extremely difficult to remediate after the fact.

AI Acceptable Use Policy

A meaningful AI acceptable use policy goes beyond "don't put patient data in ChatGPT." It defines which tools are approved for which use cases, what data classifications can be used with which tools, who can approve exceptions, and how incidents are reported. We help organisations build policies that are practical enough for staff to actually follow — and specific enough to provide real protection.
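A policy that maps tools to use cases and data classifications can also be expressed as data, so approvals can be checked mechanically rather than looked up in a document. The sketch below is purely illustrative: the tool names, classification tiers, and `is_use_permitted` helper are assumptions for this example, not a real approved list.

```python
# Hypothetical sketch: an acceptable-use policy expressed as data rather than
# prose, so it can be checked programmatically. Tool names and classifications
# are illustrative, not a real approved list.

# Data classifications, ordered from least to most sensitive.
CLASSIFICATIONS = ["public", "internal", "confidential", "patient-identifiable"]

# The most sensitive classification each approved tool may handle.
APPROVED_TOOLS = {
    "copilot-enterprise": "confidential",  # DPA in place, no training on inputs
    "chatgpt-free": "public",              # no DPA; public data only
}

def is_use_permitted(tool: str, classification: str) -> bool:
    """Return True if the tool is approved for data at this classification."""
    if tool not in APPROVED_TOOLS:
        return False  # unapproved tools are denied by default
    ceiling = APPROVED_TOOLS[tool]
    return CLASSIFICATIONS.index(classification) <= CLASSIFICATIONS.index(ceiling)

print(is_use_permitted("chatgpt-free", "patient-identifiable"))  # False
print(is_use_permitted("copilot-enterprise", "internal"))        # True
```

Denying unlisted tools by default is the point: exceptions then have to go through the approval route the policy defines, rather than happening silently.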

AI Tool Assessment & Approval

Before any AI tool is used with sensitive data, it should be assessed: What data does it process? Where is it stored? Is a DPA in place? What are the terms of service? Does it train on your data? We build structured assessment processes so new AI tools go through the right checks before reaching clinical staff — not after a data incident.
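The checks above lend themselves to a structured record. The sketch below shows one possible shape for such an assessment; the field names and the `approval_blockers` helper are assumptions for illustration, and a real assessment would follow your organisation's DPIA and procurement templates.

```python
# Hypothetical sketch of a structured tool-assessment record capturing the
# checks named above. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class ToolAssessment:
    tool_name: str
    data_processed: str          # e.g. "free-text prompts, uploaded documents"
    storage_region: str          # e.g. "EU", "UK", "US"
    dpa_in_place: bool           # GDPR Article 28 processing agreement signed?
    trains_on_customer_data: bool
    terms_reviewed: bool

    def approval_blockers(self) -> list[str]:
        """List unresolved issues that should block approval."""
        blockers = []
        if not self.dpa_in_place:
            blockers.append("no Data Processing Agreement")
        if self.trains_on_customer_data:
            blockers.append("vendor trains models on customer data")
        if not self.terms_reviewed:
            blockers.append("terms of service not reviewed")
        return blockers

assessment = ToolAssessment(
    tool_name="example-ai-tool",
    data_processed="free-text prompts",
    storage_region="US",
    dpa_in_place=False,
    trains_on_customer_data=True,
    terms_reviewed=True,
)
print(assessment.approval_blockers())
# ['no Data Processing Agreement', 'vendor trains models on customer data']
```

An empty blocker list does not mean automatic approval; it means the assessment is ready for a human sign-off decision.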

Runtime Policy Enforcement

Policy documents don't stop data from leaving the organisation. Runtime controls do. We implement technical governance layers — including the open-source Raigo standard — that enforce your AI policies at the point of use. Every AI interaction is evaluated against your rules before it executes. Violations are logged, blocked, or flagged for human review.
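The pattern described above can be sketched in a few lines. To be clear, this is not the Raigo API, whose interfaces are not reproduced here; the rule patterns, decision values, and function names are all assumptions chosen to illustrate the evaluate-then-log-block-or-flag flow.

```python
# Hypothetical sketch of a runtime policy gate: evaluate each AI interaction
# against rules before it executes, then allow, block, or flag for human
# review, and log every decision. NOT the Raigo API; illustrative only.
import re

# Illustrative rule: a crude pattern suggesting an NHS number in the prompt.
NHS_NUMBER = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b")

def evaluate(prompt: str) -> str:
    """Return 'block', 'review', or 'allow' for an outbound AI prompt."""
    if NHS_NUMBER.search(prompt):
        return "block"              # likely patient identifier: hard stop
    if "patient" in prompt.lower():
        return "review"             # mentions a patient: human check first
    return "allow"

audit_log = []

def gated_call(prompt: str):
    """Apply the policy decision before any model call is made."""
    decision = evaluate(prompt)
    audit_log.append({"prompt": prompt, "decision": decision})  # every call logged
    if decision != "allow":
        return None  # blocked outright, or queued for human review
    return f"[sent to model] {prompt}"

print(evaluate("Summarise record for NHS number 943 476 5919"))  # block
print(evaluate("Draft a letter about a patient appointment"))    # review
print(evaluate("Summarise this meeting agenda"))                 # allow
```

Real rules would be far richer (named-entity detection, data-classification labels, per-tool policies), but the control point is the same: the check runs before the prompt leaves the organisation, not after.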

Staff Awareness & Training

The most sophisticated technical controls fail if staff don't understand the risks. We design healthcare-specific AI awareness programmes that explain the real risks in terms clinical and operational staff understand — not abstract compliance language. Training covers: what not to put in AI tools, how to spot hallucinated content, and how to report concerns.

Why Choose Our Approach?

PRACTICAL GOVERNANCE

We build governance that staff actually follow — not a policy document that sits in a drawer. Proportionate, clear, and operationally realistic.

RUNTIME CONTROLS

Technical governance enforced at the point of use via the open-source Raigo standard. Policies that work even when staff don't remember them.

REGULATORY MAPPED

Everything maps to DSPT, DTAC, GDPR, and EU AI Act obligations. Evidence your compliance team and commissioners can rely on.

OPEN SOURCE STANDARD

Built on Raigo — our open-source AI governance standard. Transparent, auditable, and freely available. No vendor lock-in.

Frequently Asked Questions

We already have a data protection policy. Does that cover AI?

Almost certainly not in sufficient detail. Most existing data protection policies were written before generative AI existed and don't address the specific risks — third-party model training, hallucination, GPAI obligations, or the difference between using an AI tool as a data processor versus a controller. A gap analysis is the right starting point.

Do we need to tell patients when AI is used in their care?
What is Raigo and how does it help with Gen AI governance?
Where do we start?

Latest Insights

Project Glasswing and Claude Mythos: What AI-Powered Vulnerability Scanning Means for the NHS

Anthropic has just announced Project Glasswing, and if you work in cybersecurity, healthcare IT, or digital health, this...

NHS Issues Critical Fortinet Cyber Alert - Hackers Can Take Control of Networks

NHS England Issues High-Severity Alert as Zero-Day Exploitation Confirmed NHS England has issued a high-severity cyber a...

The Hidden Threat — Securing the Aerospace Supply Chain Against SPARTA IA-0001

A spacecraft launched with a compromised component cannot be recalled. The aerospace supply chain spans hundreds of orga...

Zero Trust Architecture for Space Systems: From Concept to Mission Reality

Zero Trust is not a product; it is a security philosophy: never trust, always verify. In traditional IT, Zero Trust repl...

Anatomy of a Satellite Hack — Deconstructing the Viasat Incident Through SPARTA

On 24 February 2022, at the exact moment Russian forces crossed into Ukraine, a cyberattack took down tens of thousands ...

Claude Code Source Code Leak

Claude Code Source Code Leak Was Not a Targeted Cyberattack On 31 March 2026, Anthropic, maker of the Claude AI, acc...

From Ground to Orbit: The Threat of Rogue Ground Stations and RF Attacks

Every spacecraft communicates with the ground via radio frequency links, TT&C (Telemetry, Tracking, and Command) upl...

Why Space is the Ultimate Cyber-Physical Attack Surface

The Space ISAC reported a 118% surge in space-related cyber incidents in 2025. Space is no longer a benign environment; ...