//GEN AI GOVERNANCE
Your Staff Are Using AI. You Probably Don't Know How.
Clinical staff using ChatGPT to draft letters. Analysts using Copilot to summarise patient data. Consultants using AI to generate reports. This is happening across healthcare organisations every day — often without IT oversight, without policy, and without any awareness of what data is leaving the organisation.
Gen AI governance isn't about banning AI tools. It's about ensuring that when your people use them, they do so safely, compliantly, and in ways that don't create patient safety or regulatory risk.
What Happens Without Governance
The risks of ungoverned generative AI in healthcare are specific, real, and increasingly visible to regulators.
Patient Data Enters Third-Party AI Systems
When clinical staff paste patient information into ChatGPT, Copilot, or other AI tools, that data may be used for model training, stored on third-party infrastructure, or remain accessible to the provider. Under GDPR Article 28, this requires a Data Processing Agreement. Under DSPT, it requires documented controls. Most organisations have neither.
Hallucination in Clinical Contexts
Generative AI hallucinates — it produces confident, plausible, incorrect outputs. In administrative contexts this is an inconvenience. In clinical contexts, it's a patient safety risk. Without governance, there's no systematic way to ensure AI-generated clinical content is reviewed before use. DCB0129 requires this risk to be formally assessed.
EU AI Act GPAI Obligations (Article 53)
The EU AI Act's General Purpose AI (GPAI) provisions (Article 53) place obligations on the providers of GPAI models, and organisations deploying those models in high-risk contexts carry deployer obligations of their own under the Act. Healthcare uses listed in Annex III are explicitly high-risk. Organisations using GPT-4, Claude, or Gemini in clinical workflows may already be in scope, and the August 2026 compliance deadline for high-risk systems is approaching.
Shadow AI and Ungoverned Adoption
Staff find ways to use tools that make their work easier. Without a governance framework, AI adoption goes underground — ungoverned, unmonitored, and invisible to your security and compliance teams. Shadow AI creates data governance gaps that are extremely difficult to remediate after the fact.
AI Acceptable Use Policy
A meaningful AI acceptable use policy goes beyond "don't put patient data in ChatGPT." It defines which tools are approved for which use cases, what data classifications can be used with which tools, who can approve exceptions, and how incidents are reported. We help organisations build policies that are practical enough for staff to actually follow — and specific enough to provide real protection.
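As a rough illustration, a policy like this can be captured in machine-readable form so it can later be enforced rather than just read. The Python sketch below shows the idea; the tool names, use cases, and data classifications are hypothetical placeholders, not a prescribed schema.

# Illustrative sketch only: tool names, use cases, and data classes are hypothetical.
ACCEPTABLE_USE_POLICY = {
    "copilot-enterprise": {
        "approved_use_cases": ["drafting", "summarisation"],
        "allowed_data": ["public", "internal"],  # never patient-identifiable data
        "exception_approver": "information-governance-lead",
    },
    "chatgpt-free": {
        "approved_use_cases": [],  # not approved for any use case
        "allowed_data": [],
        "exception_approver": None,
    },
}

def is_permitted(tool: str, use_case: str, data_class: str) -> bool:
    """Allow only when the tool, use case, and data classification all match policy."""
    entry = ACCEPTABLE_USE_POLICY.get(tool)
    if entry is None:
        return False  # unknown tools are denied by default
    return use_case in entry["approved_use_cases"] and data_class in entry["allowed_data"]

Denying unknown tools by default is the posture that keeps shadow AI visible: an unapproved tool fails loudly instead of slipping through.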
AI Tool Assessment & Approval
Before any AI tool is used with sensitive data, it should be assessed: What data does it process? Where is it stored? Is a DPA in place? What are the terms of service? Does it train on your data? We build structured assessment processes so new AI tools go through the right checks before reaching clinical staff — not after a data incident.
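For illustration, that assessment can be recorded as a structured gate rather than a free-text form. The fields and pass criteria below are hypothetical examples of the checks described above, not a fixed template.

from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    tool: str
    data_processed: str            # e.g. "no patient data", "pseudonymised"
    storage_region: str            # e.g. "UK", "EU", "US"
    dpa_in_place: bool             # GDPR Article 28 data processing agreement signed
    trains_on_customer_data: bool  # does the provider train models on your inputs?
    terms_reviewed: bool

    def approved(self) -> bool:
        """Deny by default: every gate must pass before the tool reaches clinical staff."""
        return (
            self.dpa_in_place
            and not self.trains_on_customer_data
            and self.terms_reviewed
            and self.storage_region in {"UK", "EU"}
        )

Under these example criteria, a tool hosted outside the UK or EU, or one that trains on customer data, fails the gate before any clinician touches it.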
Runtime Policy Enforcement
Policy documents don't stop data from leaving the organisation. Runtime controls do. We implement technical governance layers — including the open-source Raigo standard — that enforce your AI policies at the point of use. Every AI interaction is evaluated against your rules before it executes. Violations are logged, blocked, or flagged for human review.
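The pattern is simple to sketch, even though production implementations carry far more nuance. The Python below is a minimal illustration of point-of-use enforcement under assumed rules; the function names and hard-coded checks are hypothetical and do not represent Raigo's actual policy format or API.

import logging

logger = logging.getLogger("ai-governance")

BLOCKED_TERMS = ("nhs number", "date of birth")  # illustrative patient-data markers

def evaluate_policy(prompt: str, tool: str) -> str:
    """Toy policy engine: a real deployment would load machine-readable rules
    (for example a Raigo policy file) instead of hard-coding checks."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "block"
    if tool != "copilot-enterprise":  # unapproved tools go to human review
        return "review"
    return "allow"

def govern_request(prompt: str, tool: str, send_to_model):
    """Evaluate every AI interaction against policy before it executes."""
    decision = evaluate_policy(prompt, tool)
    logger.info("tool=%s decision=%s", tool, decision)  # every interaction is logged
    if decision == "block":
        raise PermissionError(f"Policy violation: request to {tool} blocked")
    if decision == "review":
        return {"status": "queued_for_human_review"}
    return send_to_model(prompt)  # the call only executes once policy allows it

The important property is that the model call sits behind the policy check, so a violation is stopped before data leaves the organisation, not discovered in an audit afterwards.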
Staff Awareness & Training
The most sophisticated technical controls fail if staff don't understand the risks. We design healthcare-specific AI awareness programmes that explain the real risks in terms clinical and operational staff understand — not abstract compliance language. Training covers: what not to put in AI tools, how to spot hallucinated content, and how to report concerns.
Why Choose Our Approach?
PRACTICAL GOVERNANCE
We build governance that staff actually follow — not a policy document that sits in a drawer. Proportionate, clear, and operationally realistic.
RUNTIME CONTROLS
Technical governance enforced at the point of use via the open-source Raigo standard. Policies that work even when staff don't remember them.
REGULATORY MAPPED
Everything maps to DSPT, DTAC, GDPR, and EU AI Act obligations. Evidence your compliance team and commissioners can rely on.
OPEN SOURCE STANDARD
Built on Raigo — our open-source AI governance standard. Transparent, auditable, and freely available. No vendor lock-in.
Frequently Asked Questions
Doesn't our existing data protection policy already cover AI?
Almost certainly not in sufficient detail. Most existing data protection policies were written before generative AI existed and don't address the specific risks — third-party model training, hallucination, GPAI obligations, or the question of whether an AI provider is acting as a data processor or a controller. A gap analysis is the right starting point.
Do we have to tell patients when AI is being used?
Yes, in many cases. EU AI Act Article 50 requires transparency when AI systems interact with people. GDPR Articles 13 and 14 require disclosure of automated processing. MHRA guidance on AI as a Medical Device includes labelling requirements. The specific obligations depend on how and where AI is used — we can help you map them for your specific deployment.
What is Raigo?
Raigo is an open-source AI governance standard developed by Periculo. It provides a machine-readable policy format that can be enforced across AI tools in real time — blocking or flagging interactions that violate your policies. It's not a product we sell; it's an open standard you can adopt and adapt. We use it as the technical foundation for our governance implementations.
How do we get started?
A 30-minute scoping call is the right starting point. We'll ask about your current AI tool landscape, your existing policies, your regulatory context, and your biggest concerns. From there we can recommend whether you need a gap analysis, a full governance framework implementation, or just specific support in one area. There's no obligation and no pitch — just a conversation about your situation.