//AI SECURITY & GOVERNANCE
Every Healthcare Organisation Is Adopting AI. The Question Is Whether They Do It Safely.
AI is not optional. Clinical decision support, administrative automation, patient-facing tools — every healthcare organisation that wants to remain competitive is deploying AI. The question isn't whether to adopt it. It's whether you do it in a way that's secure, governed, and trusted by patients, commissioners, and regulators.
Periculo helps healthcare organisations, NHS suppliers, and digital health companies adopt AI the right way — with the assurance, governance, and security evidence they need to do it confidently.
ISO 27001 Annex A Controls
Comprehensive coverage of all 114 security controls across 14 categories
Third-party assessment by specialists. Not a self-assessment checklist — an expert evaluation that commissioners, investors, and regulators can trust.
Our framework incorporates DCB0129, DCB0160, DTAC, MHRA AIaMD, and NHS procurement requirements. Built for healthcare — not adapted from generic frameworks.
Our assessments align with ISO 42001 AI Management Systems standard — providing a structured path toward certification for organisations that need it.
The Managed Assurance Programme keeps your certification current as your AI evolves, regulations change, and new threats emerge. From £12,000 per year.
The AI Regulatory Landscape in Healthcare
Three converging regulatory frameworks are reshaping AI deployment across healthcare and life sciences. Understanding how they interact is critical for any organisation developing or deploying AI in clinical environments.
EU AI Act — August 2027 Deadline
High-risk AI systems used in clinical decision-making must achieve full EU AI Act compliance by August 2027. When combined with EU MDR obligations for AI-enabled medical devices, this creates a dual compliance challenge that requires coordinated technical documentation and conformity assessment. MDCG 2025-6 clarifies the interplay but satisfying EU MDR does not equal EU AI Act compliance.
FDA AI/ML SaMD — Total Product Lifecycle
The FDA's framework for AI and ML-based Software as a Medical Device establishes cybersecurity and governance requirements throughout the full product lifecycle. Premarket submissions require a cybersecurity management plan, threat model, and security architecture documentation. Post-market requirements include ongoing surveillance and change management for adaptive AI systems.
MHRA & NHS — UK Market Requirements
The MHRA is developing a dedicated AI as a Medical Device framework for the UK market. NHS procurement requirements now explicitly gate on AI governance documentation — 'governed AI' is increasingly a deciding factor in NHS supplier selection. Digital health companies selling into UK health systems need governance frameworks that satisfy both MHRA and NHS procurement expectations.
GxP & ICH — Pharma AI Governance
Pharmaceutical organisations deploying AI in drug discovery, clinical trials, and pharmacovigilance must satisfy GxP validation requirements, ICH E6(R3) obligations, and the EU AI Act's high-risk classification for clinical trial AI. These frameworks were not designed for adaptive AI systems and require specialist interpretation to implement proportionately.
CONTENTS
EU AI Act + EU MDR Compliance
We support medtech companies through gap assessment against both EU MDR and EU AI Act simultaneously, technical file documentation, conformity assessment preparation, and notified body engagement. We map the MDCG 2025-6 interplay specifically to your device and AI system.
MDR compliance does not equal EU AI Act compliance. The evidence requirements are different and must be addressed in parallel — starting now if you have a 2027 deadline in scope.
We produce the AI-specific technical documentation required under Annex IV of the EU AI Act and map it against your existing MDR technical file to identify gaps.
FDA Cybersecurity for AI/ML-Based SaMD
We support manufacturers preparing 510(k), De Novo, and PMA submissions with AI-specific cybersecurity documentation: cybersecurity management plan, threat model, SBOM, and security architecture documentation aligned to FDA's 2025 final guidance.
For adaptive AI/ML systems, we design change management and post-market monitoring programmes that satisfy FDA's Total Product Lifecycle approach — addressing the unique challenge of AI systems that update after deployment.
We provide cybersecurity documentation that FDA reviewers expect, reducing premarket submission cycles and post-market deficiency letters.
AI Governance for Pharma and Life Sciences
We build AI governance frameworks for pharma organisations that align to existing quality management systems — covering AI inventory, risk classification, validation documentation for GxP environments, and audit trail requirements.
We interpret ICH E6(R3) and EU AI Act requirements for clinical trial AI specifically — helping organisations understand what compliance looks like for AI tools used in patient selection, trial management, and pharmacovigilance.
Our frameworks are proportionate to risk and designed to integrate with existing QMS processes rather than requiring re-architecture.
Enterprise AI Governance for Health Systems
We provide enterprise AI assurance programmes for health systems deploying AI at scale — giving boards, commissioners, and risk committees the independent assurance they require. Continuous oversight, regular framework reviews, and incident response readiness.
We conduct independent AI security due diligence for health systems procuring AI from third-party vendors — assessing security posture, regulatory readiness, and governance maturity before contracts are signed.
Our assurance documentation is designed to satisfy NHS England AI Framework requirements, CQC inspection expectations, and the evidence demands of NHS commissioners.
AI Security Assessment & CREST Penetration Testing
We provide CREST-accredited AI security assessments covering threat vectors that standard penetration testing misses: prompt injection, adversarial inputs, model supply chain attacks, and AI agent attack surfaces in clinical environments.
Our assessments produce cybersecurity documentation in the format required for FDA premarket submissions, MHRA technical files, and notified body review under EU MDR — not just a penetration test report.
For AI agents operating in clinical or operational environments, we assess audit trail completeness, access controls, human oversight implementation, and kill switch design.
Frequently Asked Questions
High-risk AI systems already on the market must achieve full EU AI Act compliance by August 2, 2027. New high-risk AI systems must comply from August 2026. Conformity assessment under both EU MDR and EU AI Act can take 18-24 months — organisations that have not started are already behind schedule.
No. EU MDR and the EU AI Act are separate regulatory frameworks with different evidence requirements. MDCG 2025-6 clarifies the interplay for AI-enabled medical devices, but satisfying EU MDR does not automatically satisfy EU AI Act obligations. A dedicated gap assessment against both frameworks is required.
FDA requires cybersecurity documentation throughout the total product lifecycle. Premarket submissions require a cybersecurity management plan, threat model, SBOM, security architecture documentation, and security test evidence. Post-market requirements include ongoing monitoring, a change management protocol for adaptive AI models, and a vulnerability management process.
AI systems used in clinical trial management, patient selection, or clinical decision-making are generally classified as high-risk under the EU AI Act. They are subject to conformity assessment, technical documentation, transparency requirements, and human oversight obligations — in addition to existing GxP and ICH requirements.
AI systems introduce threat vectors that standard penetration testing does not address: prompt injection, adversarial inputs, model supply chain attacks, and autonomous AI agent behaviour. In healthcare, where AI may influence clinical decisions or access patient data, these risks require specialist assessment frameworks and regulatory documentation aligned to FDA, MHRA, and EU MDR requirements.
Periculo specialises in cybersecurity and AI governance for healthcare and life sciences. Our team has direct experience with FDA premarket submissions, EU MDR notified body processes, MHRA engagement, and AI security assessment in clinical environments. We understand both the technical risks and the regulatory context — most cybersecurity firms understand one but not both.
Latest Insights
ISO 42001: The AI Management System Standard ...
ISO 42001 is showing up in procurement requirements. Enterprise customers are asking for it. NHS supply chain assessment...
Threat Report 173
This week’s report highlights five developments with direct implications for digital health and defence organisations: -...
AI Security Threat Series: Model theft
Cloning a proprietary AI through its own front door Building a world-class AI model takes months of work, millions in co...
Weekly Round Up Issue 16
The regulatory direction of travel got louder this week. The NCSC pulled back the curtain on 18 months of coordinated wo...
UK Biobank Data Listed for Sale
The UK government has issued a formal statement through the National Data Guardian after reports emerged that data from ...
AI Security Threat Series: Membership inferen...
Proving your data was used to train an AI — without ever seeing it You do not need to extract someone's data from a mode...
Building Resilient AI Agents: Defending Again...
As AI agents become increasingly embedded within enterprise workflows, prompt injection attacks have emerged as a critic...
Threat Advisory: Weaponisation of Anthropic's...
Introduction: The Emergence of AI-Powered Cyber Threats In early 2026, a sophisticated cyber intrusion targeting the Mex...