AI SECURITY & GOVERNANCE
Navigate the AI Governance Landscape in Healthcare and Life Sciences
EU AI Act. FDA AI/ML SaMD. EU MDR. MHRA. The regulatory requirements for AI in healthcare are complex, overlapping, and moving fast. Periculo provides specialist AI security and governance consulting to help medtech companies, pharma organisations, and health systems deploy AI responsibly — and pass the scrutiny that comes with it.
The AI Regulatory Landscape in Healthcare
Three converging regulatory frameworks are reshaping AI deployment across healthcare and life sciences. Understanding how they interact is critical for any organisation developing or deploying AI in clinical environments.
EU AI Act — August 2027 Deadline
High-risk AI systems used in clinical decision-making must achieve full EU AI Act compliance by August 2027. When combined with EU MDR obligations for AI-enabled medical devices, this creates a dual compliance challenge that requires coordinated technical documentation and conformity assessment. MDCG 2025-6 clarifies the interplay, but satisfying EU MDR does not equal EU AI Act compliance.
FDA AI/ML SaMD — Total Product Lifecycle
The FDA's framework for AI and ML-based Software as a Medical Device establishes cybersecurity and governance requirements throughout the full product lifecycle. Premarket submissions require a cybersecurity management plan, threat model, and security architecture documentation. Post-market requirements include ongoing surveillance and change management for adaptive AI systems.
MHRA & NHS — UK Market Requirements
The MHRA is developing a dedicated AI as a Medical Device framework for the UK market. NHS procurement requirements now explicitly gate on AI governance documentation — 'governed AI' is increasingly a deciding factor in NHS supplier selection. Digital health companies selling into UK health systems need governance frameworks that satisfy both MHRA and NHS procurement expectations.
GxP & ICH — Pharma AI Governance
Pharmaceutical organisations deploying AI in drug discovery, clinical trials, and pharmacovigilance must satisfy GxP validation requirements, ICH E6(R3) obligations, and the EU AI Act's high-risk classification for clinical trial AI. These frameworks were not designed for adaptive AI systems and require specialist interpretation to implement proportionately.
EU AI Act + EU MDR Compliance
We support medtech companies through gap assessment against both EU MDR and EU AI Act simultaneously, technical file documentation, conformity assessment preparation, and notified body engagement. We map the MDCG 2025-6 interplay specifically to your device and AI system.
MDR compliance does not equal EU AI Act compliance. The evidence requirements are different and must be addressed in parallel — starting now if you have a 2027 deadline in scope.
We produce the AI-specific technical documentation required under Annex IV of the EU AI Act and map it against your existing MDR technical file to identify gaps.
FDA Cybersecurity for AI/ML-Based SaMD
We support manufacturers preparing 510(k), De Novo, and PMA submissions with AI-specific cybersecurity documentation: cybersecurity management plan, threat model, SBOM, and security architecture documentation aligned to FDA's 2025 final guidance.
For adaptive AI/ML systems, we design change management and post-market monitoring programmes that satisfy FDA's Total Product Lifecycle approach — addressing the unique challenge of AI systems that update after deployment.
We provide cybersecurity documentation that FDA reviewers expect, reducing premarket submission cycles and post-market deficiency letters.
AI Governance for Pharma and Life Sciences
We build AI governance frameworks for pharma organisations that align to existing quality management systems — covering AI inventory, risk classification, validation documentation for GxP environments, and audit trail requirements.
We interpret ICH E6(R3) and EU AI Act requirements for clinical trial AI specifically — helping organisations understand what compliance looks like for AI tools used in patient selection, trial management, and pharmacovigilance.
Our frameworks are proportionate to risk and designed to integrate with existing QMS processes rather than requiring re-architecture.
Enterprise AI Governance for Health Systems
We provide enterprise AI assurance programmes for health systems deploying AI at scale — giving boards, commissioners, and risk committees the independent assurance they require. Continuous oversight, regular framework reviews, and incident response readiness.
We conduct independent AI security due diligence for health systems procuring AI from third-party vendors — assessing security posture, regulatory readiness, and governance maturity before contracts are signed.
Our assurance documentation is designed to satisfy NHS England AI Framework requirements, CQC inspection expectations, and the evidence demands of NHS commissioners.
AI Security Assessment & CREST Penetration Testing
We provide CREST-accredited AI security assessments covering threat vectors that standard penetration testing misses: prompt injection, adversarial inputs, model supply chain attacks, and AI agent attack surfaces in clinical environments.
Our assessments produce cybersecurity documentation in the format required for FDA premarket submissions, MHRA technical files, and notified body review under EU MDR — not just a penetration test report.
For AI agents operating in clinical or operational environments, we assess audit trail completeness, access controls, human oversight implementation, and kill switch design.
Frequently Asked Questions
When is the EU AI Act compliance deadline?
High-risk AI systems already on the market must achieve full EU AI Act compliance by August 2, 2027. New high-risk AI systems must comply from August 2026. Conformity assessment under both EU MDR and EU AI Act can take 18-24 months — organisations that have not started are already behind schedule.
Does EU MDR compliance satisfy the EU AI Act?
No. EU MDR and the EU AI Act are separate regulatory frameworks with different evidence requirements. MDCG 2025-6 clarifies the interplay for AI-enabled medical devices, but satisfying EU MDR does not automatically satisfy EU AI Act obligations. A dedicated gap assessment against both frameworks is required.
What cybersecurity documentation does the FDA require for AI/ML SaMD?
FDA requires cybersecurity documentation throughout the total product lifecycle. Premarket submissions require a cybersecurity management plan, threat model, SBOM, security architecture documentation, and security test evidence. Post-market requirements include ongoing monitoring, a change management protocol for adaptive AI models, and a vulnerability management process.
How does the EU AI Act classify clinical trial AI?
AI systems used in clinical trial management, patient selection, or clinical decision-making are generally classified as high-risk under the EU AI Act. They are subject to conformity assessment, technical documentation, transparency requirements, and human oversight obligations — in addition to existing GxP and ICH requirements.
Why does AI need specialist security assessment?
AI systems introduce threat vectors that standard penetration testing does not address: prompt injection, adversarial inputs, model supply chain attacks, and autonomous AI agent behaviour. In healthcare, where AI may influence clinical decisions or access patient data, these risks require specialist assessment frameworks and regulatory documentation aligned to FDA, MHRA, and EU MDR requirements.
Why work with Periculo?
Periculo specialises in cybersecurity and AI governance for healthcare and life sciences. Our team has direct experience with FDA premarket submissions, EU MDR notified body processes, MHRA engagement, and AI security assessment in clinical environments. We understand both the technical risks and the regulatory context — most cybersecurity firms understand one but not both.