Skip to content

A Different Kind of Attack Surface

AI systems introduce vulnerabilities that traditional security testing was never designed to find. Here's what makes AI pen testing fundamentally different.

Prompt Injection (OWASP LLM01)

The most prevalent and underestimated vulnerability in AI systems. Malicious instructions embedded in user inputs, documents, or data sources can override your AI's behaviour — causing it to leak data, bypass controls, or take unauthorised actions. In healthcare, these inputs can come from patient records, clinical documents, or external APIs. We test every input pathway.

Adversarial Inputs & Model Manipulation (MITRE ATLAS AML.T0043)

Carefully crafted inputs can cause AI models to produce incorrect outputs — misclassifying medical images, generating dangerous recommendations, or failing to detect critical conditions. For SaMD and clinical AI, this is a patient safety risk. We test model robustness against adversarial examples relevant to your specific use case.

Data Exfiltration via AI (OWASP LLM06)

AI agents with access to patient records, clinical databases, or sensitive operational data can be manipulated into exfiltrating that data — through carefully crafted prompts that cause the agent to include sensitive information in outputs. We test whether your AI can be used as an exfiltration vector, and whether your output filtering catches it.

Supply Chain & Third-Party Risk (OWASP LLM03)

Your AI system depends on LLM providers, embedding models, and third-party tools. We assess the security posture of your AI supply chain — including whether your LLM provider, tracing tools (LangSmith, Portkey), and external connectors introduce risks. MITRE ATLAS documents supply chain compromise (AML.T0010) as a primary attack vector.

CONTENTS

OWASP LLM TOP 10
MITRE ATLAS
NCSC GUIDELINES
RED TEAMING

OWASP LLM Top 10 Testing

We test against all 10 OWASP LLM vulnerabilities with healthcare-specific scenarios. LLM01 (prompt injection), LLM02 (insecure output handling), LLM06 (sensitive information disclosure), and LLM08 (excessive agency) are the highest priority for clinical AI deployments. Every finding is mapped to its OWASP LLM reference for clear, auditable reporting.

MITRE ATLAS Adversarial ML

MITRE ATLAS is the adversarial threat landscape for AI systems — the AI equivalent of the MITRE ATT&CK framework. We use ATLAS technique IDs to structure our testing, ensuring comprehensive coverage of adversarial ML attack patterns including model evasion, data poisoning, model extraction, and supply chain attacks specific to your AI architecture.

NCSC AI Security Principles

The NCSC's guidelines for secure AI system development (co-signed by CISA, NSA, and 16 national cybersecurity agencies) provide a government-backed framework for AI security assessment. We test against all four NCSC principles: secure design, secure development, secure deployment, and secure operation and maintenance — with NHS-specific context throughout.

AI Red Teaming

Beyond structured testing, our AI red team adopts an attacker's mindset — attempting to find novel attack paths specific to your deployment. This includes creative prompt injection scenarios, chained attacks across multiple AI components, and healthcare-specific threat scenarios (malicious patient records, compromised clinical data sources). Findings not covered by existing frameworks are documented as novel vulnerabilities.

Why Choose Our Approach?

AI-SPECIFIC TESTING

We test LLM vulnerabilities that traditional pen testers don't cover — prompt injection, adversarial inputs, model manipulation, and AI supply chain attacks.

OWASP LLM TOP 10

Every finding mapped to OWASP LLM Top 10 and MITRE ATLAS technique IDs. Clear, consistent reporting that your security team and auditors can use.

HEALTHCARE CONTEXT

We test against healthcare-specific threat scenarios — malicious patient records, compromised clinical data sources, and NHS-specific attack patterns.

RETEST INCLUDED

Once you've remediated findings, we retest and provide written confirmation. The evidence your MHRA technical file or DTAC submission needs.

Frequently Asked Questions

Do I need AI pen testing if I already do annual pen tests? minus-icon

Yes. Traditional CREST penetration testing covers your network, applications, and infrastructure — it doesn't test LLM-specific vulnerabilities. Prompt injection, adversarial inputs, and AI supply chain attacks require specialist testing that most pen test firms are not equipped to perform. For healthcare AI, both are required.

Who regulates AI security testing in healthcare? plus-icon
What does a Periculo AI pen test report look like? plus-icon
Can Periculo test our AI before we go live? plus-icon

Latest Insights

Project Glasswing and Claude Mythos: What AI-Powered Vulnerability Scanning Means for the NHS

Project Glasswing and Claude Mythos: What AI-...

Anthropic has just announced Project Glasswing, and if you work in cybersecurity, healthcare IT, or digital health, this...

NHS Issues Critical Fortinet Cyber Alert - Hackers Can Take Control of Networks

NHS Issues Critical Fortinet Cyber Alert - Ha...

NHS England Issues High-Severity Alert as Zero-Day Exploitation Confirmed NHS England has issued a high-severity cyber a...

The Hidden Threat — Securing the Aerospace Supply Chain Against SPARTA IA-0001

The Hidden Threat — Securing the Aerospace Su...

A spacecraft launched with a compromised component cannot be recalled. The aerospace supply chain spans hundreds of orga...

Zero Trust Architecture for Space Systems: From Concept to Mission Reality

Zero Trust Architecture for Space Systems: Fr...

Zero Trust is not a product; it is a security philosophy: never trust, always verify. In traditional IT, Zero Trust repl...

Anatomy of a Satellite Hack — Deconstructing the Viasat Incident Through SPARTA

Anatomy of a Satellite Hack — Deconstructing ...

On 24 February 2022, at the exact moment Russian forces crossed into Ukraine, a cyberattack took down tens of thousands ...

Claude Code Source Code Leak

Claude Code Source Code Leak

Claude Code Source Code Leak Was Not a Targeted Cyberattack On the 31 March 2026, Anthropic, maker of the Claude AI, acc...

From Ground to Orbit: The Threat of Rogue Ground Stations and RF Attacks

From Ground to Orbit: The Threat of Rogue Gro...

Every spacecraft communicates with the ground via radio frequency links, TT&C (Telemetry, Tracking, and Command) upl...

Why Space is the Ultimate Cyber-Physical Attack Surface

Why Space is the Ultimate Cyber-Physical Atta...

The Space ISAC reported a 118% surge in space-related cyber incidents in 2025. Space is no longer a benign environment; ...