I have spent the better part of a decade in the trenches of high-stakes compliance.
I have lived through the clinical safety audits of the NHS (DCB0129/0160), the rigid data sovereignty requirements of country-level healthcare systems, and the complex web of HIPAA in the US. If there is one thing I have learned across all these regulated environments, it is that people love to confuse the plumbing with the water.
Right now, we are seeing a massive surge in "AI Governance" solutions. Boards are panicking, CISOs are being asked for "AI posture reports," and compliance officers are trying to map the EU AI Act to their existing spreadsheets. In response, the market has provided some truly excellent infrastructure. Microsoft recently released their Agent Governance Toolkit—a masterpiece of engineering with a stateless policy engine, identity layers, and automated evidence collection.
But here is the blunt truth: Microsoft has given you the best plumbing in the world. They have not given you the water.
The infrastructure—the policy engines, the audit logs, the identity layers—is not governance. It is the enforcement mechanism for governance. Governance itself is the human accountability framework, the hard decisions about risk appetite, and the specific regulatory mappings that tell that infrastructure what to do.
| Feature | AI Governance Infrastructure | AI Governance |
|---|---|---|
| Primary Goal | Technical enforcement and visibility. | Risk ownership and legal compliance. |
| Ownership | Engineers, CISOs, and DevOps. | Board members, GRC, and Legal. |
| Output | Audit logs, blocked API calls, identity tokens. | Policy documents, risk registers, accountability chains. |
| Example | The Microsoft Agent Governance Toolkit intercepting a tool call. | A board-approved policy defining "acceptable clinical risk." |
| Failure Mode | The system crashes or has high latency. | A regulator finds the organization "negligent" despite the tools working perfectly. |
| Decision Maker | The Software Engineer writing the rule. | The Accountable Executive signing the risk. |
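To make the table concrete, here is a minimal Python sketch of the left-hand column: a policy engine intercepting an agent's tool call and checking it against an approved-domain list. Every name here is hypothetical; this shows the pattern, not the Microsoft toolkit's actual API.

```python
from urllib.parse import urlparse

# Hypothetical sketch of "infrastructure": a policy check wrapped
# around an agent's tool call. Illustrative names only.

APPROVED_DOMAINS = {"nhs.uk", "periculo.co.uk"}  # who approved this list? Governance.

def fetch_url(url: str) -> str:
    """The underlying tool the agent wants to call."""
    return f"fetched {url}"

def guarded_fetch(url: str) -> str:
    """Infrastructure: intercept the call and enforce the domain policy."""
    host = urlparse(url).hostname or ""
    if not any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS):
        raise PermissionError(f"Agent blocked: {host} is not an approved domain")
    return fetch_url(url)

guarded_fetch("https://www.nhs.uk/conditions")   # allowed
# guarded_fetch("https://evil.example.com")      # raises PermissionError
```

Notice what the code cannot tell you: who approved that domain list, and on what basis. That answer lives entirely in the right-hand column.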
When I was working on DCB0129 clinical safety standards for NHS digital health deployments, the "infrastructure" was the software itself. But the "governance" was the Clinical Risk Management Plan (CRMP). The CRMP didn't care about the code; it cared about the outcome. It asked: "If this software suggests the wrong dosage of insulin, what is the human process to catch that error before it reaches the patient?"
Infrastructure provides: The ability to tag logs with "HIPAA" or "GDPR" labels. A dashboard that shows "Compliance: 85%."
Governance requires: Knowing exactly which sections of the EU AI Act apply to your specific use case. Are you "High Risk" under Annex III? Are you a "Provider" or a "Deployer"?
Infrastructure provides: Default policy templates, such as "Block all PII" and "Restrict to approved domains." These are often set by the DevOps team during deployment.
Governance requires: A human being—someone with a title like "Chief Medical Officer," "Head of Risk," or "General Counsel"—signing off on those thresholds. If an AI agent is making decisions about resource allocation in a hospital, an engineer shouldn't be the one deciding the "safety threshold."
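One practical way to encode that principle is to make the engine refuse any policy that lacks a named, signed-off owner. A minimal sketch, assuming a simple in-house policy format; nothing here is a real toolkit schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Policy:
    rule: str                   # what the engine enforces, e.g. "block_pii"
    accountable_owner: str      # the named human who owns the residual risk
    owner_role: str             # e.g. "Chief Medical Officer", "Head of Risk"
    signed_off: Optional[date]  # no sign-off date, no deployment

def activate(policy: Policy) -> Policy:
    """Refuse to enforce a threshold that nobody has signed for."""
    if policy.signed_off is None:
        raise ValueError(
            f"Policy '{policy.rule}' has no sign-off from its {policy.owner_role}; "
            "an engineer's default is not governance."
        )
    return policy

activate(Policy("block_pii", "J. Smith", "Head of Risk", date(2025, 1, 14)))
```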
Infrastructure provides: A "403 Forbidden" response, or an "Agent Blocked" notification in the SOC. The outcome is binary: allowed or denied.
Governance requires: A defined workflow for appeal, override, and continuous improvement. In the NHS, if your infrastructure is so rigid that it prevents a doctor from accessing life-saving information, the infrastructure itself has become a clinical safety risk.
Governance asks: "Who has the authority to 'break the glass'? How do we review that override? And how do we update our policies so we don't have to break the glass again tomorrow?"
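A break-the-glass workflow can be sketched in a few lines. The roles and names below are hypothetical, but the shape is the point: the override is possible, it is logged, and it creates a review obligation that governance has to close.

```python
from datetime import datetime, timezone

# Hypothetical break-the-glass sketch: the decision is not binary.
# An authorised role can override a block, but every override
# generates a review record for the next policy cycle.

BREAK_GLASS_ROLES = {"on_call_consultant", "clinical_safety_officer"}
override_reviews = []   # feeds the next policy review

def access_blocked_record(user_role: str, justification: str = "") -> str:
    if user_role in BREAK_GLASS_ROLES and justification:
        override_reviews.append({
            "role": user_role,
            "justification": justification,
            "at": datetime.now(timezone.utc).isoformat(),
            "reviewed": False,   # governance owns closing this loop
        })
        return "access granted under break-glass"
    raise PermissionError("access denied: no break-glass authority or justification")
```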
Infrastructure provides: 50-page technical reports, heat maps of "threat vectors," real-time telemetry.
Governance requires: A clear, concise statement of risk and mitigation. A board-ready governance posture sounds like this: "We have three autonomous agents operating in our customer service department. They are governed by a policy that prevents them from offering discounts above 10% without human approval. Our maximum financial exposure per incident is £5,000."
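Translated into enforcement, that posture is only a few lines of code. This is a sketch with made-up names; what matters is that the two constants come from the board, not from the engineer:

```python
MAX_DISCOUNT_PCT = 10        # board-approved, not engineer-chosen
MAX_EXPOSURE_GBP = 5_000     # maximum financial exposure per incident

def approve_discount(order_value_gbp: float, discount_pct: float,
                     human_approved: bool = False) -> bool:
    """Enforce the board's stated risk appetite on a single agent action."""
    exposure = order_value_gbp * discount_pct / 100
    if discount_pct > MAX_DISCOUNT_PCT and not human_approved:
        return False    # escalate to a human for approval
    if exposure > MAX_EXPOSURE_GBP:
        return False    # outside the signed-off risk appetite
    return True
```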
Infrastructure provides: The audit trail. It shows that on Tuesday at 14:02, the agent took an action.
Governance requires: The "why" and the "due diligence." If an AI agent causes a data breach, the regulator isn't just going to look at the logs. They are going to ask: "What was your risk assessment process? Who signed off on the residual risk?"
Infrastructure gives you the evidence of what happened, but governance provides the justification for why you allowed it to happen in the first place.
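One way to force that justification into the record is to make the audit entry itself carry governance fields that the infrastructure cannot fill in on its own. A hypothetical shape; none of these field names come from any real product:

```python
# Sketch: an audit entry that answers the regulator's second question.
# The "what" fields can be produced automatically; the "why" fields
# only exist if a governance process filled them in.

audit_entry = {
    # what happened (infrastructure produces this)
    "timestamp": "2025-01-14T14:02:00Z",
    "agent": "customer-service-agent-3",
    "action": "offered 8% discount",
    # why it was allowed (only governance can produce this)
    "policy_id": "DISC-001",
    "risk_assessment": "RA-2024-17",          # the due-diligence trail
    "residual_risk_owner": "Head of Risk",    # who signed the residual risk
}
```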
At Periculo, we started by building infrastructure. We built the fast policy engines, the audit logs, and the integration layers. We thought that was what the market wanted.
But when we offered this to our customers and partners, we realised the infrastructure was only 20% of the problem. Our clients were saying, "The tool is great, but what should the policy actually be? Who should sign it? And what does the regulator want to see?"
That's when we realised that "AI Governance" is a knowledge problem solved by a platform, not a platform that replaces knowledge.
We love the Microsoft Agent Governance Toolkit. But we don't stop at the infrastructure.
When we work with clients at Periculo, we start with the governance: which regulations actually apply to the use case, who owns the residual risk, what the override and review process looks like, and what maximum exposure the board has actually signed off on.
Only after we have those answers do we turn on the policy engines.
If you have spent the last six months buying tools but you still can't answer "Who is legally responsible if this agent hallucinates?", then you have a plumbing problem. You have the pipes, but no water.
Stop focusing on the "how" of blocking agents and start focusing on the "why." That is the difference between a technical hurdle and a board-level disaster.
Contact Periculo to bridge the gap between your AI infrastructure and actual governance.