AI Governance · MCP · Architecture · 8 min read

The Four Layers of AI Governance (And the One Nobody Is Building)

Cloudflare, Azure, and AWS have all shipped MCP governance APIs in the past six months. The enforcement infrastructure is ready. But there is a layer above enforcement that nobody is building — and it is the one that matters most.
Leonard Cremer

Founder & CEO, Stratafy

In the past six months, something notable has happened in enterprise AI infrastructure. Cloudflare shipped a full MCP governance API inside their Zero Trust platform. Azure API Management added MCP server routing with per-server authentication and rate limits. AWS Q Developer introduced registry-based governance with organization-level allowlists. Microsoft's own IT team published their internal MCP governance playbook. MetaMCP launched as an open-source orchestration layer with multi-tenant access control.

The message is clear: the enterprise infrastructure vendors have decided that AI tool governance is a real problem, and they're building the plumbing to address it.

But if you look at what's actually being built, and what's conspicuously absent, a pattern emerges. Everyone is building enforcement. Nobody is building the intelligence that tells enforcement what to enforce.

The Stack That's Emerging

AI tool governance is settling into four distinct layers, each with different owners and different capabilities.

Layer 1: The Protocol

At the base sits MCP itself — the Model Context Protocol that standardizes how AI agents connect to external tools. The June 2025 specification update was significant: it classified MCP servers as OAuth Resource Servers and mandated Resource Indicators (RFC 8707) for token scoping. This gives the protocol a basic security vocabulary.

But the spec is deliberately thin on governance. It explicitly states that security principles cannot be enforced at the protocol level — implementors must build their own consent and authorization flows. There are active proposals for "progressive scoping" and "secure elicitation," but these remain proposals. The protocol defines how connections work, not who should be allowed to make them or under what conditions.
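To make the Resource Indicator requirement concrete, here is a minimal sketch of an authorization-code token request that binds the issued token to a single MCP server via the RFC 8707 `resource` parameter. The endpoint and server URLs are hypothetical placeholders, not any real deployment:

```python
from urllib.parse import urlencode

# Hypothetical endpoints, for illustration only.
TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"
MCP_SERVER = "https://mcp.example.com/tools"

def build_token_request(client_id: str, code: str) -> str:
    """Build an authorization-code token request whose `resource`
    parameter (RFC 8707) scopes the issued token to one MCP server."""
    params = {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "code": code,
        # RFC 8707 Resource Indicator: the token is bound to this
        # MCP server and should be rejected everywhere else.
        "resource": MCP_SERVER,
    }
    return urlencode(params)

body = build_token_request("agent-client", "abc123")
print(body)
```

The point of the scoping is blast-radius control: a token minted for one MCP server cannot be replayed against another.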

This is actually the right design decision. Protocols should be simple and composable. Governance belongs above the transport layer.

Layer 2: Authentication & Identity

One layer up, identity providers handle the question "who is this person?" Okta, Azure AD, Auth0, and WorkOS verify identity, manage group membership, and provide the authentication tokens that other systems consume.

Identity is a prerequisite for governance but not a substitute for it. Knowing that someone is a marketing manager in the EMEA team tells you something about what they might need access to, but it doesn't tell you what they should have access to right now, given what's happening in the organization. Identity is static context. Governance decisions require dynamic context.

Layer 3: Enforcement

This is where the action has been in 2025 and 2026. The enforcement layer provisions access, applies rules, and maintains audit trails. It's the mechanical execution of governance decisions.

Platform | Approach | Key Capabilities
Cloudflare Zero Trust | Portal model | Full CRUD for MCP servers, curated tool bundles per role, identity-based access policies, per-invocation logging
Azure API Management | Gateway approach | Centralized policy enforcement, credential injection, transaction-level JWT authorization
AWS Q Developer | Registry model | JSON allowlists hosted over HTTPS, organization-level governance with per-account overrides
MetaMCP | Open-source orchestration | Multi-tenant access control, per-namespace tool filtering, composable MCP server endpoints
Microsoft Internal | Playbook model | Single allowlist catalog with documented owners, scopes, and data boundaries
All of this is real, shipping, well-documented infrastructure. If you know what rules you want, there are multiple production-grade platforms to enforce them.
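To show what a registry-style allowlist might look like in practice, here is an illustrative sketch. The field names are hypothetical, not AWS's (or anyone's) published schema; the shape just captures the idea of an org-wide default with per-account overrides:

```python
import json

# Illustrative only: a minimal registry-style allowlist.
# Field names are hypothetical, not a vendor's published schema.
allowlist = {
    "version": "1.0",
    "servers": [
        {
            "name": "github-mcp",
            "url": "https://mcp.github.example.com",
            "allowed_accounts": ["*"],             # org-wide default
        },
        {
            "name": "finance-mcp",
            "url": "https://mcp.finance.example.com",
            "allowed_accounts": ["123456789012"],  # per-account override
        },
    ],
}

def is_allowed(server_name: str, account: str) -> bool:
    """Check a server/account pair against the registry."""
    for server in allowlist["servers"]:
        if server["name"] == server_name:
            accounts = server["allowed_accounts"]
            return "*" in accounts or account in accounts
    return False

print(json.dumps(allowlist, indent=2))
```

Note what the registry does and does not contain: it says which servers which accounts may reach, but nothing about why, or when that answer should change.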

Layer 4: Strategic Intelligence

Here's the gap.

Every enforcement platform takes rules as input. Someone or something needs to determine what those rules should be. Today, that "someone" is an IT administrator making static policy choices based on job titles and department membership. The rules get set, then they sit until the next quarterly review or until something goes wrong.

This creates a structural disconnect. The decisions about what AI agents should be able to do are strategic decisions — they depend on what the organization is trying to achieve, what initiatives are active, what risks are flagged, who has been assigned to what, and what boundaries are appropriate given current context. But the systems making these decisions (human judgment, scattered across documents and Slack threads) are completely disconnected from the systems enforcing them (Cloudflare, Azure, AWS).

Layer 4 is the system that bridges this gap. It takes organizational context — strategies, initiatives, risk posture, decision authority — and translates it into governance policies that Layer 3 platforms can execute.

Why Layer 4 Can't Be an Afterthought

It's tempting to think enforcement is the hard part and policy is just configuration. The opposite is true. Enforcement infrastructure is well-understood — it's access control, a solved problem with decades of engineering behind it. Policy generation is the genuinely hard problem because it requires understanding organizational context in a structured way.

Consider what a Layer 4 system needs to do:

Initiative-aware provisioning. When a new product launch initiative activates with three team members assigned, their AI agents should automatically get access to the tools relevant to that initiative — project management, design tools, code repositories, communications — and lose access to tools that aren't relevant, like financial systems or CRM write access. When the initiative completes, the access should automatically contract.
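A minimal sketch of that expand-and-contract behavior, with a hypothetical data model (the names and fields are illustrative, not any vendor's API):

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    active: bool
    members: set[str]
    tools: set[str]          # tools this initiative justifies

BASELINE_TOOLS = {"email", "calendar"}

def grants_for(user: str, initiatives: list[Initiative]) -> set[str]:
    """Tool access = baseline + the tools of active initiatives the
    user is assigned to. Completed initiatives contribute nothing,
    so access contracts automatically when the work ends."""
    grants = set(BASELINE_TOOLS)
    for ini in initiatives:
        if ini.active and user in ini.members:
            grants |= ini.tools
    return grants

launch = Initiative("product-launch", True,
                    {"ana", "ben", "cai"},
                    {"project-mgmt", "design", "code-repos"})
print(grants_for("ana", [launch]))
```

The key property is that access is derived, not assigned: when `active` flips to false, the next evaluation returns only the baseline, with no deprovisioning ticket required.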

Risk-responsive constraints. When a risk is flagged — a compliance review, a security incident, a regulatory inquiry — the governance posture should tighten automatically. Tool access that was appropriate yesterday might need to be restricted today, not because the person's role changed, but because the organizational context changed.
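The same idea as a sketch: a flagged risk tightens effective access without any change to the person's role. The risk categories and restriction sets below are hypothetical examples:

```python
# Illustrative mapping from open risk categories to the tool access
# they make inappropriate. Names are hypothetical.
RESTRICTIONS = {
    "security-incident": {"code-repos", "prod-deploy"},
    "compliance-review": {"crm-write", "mass-email"},
}

def effective_access(base_grants: set[str],
                     flagged_risks: set[str]) -> set[str]:
    """Subtract whatever access the currently open risks restrict."""
    restricted: set[str] = set()
    for risk in flagged_risks:
        restricted |= RESTRICTIONS.get(risk, set())
    return base_grants - restricted

grants = {"email", "code-repos", "crm-write"}
print(effective_access(grants, {"security-incident"}))
```

Because the restriction is computed at evaluation time, closing the risk restores the prior posture automatically, again with no manual re-provisioning.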

Decision-gated access. Some tool connections should require an active decision record before an AI agent can use them. Deploying to production, modifying customer contracts, sending mass communications — these are actions where human judgment should gate AI execution, and the decision trail should be preserved.
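A sketch of such a gate, assuming a hypothetical store of decision records keyed by tool:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical set of high-impact tools that require an active
# decision record before an agent call is allowed through.
GATED_TOOLS = {"prod-deploy", "contract-edit", "mass-email"}

def authorize(tool: str, decision_records: dict[str, dict]) -> bool:
    """Allow ungated tools freely; gated tools need an unexpired
    decision record, which also serves as the audit trail."""
    if tool not in GATED_TOOLS:
        return True
    record = decision_records.get(tool)
    if record is None:
        return False
    return record["expires"] > datetime.now(timezone.utc)

records = {
    "prod-deploy": {
        "decision_id": "DEC-0042",
        "approved_by": "release-manager",
        "expires": datetime.now(timezone.utc) + timedelta(hours=4),
    }
}
print(authorize("prod-deploy", records))    # gated, active decision
print(authorize("contract-edit", records))  # gated, no decision
```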

Workflow-level evaluation. As plugins bundle multiple tool connections into business workflows, governance needs to evaluate at the workflow level. A Sales plugin combining CRM access, email drafting, and calendar scheduling is a business workflow — and whether it's appropriate depends on strategic context that no enforcement platform has access to.
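In sketch form, workflow-level evaluation means treating the bundle as the unit of authorization rather than each tool in isolation (names below are hypothetical):

```python
# A plugin is a bundle of tool connections forming one workflow.
sales_plugin = {"crm-read", "email-draft", "calendar"}

def plugin_allowed(plugin_tools: set[str], grants: set[str]) -> bool:
    """A workflow is only as permitted as its least-permitted tool:
    the bundle is approved only if every tool it needs is granted."""
    return plugin_tools <= grants

print(plugin_allowed(sales_plugin,
                     {"crm-read", "email-draft", "calendar", "slack"}))
```

The subset check is deliberately conservative: granting two of three tools would leave the workflow half-functional and the governance posture ambiguous.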

Auditable rationale. When a regulator or an internal auditor asks "why did this AI agent have access to financial data?", the answer needs to be more than "because they're in the finance department." It needs to be "because they were assigned to the Q3 budget initiative, which required financial modeling capability, and the access was authorized by this decision record, scoped to read-only, and expired when the initiative completed."
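Structurally, that answer only exists if every grant carries its justifying context. A sketch of such a record, with illustrative field names:

```python
# Illustrative access record: the grant carries the context that
# justified it, so the audit answer is more than a department name.
grant = {
    "principal": "j.doe",
    "tool": "financial-data",
    "scope": "read-only",
    "initiative": "q3-budget",
    "decision_record": "DEC-0107",
    "expires_with": "initiative:q3-budget",
}

def audit_answer(g: dict) -> str:
    """Render a grant as a human-readable audit rationale."""
    return (f"{g['principal']} had {g['scope']} access to {g['tool']} "
            f"because of initiative '{g['initiative']}', authorized by "
            f"{g['decision_record']}, expiring with {g['expires_with']}.")

print(audit_answer(grant))
```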

None of this is possible without a structured representation of organizational intent that's connected to enforcement infrastructure. That's what Layer 4 is.

The Analogy That Clarifies

Think about how Stripe relates to banks. Banks handle the mechanics of moving money — they're the enforcement layer for financial transactions. But Stripe sits above banks, providing the intelligence layer that determines how transactions should be processed, what rules apply, and how the payment flow should work for a given business context.

Stripe doesn't compete with banks. It makes banks useful for a new class of applications. Banks need Stripe (or something like Stripe) to serve internet commerce, because the banks' native capabilities — while robust — aren't designed for the decision-making layer that modern applications require.

The same structural relationship is emerging in AI governance. Cloudflare, Azure, and AWS are building the banks — robust enforcement infrastructure for AI tool access. What's missing is the Stripe — the intelligence layer that makes enforcement infrastructure useful for organizations that need governance decisions to be contextual, dynamic, and strategically aligned.

Why This Isn't an IT Vendor Problem

You might expect ServiceNow, Okta, or CrowdStrike to build Layer 4. They won't — at least not effectively. Here's why.

Layer 4 requires organizational strategy data. It needs to know what the company is trying to achieve, what initiatives are active, who's assigned to what, what risks are flagged, and what decisions are pending. IT governance vendors don't have this data. They have identity data, device data, network data, and compliance data. But they don't have strategic context.

Strategy and planning tools (Notion, Lattice, various OKR platforms) have some of this data, but they don't have AI governance infrastructure. They don't understand MCP, they don't integrate with enforcement platforms, and they weren't designed to generate machine-readable governance policies.

The system that generates Layer 4 intelligence needs to sit at the intersection of strategic architecture and AI agent infrastructure. It needs deep understanding of both organizational intent and the mechanics of AI tool governance. That intersection is currently unoccupied.

What to Watch For

If you're evaluating your organization's AI governance readiness, here's what the next 12-18 months will look like:

Phase 1 (now): Enforcement infrastructure matures. Cloudflare, Azure, and AWS will continue shipping governance features. Your IT team should be evaluating these platforms and establishing basic MCP governance — allowlists, portals, access policies. This is table stakes.

Phase 2 (mid-2026): The limits of static governance become visible. As AI agent adoption grows, the overhead of manually maintaining governance rules will become unsustainable. Organizations will discover that quarterly permission reviews don't work when strategic context changes weekly. This is when demand for Layer 4 intelligence crystallizes.

Phase 3 (late 2026 - 2027): Dynamic governance becomes a requirement. Regulated industries will be first — financial services, healthcare, and government will need auditable, context-aware AI governance for compliance. But any organization with meaningful AI agent deployment will feel the pressure.

The organizations that move early on enforcement (now) and start thinking about strategic governance (soon) will have a significant advantage. The ones that treat AI tool governance as a future problem will accumulate technical and compliance debt that becomes progressively harder to unwind.

The enforcement layer is ready. The intelligence layer is the opportunity.



© 2026 Stratafy. All rights reserved.