AI Governance · Strategy · MCP · 7 min read

Your AI Can Access Everything. Who Decided That?

Claude Cowork just launched with 11 plugins connecting to your CRM, email, code repos, and financial systems. Your employees' AI agents can now take autonomous action across your entire stack. Who decided what they're allowed to do?
Leonard Cremer

Founder & CEO, Stratafy

Claude Cowork launched its enterprise plugin marketplace last week with 11 plugins. Sales. Finance. Legal. Marketing. HR. Each one bundles access to multiple external systems — CRMs, email, calendars, project management, databases — into a single AI-powered workflow.

Within a few months, most AI-forward companies will have employees whose desktop AI can read and write to file systems, query and update customer records, create tasks and manage projects, send emails and schedule meetings, access financial data, and execute code.

Each of these capabilities arrives through MCP connections — standardized interfaces that give AI agents structured access to external systems. They're individually useful. Collectively, they raise a question that most organizations haven't seriously considered: who decides what each person's AI agent is allowed to do?

The Answer, Right Now, Is Mostly "Nobody"

In most organizations deploying AI agents today, tool access follows the same pattern as SaaS provisioning. IT sets up connections. People get access based on their role or department. Maybe there's an approval workflow. Then the permissions sit there, unchanged, until someone notices a problem.

This worked well enough for traditional software because every action required human initiative. Having Salesforce access didn't mean anything happened automatically. A person had to decide to open the app, navigate to a record, and make a change.

AI agents invert this. They don't wait for initiative — they take it. Give an agent access to the CRM and the email system, and it can decide on its own to contact customers based on pipeline data. Give it access to the code repository and the deployment tools, and it can push changes to production. Each individual permission seems reasonable. The combinations are where things get interesting.

A Scenario That's Already Possible

Consider a marketing team with a fairly standard AI setup. Their Cowork installation has the Marketing plugin enabled, which bundles HubSpot (CRM and email automation), Slack (internal communications), Gmail (external email), and a knowledge base for brand guidelines.

A junior marketer asks their AI agent: "Draft and send follow-up emails to everyone who attended last week's webinar."

The agent has all the permissions it needs. It can query HubSpot for attendee data. It can draft emails using the brand guidelines. It can send those emails through Gmail. Every individual tool access would pass an IT audit.

But the webinar was for a product that's currently under a pricing review. The leadership team decided last week to pause all outbound communication about this product until the new pricing is finalized. That decision lives in a strategy document, a Slack thread between executives, and the heads of a few people who were in the meeting.

The AI agent doesn't know any of this. It has tool access. It doesn't have context.
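The gap can be made concrete with a small sketch. Everything here is illustrative, not a real MCP API: the static permission check that today's tooling performs passes, while the strategic hold that should block the send lives somewhere no check ever looks.

```python
# Hypothetical sketch of the scenario above. All names (PERMISSIONS,
# ACTIVE_HOLDS, context_check) are illustrative, not a real MCP API.

PERMISSIONS = {"junior_marketer": {"hubspot:read", "kb:read", "gmail:send"}}

# Strategic context that exists only in documents and Slack threads:
ACTIVE_HOLDS = [
    {"scope": "outbound_email", "product": "webinar_product",
     "reason": "pricing review in progress"},
]

def permission_check(role: str, action: str) -> bool:
    """What today's tooling evaluates: static, role-based access."""
    return action in PERMISSIONS.get(role, set())

def context_check(scope: str, product: str) -> tuple[bool, str]:
    """What it would need to evaluate: active strategic holds."""
    for hold in ACTIVE_HOLDS:
        if hold["scope"] == scope and hold["product"] == product:
            return False, hold["reason"]
    return True, ""

# The follow-up emails pass every permission check the agent will ever see...
print(permission_check("junior_marketer", "gmail:send"))  # True
# ...and would only be stopped by a context check that nothing performs.
print(context_check("outbound_email", "webinar_product"))
```

The point of the sketch: both functions are trivial to write; what's missing in practice is any system that holds `ACTIVE_HOLDS` in a queryable form and evaluates it before the agent acts.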

Now multiply this by every employee with an AI agent, across every tool connection, every day. The question isn't whether something will go wrong — it's whether you'll know when it does.

The Visibility Gap Is the Scarier Problem

Compound action risk gets the headlines, but the visibility gap is more insidious. Right now, in most organizations, there's no way for leadership to answer basic questions about AI agent activity:

| Question | Can You Answer It Today? |
| --- | --- |
| What tools are our AI agents actually accessing? | Usually no |
| Which employees' agents are taking autonomous actions? | Rarely tracked |
| Are those actions aligned with current quarterly priorities? | No mechanism exists |
| When an agent accesses customer data, what does it do with it? | Opaque |
| Who authorized the tool combinations agents are using? | Nobody explicitly |
| What happens when strategic context changes mid-quarter? | Permissions stay static |
| Can you produce an audit trail linking agent actions to strategic intent? | Not possible today |

This is the strategy-execution gap wearing new clothes. For years, organizations have struggled with the disconnect between what leadership decides and what teams actually do. AI agents add a new layer — now there's a disconnect between what teams intend and what their AI tools actually execute.

The irony is that AI agents are supposed to close execution gaps. They're faster, more consistent, less prone to forgetting. But without governance, they execute efficiently on actions that may or may not align with organizational intent. Speed without direction isn't velocity — it's chaos with better tooling.

Why the IT Playbook Doesn't Work Here

IT departments will reach for familiar tools: permission matrices, role-based access, quarterly reviews, approval workflows. This is understandable and insufficient.

The core issue is that AI tool governance decisions are not IT decisions. They're strategic decisions.

Whether a person's AI agent should have access to the CRM isn't a question about their job title. It's a question about what they're working on right now, what the organization's current priorities are, what risks are active, and what decisions are pending. A tool that's appropriate during a growth sprint might be inappropriate during a compliance audit. A workflow that makes sense for a strategic account executive is dangerous for a junior SDR during an outbound pause.

Static permissions can't capture this. They weren't designed to. They were designed for a world where tools are passive and humans are the active agents. In a world where AI agents take autonomous action, permissions need to be dynamic, contextual, and strategically informed.

What "Strategically Informed" Actually Means

Think about how a good manager handles tool access today — not through IT tickets, but through judgment calls based on context.

"Yes, give the new hire access to the analytics dashboard, but only read access until they've been through onboarding."

"The sales team should be using the new CRM workflow, but hold off on the enterprise accounts until we've aligned on the new pricing."

"During the audit period, nobody should be sending automated emails to customers. Period."

Each of these decisions requires knowing what's happening in the organization: what initiatives are active, what risks are flagged, what decisions are pending, who has authority over what. It requires strategic context — the same context that's typically locked in executives' heads, scattered across documents, or buried in Slack threads.
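Those three judgment calls can be read as contextual rules evaluated against live organizational state. A minimal sketch, with all field names (`onboarded`, `pricing_aligned`, `audit_period`) hypothetical:

```python
# Hypothetical sketch: manager-style judgment calls encoded as rules that
# reference organizational state, not just roles. All fields illustrative.

def allowed(action: dict, person: dict, org_state: dict) -> bool:
    # "Read-only analytics access until the new hire finishes onboarding."
    if action["tool"] == "analytics" and action.get("mode") == "write":
        if not person["onboarded"]:
            return False
    # "Hold the new CRM workflow on enterprise accounts until pricing is aligned."
    if action["tool"] == "crm_workflow" and action.get("segment") == "enterprise":
        if not org_state["pricing_aligned"]:
            return False
    # "No automated customer emails during the audit period. Period."
    if action["tool"] == "auto_email" and org_state["audit_period"]:
        return False
    return True

org = {"pricing_aligned": False, "audit_period": True}
new_hire = {"onboarded": False}

print(allowed({"tool": "analytics", "mode": "read"}, new_hire, org))   # True
print(allowed({"tool": "analytics", "mode": "write"}, new_hire, org))  # False
print(allowed({"tool": "auto_email"}, new_hire, org))                  # False
```

Note that flipping `org_state["audit_period"]` to `False` changes outcomes without touching a single role or permission, which is exactly what static RBAC cannot express.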

For AI agent governance to work at scale, that strategic context needs to be structured, queryable, and connected to the systems that enforce permissions. The governance decision ("this person's agent should have read-only CRM access during the audit") needs to flow automatically to the enforcement layer (Cloudflare, Azure, whatever manages the actual connections).

Right now, there's infrastructure to enforce whatever rules you set. Cloudflare's Zero Trust platform has a full MCP governance API. Azure API Management can route and restrict MCP traffic. AWS has registry-based allowlists. The enforcement machinery exists.

What doesn't exist is the intelligence layer that determines what rules should be set in the first place — the system that connects "we're in a compliance review" to "restrict outbound email tool access for all marketing agents."
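One way to picture that missing layer is a translator from strategic context changes to enforcement rules. In this sketch the rule shapes and `push_rules` are placeholders, not the actual Cloudflare or Azure APIs; only the shape of the idea matters:

```python
# Hypothetical sketch of an "intelligence layer": translate a strategic
# state into MCP tool restrictions, then hand them to whatever gateway
# enforces them. push_rules() stands in for a real enforcement API.

def rules_for(context: str) -> list[dict]:
    """Map a strategic state to tool restrictions (illustrative mapping)."""
    if context == "compliance_review":
        return [
            {"agents": "marketing/*", "tool": "email:send", "effect": "deny"},
            {"agents": "*", "tool": "crm:write", "effect": "require_approval"},
        ]
    if context == "growth_sprint":
        return [{"agents": "sales/*", "tool": "crm:write", "effect": "allow"}]
    return []

def push_rules(rules: list[dict]) -> int:
    """Placeholder for the enforcement layer's rule API."""
    for rule in rules:
        print(f"apply {rule['effect']}: {rule['agents']} -> {rule['tool']}")
    return len(rules)

# "We're in a compliance review" flows straight through to enforcement:
push_rules(rules_for("compliance_review"))
```

The translation step is the product gap: the enforcement side of `push_rules` already exists in several platforms, while nothing today computes the left-hand side from strategic context.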

The Window Is Now

This might feel like a problem for 2027. It's not. Cowork launched last week. Copilot has been in enterprises for over a year. Gemini workspace integrations are expanding. The adoption curve for AI desktop agents is steep, and governance infrastructure lags adoption by 12-18 months in every technology cycle.

The organizations that figure out AI tool governance early — that build the connection between strategic intent and AI agent capabilities — will have a structural advantage. Their AI agents will amplify strategy rather than operate in a vacuum. Their audit trails will show not just what happened, but why it was authorized. Their teams will move faster because they'll trust that their AI tools are operating within appropriate boundaries.

The organizations that wait will accumulate governance debt — months or years of AI agent actions taken without strategic alignment, without audit trails, without the ability to answer the basic question: who decided that AI agent should be able to do that?

The answer shouldn't be "nobody."


© 2026 Stratafy. All rights reserved.