Guardrails Problem
Why it matters
Without guardrails embedded in the strategic infrastructure, teams and AI agents are free to pursue whatever path appears locally optimal, even when it conflicts with organisational values, risk tolerance, or long-term positioning. The problem compounds as AI agents take on autonomous execution: an agent acting at machine speed can repeat a misaligned decision many times over before any human notices.
How Stratafy addresses this
Stratafy embeds foundation-level constraints as queryable data, enforces governance at each architectural level, and provides semantic guardrails that evaluate meaning, not just keywords.
Foundation as queryable constraint layer
Values, principles, and risk tolerance are stored as structured, machine-readable records — not wall posters. AI agents query these constraints before acting, and the system flags decisions that conflict with stated values or exceed defined risk boundaries.
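A minimal sketch of what querying that constraint layer before acting might look like. The record shape, the RiskTolerance scale, and the check_action helper are illustrative assumptions, not Stratafy's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTolerance(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass(frozen=True)
class Constraint:
    """A foundation-level value or boundary stored as structured data."""
    id: str
    statement: str
    max_risk: RiskTolerance

# Hypothetical foundation records; real entries would come from the platform.
FOUNDATION = [
    Constraint("privacy-first", "Never trade user data for revenue", RiskTolerance.LOW),
    Constraint("capital-discipline", "No single bet above 5% of annual budget", RiskTolerance.MODERATE),
]

def check_action(action_risk: RiskTolerance, touched_constraints: list[str]) -> list[str]:
    """Return the IDs of constraints an action would violate, so the
    system can flag the decision before the agent executes it."""
    return [
        c.id for c in FOUNDATION
        if c.id in touched_constraints and action_risk.value > c.max_risk.value
    ]

# An agent queries the constraint layer before acting:
violations = check_action(RiskTolerance.HIGH, ["privacy-first"])
if violations:
    print(f"Flagged for review: conflicts with {violations}")
```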
Governance enforced by architectural level
Each layer of the strategy architecture carries its own governance model: foundation changes require board approval on an annual cadence, strategy changes require leadership consensus quarterly, and initiatives need budget-holder approval monthly. Authority is structural, not discretionary.
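One way to make that authority structural rather than discretionary is to encode it as data the system consults on every change request. The level names, approver roles, and cadences below mirror the examples above but are otherwise assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceRule:
    approver: str   # role whose sign-off is required
    cadence: str    # how often changes at this level are entertained

# Governance is attached to the architectural level, not to individuals.
GOVERNANCE = {
    "foundation": GovernanceRule(approver="board", cadence="annually"),
    "strategy":   GovernanceRule(approver="leadership consensus", cadence="quarterly"),
    "initiative": GovernanceRule(approver="budget holder", cadence="monthly"),
}

def required_approval(level: str) -> GovernanceRule:
    """Look up who must approve a change at a given level; raising on an
    unknown level prevents ungoverned edits from slipping through."""
    try:
        return GOVERNANCE[level]
    except KeyError:
        raise ValueError(f"No governance model defined for level '{level}'")

print(required_approval("strategy"))
# GovernanceRule(approver='leadership consensus', cadence='quarterly')
```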
Semantic guardrails for AI agents
A meaning-based policy layer evaluates AI outputs against organisational intent — catching a sales agent promising an unplanned feature or a support agent offering refunds that contradict pricing strategy. These violations are semantically misaligned but invisible to keyword filters.
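A semantic guardrail has to compare meaning, not strings. The sketch below assumes an embedding function and a similarity threshold; the toy embed shown here is a stand-in (a production guardrail would use a learned sentence encoder so that paraphrases land near each other), and the policy texts and threshold are illustrative:

```python
import math

DIM = 64

def embed(text: str) -> list[float]:
    """Toy bag-of-words embedding; a real system would call an actual
    sentence encoder here so that meaning, not wording, drives similarity."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm or 1e-12)

# Prohibited behaviours expressed as statements of intent, not keywords.
POLICIES = [
    "Do not promise product features that are not on the committed roadmap",
    "Do not offer refunds or discounts that contradict the pricing strategy",
]

def violates_policy(agent_output: str, threshold: float = 0.8) -> bool:
    """Flag output whose meaning sits close to a prohibited behaviour,
    even when no banned keyword appears in the text."""
    out_vec = embed(agent_output)
    return any(cosine(out_vec, embed(p)) >= threshold for p in POLICIES)

# With a real encoder, this paraphrase would score close to the second
# policy even though it shares almost no keywords with it:
flagged = violates_policy("Happy to waive the charge entirely this month")
```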
Trust spectrum modes as operational boundaries
Five modes from fully autonomous to human-in-full-control define what AI can and cannot do at each level. Routine metric collection runs autonomously. Strategic pivots require human judgment. Each mode is itself a guardrail — structurally preventing agents from exceeding their authority.
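The spectrum lends itself to an ordered enum, where the mode actually applied to any action is the stricter of the agent's own mode and the action's ceiling. The document names only the two endpoints; the middle mode labels and the sample actions below are assumptions:

```python
from enum import IntEnum

class TrustMode(IntEnum):
    # Ordered from least to most autonomy. The endpoints come from the
    # text; the three intermediate labels are illustrative.
    HUMAN_IN_FULL_CONTROL = 1
    HUMAN_APPROVES = 2
    HUMAN_REVIEWS = 3
    HUMAN_MONITORS = 4
    FULLY_AUTONOMOUS = 5

# The most autonomy any agent may exercise per action.
AUTONOMY_CEILING = {
    "collect_metrics": TrustMode.FULLY_AUTONOMOUS,       # routine work
    "strategic_pivot": TrustMode.HUMAN_IN_FULL_CONTROL,  # always human-led
}

def effective_mode(agent_mode: TrustMode, action: str) -> TrustMode:
    """Apply the stricter of the agent's mode and the action's ceiling,
    so an agent can never exceed its authority by construction."""
    return min(agent_mode, AUTONOMY_CEILING[action])

# Even a fully autonomous agent is structurally held to human control
# when the action is a strategic pivot:
print(effective_mode(TrustMode.FULLY_AUTONOMOUS, "strategic_pivot"))
# TrustMode.HUMAN_IN_FULL_CONTROL
```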
