Part 6 of 7

Hot Strategic Context

Strategy must be available wherever decisions happen — in under 200 milliseconds

The feedback loop between strategy and execution only works if it's fast. People make decisions without consulting strategy not because they don't care, but because strategy takes too long to load.

If strategic context isn't instant, it won't govern decisions. Strategy that doesn't govern decisions isn't strategy — it's documentation.

The Latency Problem

Assembling strategic context requires multiple database queries. Run sequentially, they take 3-5 seconds: perfectly acceptable for a dashboard load, completely unacceptable for conversational AI.

Strategic context is also role-dependent — a CEO needs a different view than a product lead, who needs a different view than a team member. This multiplies the retrieval complexity for every interaction.
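A minimal sketch of why sequential retrieval is the bottleneck. The fetcher names and entity shapes here are illustrative assumptions, with `setTimeout` standing in for real query latency; the point is that parallel assembly costs only the slowest single query, not the sum.

```typescript
// Hypothetical fetchers standing in for database queries. Names and shapes
// are assumptions for illustration, not the actual API.
type Entity = { id: string; kind: string };

const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function fetchStrategies(): Promise<Entity[]> {
  await delay(50); // stand-in for query latency
  return [{ id: "s1", kind: "strategy" }];
}
async function fetchInitiatives(): Promise<Entity[]> {
  await delay(50);
  return [{ id: "i1", kind: "initiative" }];
}
async function fetchRisks(): Promise<Entity[]> {
  await delay(50);
  return [{ id: "r1", kind: "risk" }];
}

// Sequential: latencies add up (~150ms here; seconds with real queries).
async function assembleSequential(): Promise<Entity[]> {
  return [
    ...(await fetchStrategies()),
    ...(await fetchInitiatives()),
    ...(await fetchRisks()),
  ];
}

// Parallel: total latency is the slowest single query (~50ms here).
async function assembleParallel(): Promise<Entity[]> {
  const [s, i, r] = await Promise.all([
    fetchStrategies(),
    fetchInitiatives(),
    fetchRisks(),
  ]);
  return [...s, ...i, ...r];
}
```

Even with parallelism, real queries keep runtime assembly too slow for conversation, which is what motivates pre-computation below.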

How Hot Strategic Context Works

Pre-Computed Context Bundles

Curated package per role and workspace. Includes relevant strategies, initiatives, risks, assumptions, metrics, and decisions. Sized for AI model context windows.
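One plausible shape for such a bundle, sketched as a TypeScript interface. Every field name here is an assumption for illustration; the real schema is the product's own.

```typescript
// Hypothetical shape of a pre-computed context bundle: one per (role,
// workspace) pair, containing the entity types named above.
interface ContextBundle {
  role: "ceo" | "product_lead" | "member"; // role-dependent view
  workspaceId: string;
  strategies: string[];
  initiatives: string[];
  risks: string[];
  assumptions: string[];
  metrics: string[];
  decisions: string[];
  tokenEstimate: number; // sized for the AI model's context window
}
```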

Incremental Cache Updates

Only the affected cache entries are invalidated on change. Event-driven invalidation keeps the cache fresh without full recomputation.

Session Pre-Warming

Pre-loads the likely context bundle on session start, based on role and recent activity. By the time a user asks a question, the answer is already staged.
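A sketch of pre-warming under stated assumptions: `loadBundle` is a hypothetical loader (simulated here with a string), and the key scheme is invented for illustration.

```typescript
// In-memory store of staged bundles. In production this would be the hot
// cache layer; the Map is a stand-in.
const warmed = new Map<string, string>();

// Hypothetical bundle loader; real assembly would query the database.
async function loadBundle(key: string): Promise<string> {
  return `bundle-for-${key}`; // stand-in for real assembly
}

// On session start, stage the bundle the user will most likely need.
async function preWarm(role: string, workspaceId: string): Promise<void> {
  const key = `ctx:${workspaceId}:${role}`;
  if (!warmed.has(key)) warmed.set(key, await loadBundle(key));
}

// When the question arrives, the answer is served with no assembly cost.
function getStaged(role: string, workspaceId: string): string | undefined {
  return warmed.get(`ctx:${workspaceId}:${role}`);
}
```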

Token Budget Management

The most strategically relevant entities get full representation; supporting context is progressively summarised. The result is maximum strategic awareness within model limits.
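The budgeting rule above can be sketched as a greedy packer: rank by relevance, include full text while the budget allows, then fall back to summaries. Approximating tokens by word count is an assumption for illustration; a real system would use the model's tokenizer.

```typescript
// Each entity carries a full representation, a one-line summary, and a
// relevance score. Names are illustrative.
interface Item {
  text: string;
  summary: string;
  relevance: number;
}

// Crude token estimate: word count stands in for a real tokenizer.
const tokens = (s: string) => s.split(/\s+/).length;

function packWithinBudget(items: Item[], budget: number): string[] {
  const ranked = [...items].sort((a, b) => b.relevance - a.relevance);
  const out: string[] = [];
  let used = 0;
  for (const item of ranked) {
    const full = tokens(item.text);
    if (used + full <= budget) {
      out.push(item.text); // most relevant: full representation
      used += full;
    } else {
      const brief = tokens(item.summary);
      if (used + brief <= budget) {
        out.push(item.summary); // supporting context: summarised
        used += brief;
      }
    }
  }
  return out;
}
```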

What This Enables

Conversational Strategic AI

Under 200ms context retrieval means AI can reason about strategy in real time. No loading spinners, no "let me look that up" delays.

Strategy in the Flow of Work

When strategic context is instant, the threshold for consulting strategy drops to zero. People check strategy because it costs nothing.

Efficient AI Interactions

Fewer tokens, full awareness. Pre-computed bundles mean AI agents get exactly the context they need without wasteful retrieval.

Multi-Surface Access

Strategy becomes ambient — available in chat, dashboards, integrations, and API calls. Same context, every surface, instantly.

Consistency Guarantee

Speed without accuracy is worse than slowness. Cached data is never more than a few seconds behind the live database: every cache entry carries a timestamp, and stale entries are automatically refreshed before serving.
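That rule can be sketched as a staleness check on read. The names `maxAgeMs` and `refresh` are assumptions for illustration, and a Map stands in for the real cache.

```typescript
// Every entry carries the time it was fetched.
interface Cached<T> {
  value: T;
  fetchedAt: number;
}

const store = new Map<string, Cached<string>>();

// Serve from cache only if the entry is within the staleness bound;
// otherwise refresh before serving, never after.
async function serve(
  key: string,
  maxAgeMs: number,
  refresh: () => Promise<string>,
): Promise<string> {
  const hit = store.get(key);
  if (hit && Date.now() - hit.fetchedAt <= maxAgeMs) {
    return hit.value; // fresh enough: serve from cache
  }
  const value = await refresh(); // stale or missing: refresh first
  store.set(key, { value, fetchedAt: Date.now() });
  return value;
}
```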

Technical Approach

Three layers of caching, each optimised for a different access pattern.

PostgreSQL Materialised Views

Warm cache. Pre-computed joins across strategic entities, refreshed on schedule. Handles complex relational queries without runtime overhead.

Redis Hot Cache

Hot cache. Serialised context bundles keyed by role and workspace. Sub-millisecond retrieval for the most common access patterns.
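A sketch of the hot-cache access pattern. A Map stands in for Redis here; in production these would be GET/SET calls against serialised JSON values, and the `ctx:<workspace>:<role>` key scheme is an assumption for illustration.

```typescript
// In-memory stand-in for Redis: string keys to serialised JSON values.
const hot = new Map<string, string>();

// One key per (workspace, role) pair.
const hotKey = (workspaceId: string, role: string) => `ctx:${workspaceId}:${role}`;

// Serialise once on write; reads are a single key lookup plus a parse.
function putBundle(workspaceId: string, role: string, bundle: object): void {
  hot.set(hotKey(workspaceId, role), JSON.stringify(bundle));
}

function getBundle<T>(workspaceId: string, role: string): T | undefined {
  const raw = hot.get(hotKey(workspaceId, role));
  return raw === undefined ? undefined : (JSON.parse(raw) as T);
}
```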

Supabase Realtime Invalidation

Cache invalidation via Supabase Realtime subscriptions. When strategy data changes, only the affected cache entries are invalidated — no polling, no delays.
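A sketch of the invalidation side. The change event is modelled as a plain object here; in production it would arrive as a Supabase Realtime `postgres_changes` payload via a channel subscription. The key scheme and field names are assumptions for illustration.

```typescript
// Cache with entries for two workspaces. The Map stands in for the hot cache.
const cache = new Map<string, string>([
  ["ctx:w1:ceo", "..."],
  ["ctx:w1:member", "..."],
  ["ctx:w2:ceo", "..."],
]);

// Simplified change event; a real payload carries the changed table and row.
interface ChangeEvent {
  table: string;
  workspaceId: string;
}

// Drop only the entries for the changed workspace; everything else stays
// warm. A finer-grained scheme could also key on the changed table.
function invalidate(event: ChangeEvent): number {
  let dropped = 0;
  for (const key of [...cache.keys()]) {
    if (key.startsWith(`ctx:${event.workspaceId}:`)) {
      cache.delete(key);
      dropped++;
    }
  }
  return dropped;
}
```

In production the handler would be registered on a Realtime channel rather than called directly, so invalidation is pushed by the database change itself, with no polling.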

Continue the Series