
Context Is Information Architecture

The hidden discipline that determines AI output quality

Cleo's Team · Building Cleo
3 min read

There is a common misconception that AI output quality is primarily a function of model capability. Choose a better model, get better results. This is true up to a point, and then it stops being true. Beyond that point, the differentiator is not what the model can do but what the model knows when it acts.

Context assembly - the discipline of choosing what information reaches the AI at the moment of generation - is the most under-discussed aspect of AI product architecture. It is also, in our experience, the one that matters most.

The information diet

A language model's context window is not unlimited. Even as windows grow larger, filling them indiscriminately degrades performance. The model drowns in information. Relevant signals get buried under irrelevant noise. The output becomes generic rather than specific.

We think of context assembly as an information diet. The goal is not to feed the model everything we know about the user. The goal is to feed it exactly what it needs for this specific task at this specific moment. Brand voice guidelines when generating content. Campaign performance data when optimising ads. Contact segmentation when composing emails. Not all of these at once - the right slice.
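One way to picture the "right slice" idea is a simple mapping from task type to the knowledge categories that should reach the model. This is a minimal sketch; the task names and category lists here are illustrative assumptions, not Cleo's actual schema.

```python
# Illustrative task -> context-category mapping. Names are assumptions
# chosen to mirror the examples in the text, not a real schema.
CONTEXT_SLICES = {
    "social_post": ["brand_voice", "recent_posts", "product_info"],
    "ad_optimisation": ["campaign_performance", "brand_voice"],
    "email": ["subscriber_segments", "past_campaigns", "promoted_content"],
}

def select_categories(task: str) -> list[str]:
    """Return only the knowledge categories relevant to this task.

    An unrecognised task gets an empty slice rather than everything:
    the default is to feed the model less, not more.
    """
    return CONTEXT_SLICES.get(task, [])
```

The point of the explicit table is the deliberate omission: composing an email never pulls in ad performance data, no matter how much of it exists.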

Vector search as curation

We use semantic search to retrieve contextually relevant information from the user's knowledge base. When the AI is about to generate a social media post, the retrieval system finds the brand voice document, recent posts in a similar style, and any relevant product information. When the AI is about to draft an email, it finds the subscriber segment details, past campaign results, and the content piece being promoted.

This is not keyword matching. It is meaning-based retrieval that understands the relationship between the current task and the stored knowledge. The result is a compact, highly relevant context window that lets the model produce output that feels like it was written by someone who actually knows the business.
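Under the hood, meaning-based retrieval typically reduces to comparing embedding vectors. The following is a bare-bones sketch of that core step, assuming documents have already been embedded by some model; the document structure and function names are hypothetical.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list[float], documents: list[dict], k: int = 3) -> list[dict]:
    """Rank stored documents by semantic similarity to the task query
    and keep only the top k -- a compact, relevant slice, not everything."""
    ranked = sorted(documents,
                    key=lambda d: cosine(query_vec, d["embedding"]),
                    reverse=True)
    return ranked[:k]
```

A production system would use an approximate-nearest-neighbour index rather than a linear scan, but the contract is the same: a query vector in, a small ranked slice out.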

The assembly pipeline

Context assembly is a pipeline, not a single step. First, we determine what type of task the AI is performing. Then we identify which categories of information are relevant. Then we retrieve and rank the specific pieces. Then we format them into a structured prompt supplement that the model can efficiently process.

Each stage has its own logic and constraints. The pipeline runs in under two hundred milliseconds - fast enough to feel invisible to the user. But behind that speed is a carefully designed information architecture that determines what the AI sees, in what order, with what emphasis.
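The later stages of that pipeline can be sketched as a single function: filter to the relevant categories, keep the top-ranked pieces that fit a size budget, and emit a structured prompt supplement. Everything here is an assumption for illustration, including the category sets, the character-based budget, and the output format.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    category: str
    text: str
    score: float  # relevance score produced by the retrieval stage

def assemble_context(task: str, ranked_docs: list[Doc], max_chars: int = 4000) -> str:
    """Stages 2-4 of a hypothetical assembly pipeline: select relevant
    categories for the task, rank within them, and format the survivors
    into a budget-bounded prompt supplement."""
    # Stage 2: which categories matter for this task (illustrative mapping).
    relevant = {
        "social_post": {"brand_voice", "recent_posts"},
        "email": {"subscriber_segments", "past_campaigns"},
    }.get(task, set())
    # Stage 3: rank the relevant pieces, best first.
    candidates = sorted((d for d in ranked_docs if d.category in relevant),
                        key=lambda d: d.score, reverse=True)
    # Stage 4: pack greedily under the budget and format with labelled sections.
    picked, used = [], 0
    for doc in candidates:
        if used + len(doc.text) > max_chars:
            break
        picked.append(doc)
        used += len(doc.text)
    return "\n\n".join(f"[{d.category}]\n{d.text}" for d in picked)
```

The budget enforcement is the part that keeps the context window compact: once the slice is full, lower-ranked material is dropped rather than squeezed in.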

Why this is the real product

The model is a commodity. The data is the user's own. The context assembly pipeline - the intelligence that connects them - is where the product lives. It is the reason the same model produces generic output in one product and remarkably specific output in another.

Context is not a technical detail. It is the product.

- Cleo's Team


Written by Cleo's Team

Building Cleo, an AI marketing operating system. These posts cover the architecture decisions, technical challenges, and lessons learned along the way.
