
Security as Architecture

Why security in AI products is a structural concern, not a feature checkbox

Cleo's Team
3 min read

The security posture of most software products comes down to authentication and authorisation - who are you, and what are you allowed to do? In AI products, there is a third dimension: what can the AI be tricked into doing?

When natural language is the interface and tool calls are the mechanism, every input is adversarial by default. Not because your users are malicious, but because the boundary between a legitimate request and an injection attack is harder to define when the input medium is human language.

Defence in depth

We do not rely on any single layer for security. Each boundary in the system enforces its own guarantees independently.

At the API boundary, every parameter is validated through typed schemas. User identifiers must match expected formats. Strings have length limits. Enumerations are restricted to valid values. This validation catches malformed inputs before they reach any business logic.
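As a concrete illustration, here is what that kind of boundary validation might look like with Pydantic v2. The model, field names, and constraints are hypothetical, not Cleo's actual schemas:

```python
# A hypothetical request schema in the spirit described above (Pydantic v2).
from enum import Enum

from pydantic import BaseModel, Field


class CampaignStatus(str, Enum):
    DRAFT = "draft"
    ACTIVE = "active"
    ARCHIVED = "archived"


class UpdateCampaignRequest(BaseModel):
    # User identifiers must match an expected format.
    user_id: str = Field(pattern=r"^usr_[a-z0-9]{12}$")
    # Strings have explicit length limits.
    title: str = Field(min_length=1, max_length=200)
    # Enumerations are restricted to valid values.
    status: CampaignStatus


# Malformed input fails here, before any business logic runs:
#   UpdateCampaignRequest(user_id="x", title="", status="superadmin")
#   -> raises pydantic.ValidationError
```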

At the service boundary, every operation verifies that the requesting user belongs to the organisation they are operating on. This check happens in the service layer, not in the API layer, because services are the unit of trust. Even if an API endpoint is misconfigured, the service will reject unauthorised operations.
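A minimal sketch of what that service-layer check could look like. The class and helper names (CampaignService, get_membership) are illustrative stand-ins, not real API:

```python
# A hypothetical service with the membership check inside it, not in the route.
class NotAuthorisedError(Exception):
    pass


class CampaignService:
    def __init__(self, db):
        self.db = db  # db.get_membership / db.archive_campaign are assumed

    def _require_member(self, user_id: str, org_id: str) -> None:
        # The check lives in the service layer: even a misconfigured API
        # endpoint cannot reach data without passing through it.
        if self.db.get_membership(user_id, org_id) is None:
            raise NotAuthorisedError(f"{user_id} is not a member of {org_id}")

    def archive_campaign(self, user_id: str, org_id: str, campaign_id: str):
        self._require_member(user_id, org_id)
        return self.db.archive_campaign(org_id, campaign_id)
```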

At the data layer, row-level security policies ensure that queries can only access data belonging to the requesting user's organisation. This is the final guarantee - even if everything above it fails, the database itself will not return data the user should not see.
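For a sense of what that final guarantee can look like, here is a sketch of a Postgres row-level security policy, assuming the tenant id is bound to the connection per request. The table, column, and setting names are hypothetical:

```python
# Hypothetical Postgres RLS setup, executed here via psycopg.
RLS_MIGRATION = """
ALTER TABLE campaigns ENABLE ROW LEVEL SECURITY;

-- Rows are visible only when their org matches the session's org.
-- current_setting(..., true) returns NULL when unset, so an unbound
-- session sees nothing: deny by default.
CREATE POLICY org_isolation ON campaigns
    USING (org_id = current_setting('app.current_org_id', true)::uuid);
"""


def bind_org(conn, org_id: str) -> None:
    # Bind the tenant once per transaction; every later query on this
    # connection is filtered by the policy, even if application code
    # forgets a WHERE clause.
    conn.execute(
        "SELECT set_config('app.current_org_id', %s, true)", (org_id,)
    )
```

The deny-by-default behaviour is the point: an unscoped connection returns no rows rather than all rows.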

The AI-specific surface

AI products have security considerations that traditional software does not. The AI constructs tool calls from natural language, which means the parameters are not pre-validated by a form or a typed API client. The AI might construct a tool call with a subtly wrong parameter - not because of an attack, but because of a misunderstanding.

Every tool call passes through the same validation pipeline that a manual API call would. The AI gets no special privileges. Its tool calls are validated, authorised, and scoped exactly as if a user had submitted them through the interface. This means the AI cannot accidentally bypass security checks, even if the underlying model is confused about its permissions.
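One way to enforce that single pipeline is a tool registry whose handlers are the very functions the HTTP routes call. This sketch assumes Pydantic for validation; the registry and all names are illustrative:

```python
# A hypothetical registry that gives AI tool calls no path of their own.
from pydantic import BaseModel, ValidationError


class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name: str, schema: type[BaseModel], handler) -> None:
        # `handler` is the same function the HTTP route calls.
        self._tools[name] = (schema, handler)

    def execute(self, name: str, user_id: str, org_id: str, raw_args: dict):
        if name not in self._tools:
            raise ValueError(f"unknown tool: {name}")
        schema, handler = self._tools[name]
        try:
            # The same typed validation a form submission would get.
            args = schema.model_validate(raw_args)
        except ValidationError as exc:
            # A confused model produces a rejected call, not a bypass.
            return {"error": str(exc)}
        # The handler performs the same service-layer authorisation
        # as a manually submitted request.
        return handler(user_id, org_id, args)
```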

Isolation as default

Multi-tenant isolation is not a feature we built on top of the platform. It is the foundation the platform is built on. Every query, every tool call, every piece of generated content is scoped to an organisation. There is no "admin mode" that crosses boundaries. There is no query that can accidentally return another organisation's data.

This isolation extends to AI context. When the AI assembles information for a conversation, it can only retrieve knowledge that belongs to the user's organisation. The vector search, the knowledge retrieval, the context assembly - all scoped. The AI literally cannot see another organisation's data.
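A sketch of what organisation-scoped retrieval might look like, assuming pgvector-style similarity search; the table, columns, and query shape are assumptions for illustration:

```python
# Hypothetical org-scoped retrieval with pgvector; <=> is cosine distance.
def retrieve_context(conn, org_id: str, query_embedding: list[float], k: int = 5):
    # The org filter is part of the query itself, and row-level security
    # backs it up: there is no code path that returns another
    # organisation's knowledge.
    vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
    rows = conn.execute(
        """
        SELECT chunk_text
        FROM knowledge_chunks
        WHERE org_id = %s
        ORDER BY embedding <=> %s::vector
        LIMIT %s
        """,
        (org_id, vec, k),
    )
    return [r[0] for r in rows]
```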

Why this matters

Security in an AI product is not just about blocking individual attacks. It is about building a system where the wrong thing cannot happen, even under conditions you did not anticipate. Structural security - security that is built into the architecture rather than added as a layer - is the only approach that scales with the unpredictability of AI systems.

- Cleo's Team


Written by Cleo's Team

Building Cleo, an AI marketing operating system. These posts cover the architecture decisions, technical challenges, and lessons learned along the way.
