Thesis

Why Personal Context Is the Missing Primitive for Agents

Published 21 April 2026 · 8 min read

Upcoming product · 2030 vision · not yet in general availability

Quick answer. Every new AI agent starts from zero. That reset is not a feature; it is a missing layer. ChatGPT Memory is per-vendor and not portable. MCP memory servers are per-agent silos. The right shape is a user-owned vault that any agent can query under explicit, scoped consent. That vault is the piece GeraMind is trying to be.

The reprompt tax

Count the minutes per week you spend re-telling an AI agent something it should already know: the languages you speak, the cities you live in, the airlines you avoid, the medications you take, the names of your kids, the names of your clients. Every new agent is a blank slate. Every conversation pays a tax.

This is not what agents were supposed to feel like five years ago. The promise of an AI assistant was that you would onboard it once. Instead we are stuck in an era where the vendor owns the memory, and switching vendors means starting over.

Why vendors built vendor memory first

OpenAI shipped ChatGPT Memory in 2024. Anthropic followed with project-level memory primitives. Google’s Gemini has personal context integration. Each of these is useful, but each is vendor-locked and not portable. If you move from ChatGPT to Claude, your memory stays with OpenAI.

That was the right short-term move for the vendors: fast, shippable, no coordination problem. It is the wrong long-term shape for users.

Why MCP memory servers are close but not quite it

The Model Context Protocol (Anthropic, November 2024) lets agents connect to external memory servers. This is better — the memory can be self-hosted and portable across MCP-compatible agents. But MCP memory servers are per-user per-agent. Consent is coarse (install-time). The granular, purpose-bound "agent X may read preferences relevant to purpose Y for duration Z" primitive is not there yet.
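The missing primitive described above can be made concrete with a small sketch. This is a hedged illustration in Python, not part of MCP or any shipped API; the agent identifiers, scope paths, and purpose strings are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class ConsentGrant:
    """The granular primitive MCP lacks today: one agent, one scope,
    one purpose, one expiry, instead of install-time all-or-nothing access."""
    agent_id: str        # which agent may read (hypothetical identifier)
    scope: str           # category path, e.g. "preferences.travel"
    purpose: str         # why the read is allowed, e.g. "book-flight"
    expires_at: datetime

    def permits(self, agent_id: str, scope: str, purpose: str, now: datetime) -> bool:
        # A read passes only if agent, scope prefix, purpose, and time all match.
        return (
            agent_id == self.agent_id
            and scope.startswith(self.scope)
            and purpose == self.purpose
            and now < self.expires_at
        )

grant = ConsentGrant(
    agent_id="travel-agent",
    scope="preferences.travel",
    purpose="book-flight",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
now = datetime.now(timezone.utc)
# Covered read: same agent, sub-scope of the grant, matching purpose, not expired.
assert grant.permits("travel-agent", "preferences.travel.airlines", "book-flight", now)
# Out-of-scope read: a health category is refused even for the same agent and purpose.
assert not grant.permits("travel-agent", "health.medications", "book-flight", now)
```

The point of the sketch is the shape, not the fields: consent becomes a first-class object an agent presents per read, rather than a one-time install decision.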

The shape we think is right

A single vault per user, owned by the user, that:

  • Stores structured, categorised personal context (profile, preferences, health, documents, relationships, interactions).
  • Exposes a query layer with (scope × purpose × cap × expiry) consent tokens — not "install this memory server".
  • Enforces data minimisation at the query layer — queries return the minimum needed, not the full category.
  • Logs every read to an audit log that the user (not the operator) can inspect.
  • Supports export on demand to a portable format, so users can switch providers cleanly.
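The bullets above can be sketched as a minimal vault: a (scope × purpose × cap × expiry) token gates each query, minimisation is enforced at the query layer, and every read lands in a user-inspectable log. All names and the storage shape are illustrative assumptions, not a real schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentToken:
    scope: str           # category path the token covers, e.g. "preferences.travel"
    purpose: str         # the purpose every read must be bound to
    cap: int             # max fields per query: a crude data-minimisation cap
    expires_at: datetime

class Vault:
    """User-owned store: every read is consent-checked, minimised, and logged."""

    def __init__(self, data: dict):
        self._data = data    # {"scope.path": {field: value}}
        self.audit_log = []  # append-only; inspectable by the user

    def query(self, token: ConsentToken, scope: str, fields: list,
              purpose: str, now: datetime) -> dict:
        if not (scope.startswith(token.scope)
                and purpose == token.purpose
                and now < token.expires_at):
            self.audit_log.append((now, scope, purpose, "denied"))
            raise PermissionError("consent token does not cover this read")
        record = self._data.get(scope, {})
        # Minimisation: only the requested fields, capped; never the full category.
        result = {f: record[f] for f in fields[: token.cap] if f in record}
        self.audit_log.append((now, scope, purpose, sorted(result)))
        return result

vault = Vault({"preferences.travel": {
    "seat": "aisle", "home_airport": "AMS", "airlines_avoid": ["X Air"]}})
now = datetime.now(timezone.utc)
token = ConsentToken(scope="preferences.travel", purpose="book-flight",
                     cap=2, expires_at=now + timedelta(minutes=30))
out = vault.query(token, "preferences.travel",
                  ["seat", "home_airport", "airlines_avoid"], "book-flight", now)
# cap=2: only the first two requested fields come back, and the read is logged
```

A real implementation would need cryptographic token binding and a richer policy language; the sketch only shows where the checks live: at the query layer, not at install time.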

Why now

Two reasons. First, the regulatory tide is turning: the EU AI Act (entered into force in 2024, with obligations phasing in over the following years), GDPR data-portability requirements, and emerging data-agent rights across the UK and US are all pushing toward user-owned agent context. Whoever ships a clean portability story first becomes the default.

Second, agent commerce (see GeraNexus) makes agent context load-bearing in a way it has not been until now. Your agent is going to transact on your behalf. It will need to know your constraints without re-asking every time. The vault is the substrate.
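As a toy illustration of constraints becoming load-bearing, assume a hypothetical constraints record fetched once from the vault under scoped consent (all names, offers, and prices invented):

```python
def pick_flight(offers: list, constraints: dict):
    """Filter offers by vault-held constraints and take the cheapest survivor."""
    avoid = set(constraints.get("airlines_avoid", []))
    budget = constraints.get("max_price", float("inf"))
    ok = [o for o in offers if o["airline"] not in avoid and o["price"] <= budget]
    return min(ok, key=lambda o: o["price"]) if ok else None

# One consented vault read replaces re-asking the user on every transaction.
constraints = {"airlines_avoid": ["X Air"], "max_price": 400}
offers = [
    {"airline": "X Air", "price": 250},  # excluded: avoided airline
    {"airline": "Y Air", "price": 320},  # survives
    {"airline": "Z Air", "price": 450},  # excluded: over budget
]
choice = pick_flight(offers, constraints)  # selects the Y Air offer
```

The interesting part is not the filter but the provenance: the agent never prompted for the constraints, it read them once under a scoped grant.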

The non-goals

  • We don’t want to train on your context. Your vault is yours.
  • We don’t want to hold the LLM. Use whatever model you like.
  • We don’t want to be yet another agent. We expose the vault; agents consume it.

What goes wrong if we don’t build it

One plausible future: a handful of vendors accumulate all personal context, making switching costs so high that the market stops functioning. Another plausible future: the context layer fragments so badly that agents remain brittle. Neither is great. The third path — a portable, consent-scoped, user-owned vault — is the one worth building.

How it fits the Gera stack

GeraMind plugs into every Gera vertical with explicit per-product consent. Agent commerce via GeraNexus can consume vault data under scoped consent. GeraClinic is the highest-value early integration: medically relevant context pre-filled into a consultation saves minutes per visit.

How to help

If you have worked on data sovereignty, federated identity, or consent architecture, we want to talk. Early schema drafts are at /research. The waitlist is open.

Help us design the vault.

Join the waitlist