# Open Brain System
A reference implementation of the open-source, AI-integrated brain system: pgvector, MCP, and Supabase wired together for human-AI memory.
## The Definition Worth Defending

### Defining the AI-Integrated Memory Architecture
An open brain is a user-owned, database-backed knowledge system that stores personal thoughts and context as vector embeddings. Unlike traditional knowledge management, an open brain is designed for machine consumption first. It utilizes the Model Context Protocol (MCP) to allow any AI agent—such as Claude or ChatGPT—to query a private database without relying on proprietary SaaS intermediaries.
### Open Brain vs. Building a Second Brain
This architecture differs fundamentally from Tiago Forte's "Building a Second Brain" (BASB) methodology. BASB is a human-centric workflow focused on the capture, organization, and distillation of notes for human retrieval. Tools like Obsidian or Notion facilitate this process through folders and tags, but they remain static silos unless manually queried by a user.
An open brain replaces manual curation with semantic search. While an Obsidian vault requires the user to remember where a note lives or use keyword searches, an open brain uses pgvector to enable AI agents to retrieve relevant context based on mathematical proximity in a vector space. The shift is from note-taking for humans to context-provisioning for AI.
An open brain is not a digital notebook; it is a persistent, agent-readable memory layer that decouples personal data from the LLM provider.
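The "mathematical proximity" described above can be sketched with a toy retrieval loop. This is an illustrative sketch only: the three-dimensional vectors and the memory texts are invented for the example, whereas a real open brain would use model-generated embeddings (e.g. 1536 dimensions) stored in pgvector.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (hypothetical; real ones are much larger).
memories = {
    "Met Alice to discuss the Postgres migration": [0.9, 0.1, 0.0],
    "Recipe: weekend sourdough starter": [0.0, 0.2, 0.9],
    "pgvector index tuning notes": [0.8, 0.3, 0.1],
}

def retrieve(query_embedding, k=2):
    """Return the k memories closest to the query in vector space."""
    ranked = sorted(
        memories.items(),
        key=lambda kv: cosine_similarity(query_embedding, kv[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# A query pointing in the "database" direction surfaces the
# database-related memories, with no keywords or tags involved.
print(retrieve([1.0, 0.2, 0.0]))
```

This is the shift the section describes: retrieval is driven by the geometry of the vectors, not by where a note lives or what it is named.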
## Why 'Open' Matters in 2026

### Data Gravity and Sovereignty
Personal memory is among the highest-leverage assets an individual owns. Relying on SaaS vendors for this layer creates a dangerous dependency: the user's cognitive history becomes subject to pricing changes, censorship, or platform death. An open brain keeps data gravity with the user by relying on self-hosted or managed open-source databases.
### The Role of MCP
The Model Context Protocol (MCP) serves as the universal interface. By implementing MCP, a user avoids tool lock-in: if a superior LLM replaces the current market leaders, the new client simply plugs into the existing MCP server and accesses the same memory bank. This contrasts with proprietary solutions such as Supermemory.ai, which wrap data in closed ecosystems.
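Concretely, a desktop MCP client such as Claude Desktop is pointed at a local server through a JSON configuration file. The server name and command below are hypothetical placeholders for whatever MCP server fronts your database:

```json
{
  "mcpServers": {
    "open-brain": {
      "command": "node",
      "args": ["/path/to/brain-mcp-server/index.js"]
    }
  }
}
```

Swapping AI clients means reusing this same server entry, not migrating any data.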
### Infrastructure Economics
The cost of maintaining an open brain is negligible compared to subscription-based AI memories. A stack utilizing Postgres and pgvector can handle 50,000 entries for under $10 per month—and often as low as $0.30 on lean configurations. Open-source projects like Khoj demonstrate the viability of this approach by prioritizing local or user-controlled indexing over closed cloud silos.
| Feature | Proprietary AI Memory | Open Brain (MCP/pgvector) |
|---|---|---|
| Data Ownership | Vendor-controlled | User-owned (Postgres) |
| Interoperability | API-locked | Universal via MCP |
| Cost Structure | Monthly Subscription | Infrastructure Cost (Low) |
## The Stack Worth Using

### The Canonical Technical Stack
A production-ready open brain relies on a specific set of primitives to ensure low latency and high retrieval accuracy. Supabase provides the ideal foundation, offering managed Postgres, authentication, and storage in a single package. The core of the system is pgvector, an extension that allows Postgres to store and query embeddings using cosine similarity or Euclidean distance.
### Integration and Retrieval
To make this data accessible, an MCP server acts as the bridge between the database and the AI client. For the frontend, developers typically use Next.js for a dashboard view or simple HTML for lightweight interaction. The embeddings themselves are generated via APIs like OpenAI's text-embedding-3-small or open-source alternatives such as Nomic Embed for those requiring full local privacy.
### Database Implementation
Implementing an open brain requires enabling the vector extension and defining a table that can store both the raw text and its corresponding embedding vector. The following SQL demonstrates the basic setup:
```sql
-- Enable the pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create a table for personal memories
CREATE TABLE brain_memories (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  content text NOT NULL,
  embedding vector(1536), -- 1536 dimensions for OpenAI embeddings
  created_at timestamp with time zone DEFAULT timezone('utc'::text, now())
);

-- Create an index for fast semantic search
CREATE INDEX ON brain_memories USING hnsw (embedding vector_cosine_ops);
```
This schema allows the AI to perform a similarity search by calculating the distance between a user's current query embedding and the stored vectors in the brain_memories table.
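Against the schema above, such a retrieval might look like the following query, where `<=>` is pgvector's cosine-distance operator and `$1` stands for the query embedding supplied as a parameter:

```sql
-- Retrieve the 5 memories most similar to a query embedding ($1).
SELECT content,
       1 - (embedding <=> $1) AS similarity -- cosine similarity from cosine distance
FROM brain_memories
ORDER BY embedding <=> $1
LIMIT 5;
```

Because the `ORDER BY` uses the same operator class as the HNSW index (`vector_cosine_ops`), Postgres can serve this query from the index rather than scanning every row.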
## What This Site Covers

### Navigation Guide
This site serves as a technical manual for deploying and optimizing an open brain. The content is structured to move from theoretical foundations to concrete implementation.
- What is an Open Brain?: A deep dive into the architecture.
- How to Build Your Own: Step-by-step deployment guides.
- Open Brain vs. Obsidian: Analysis of AI-native memory vs. manual note-taking.
- MCP Integration Guide: Connecting your database to Claude and ChatGPT.
- Tools Roundup: Comparison of embedding models and hosting providers.
For those seeking the opinionated reference implementation, visit novcog.dev.