My AI assistant and I share the same brain. Not metaphorically: we literally read and write to the same knowledge base. When I update a project note in Emacs, Nabu (my AI) sees the change. When Nabu logs something it learned, I see it in my daily file. This post is about how it works.
How It Works#
Nabu connects to my self-hosted Matrix server and has access to my org-roam knowledge base via an MCP server I built. But it’s not just access; it’s integration. Nabu treats org-roam as its source of truth:
- Before answering, search. When I ask about a project or a person, Nabu searches org-roam first. It doesn’t hallucinate details; it looks them up (see the sketch after this list).
- Learning gets recorded. When Nabu learns something worth keeping (a decision we made, a fix we applied), it writes it to the knowledge base.
- Shared memory. We have a context that persists across conversations.
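To make the first point concrete, here’s a minimal sketch of the search-before-answering step in Python. The port (8001), the JSON-RPC shape, and the semantic_search tool name match the MCP server described later in this post; the helper name and the result handling are simplifications of my own.
import requests

MCP_URL = "http://localhost:8001"  # the org-roam-mcp server described below

def lookup(query: str) -> dict:
    """One JSON-RPC tools/call request: ask the knowledge base before answering."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "semantic_search", "arguments": {"query": query}},
    }
    resp = requests.post(MCP_URL, json=payload, timeout=10)
    resp.raise_for_status()
    # Whatever comes back gets folded into the agent's context before it replies.
    return resp.json()["result"]

# e.g. notes = lookup("camera alerts pipeline")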
In practice, while writing this post, I asked Nabu to find examples of our shared brain in action. It searched the org-roam daily notes and memory files, looking for instances where it had looked something up to help me. The example it found was that very interaction.
Another example: when I asked Nabu to troubleshoot my camera alert system, I did not explain how it worked. Nabu searched org-roam and found the Camera Alerts Pipeline note with the camera ID mapping, MQTT topics, container names, and alert rules. It diagnosed the issue and fixed the configuration without me having to re-explain my setup.
Similarly, when my Home Assistant dashboard had gone stale, I asked Nabu to fix it up. It found the sensor names, device configurations, and integration details in my notes, then updated the dashboard accordingly.
The Configuration#
The magic is in how you instruct the agent. In OpenClaw, this happens via workspace files that the agent reads at startup.
AGENTS.md tells the agent its role:
## Org-Roam Knowledge Base
Org-roam is your primary knowledge base. Search it before answering
questions about projects, people, or decisions. Update it when you
learn something worth keeping.
MEMORY.md establishes the relationship:
## Org-Roam Knowledge Base: My Primary Role
I am the live interface for Don's org-roam second brain. I can read,
search, create, edit, and reorganize notes.
The agent internalizes these instructions. It knows where to look and what to do.
The Foundation: Why Emacs#
I haven’t tried every note-taking tool out there. I used TriliumNext for a while. I’ve used Evernote. I looked at Obsidian and a few others, but they either weren’t self-hostable or limited me to their UI. I keep coming back to Emacs and org-mode for a few reasons.
First, my notes are just text files. No proprietary database, no cloud lock-in, no wondering if the company will exist in five years. I can read them with cat, grep them, version control them with git. And importantly, LLMs can read them too. When I paste a note into Claude, it just works. No export step, no format conversion.
Second, org-roam adds what plain org-mode lacks: backlinks and a graph structure. When I mention a person or project, it becomes a link. Over time, connections emerge that I didn’t plan. The structure grows organically from the content.
Third, Emacs is programmable in a way that no other tool matches. When I want a new workflow, I write it. When something annoys me, I fix it. The tool bends to how I think, not the other way around.
Finally, there’s data sovereignty. My notes live on my machine and sync via git to my own server. No cloud service has a copy. No company can discontinue access. This matters more to me the longer I do this.
The Pieces#
Two components make the shared brain possible:
org-roam-second-brain (Emacs Package)#
Vanilla org-roam is great, but I wanted more structure. This package adds structured node types (people, projects, ideas, admin tasks), each with its own template. It adds semantic search via vector embeddings stored in the org files themselves, generated locally with no cloud APIs. And it adds proactive surfacing: a daily digest that shows active projects, stale items, pending follow-ups, and dangling links.
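The package itself is Emacs Lisp, but the semantic-search idea is easy to sketch in Python: embed each note once with a local model, embed the query, and rank by cosine similarity. This is an illustration of the approach rather than the package’s code, and it assumes Ollama’s /api/embeddings endpoint on its default port with the nomic-embed-text model mentioned in the setup below.
import math
import requests

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # local model, no cloud API

def embed(text: str) -> list[float]:
    resp = requests.post(OLLAMA_URL, json={"model": "nomic-embed-text", "prompt": text})
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# In the real setup the vectors live in the org files themselves; here they are
# computed on the fly for two pretend notes.
notes = {
    "camera-alerts-pipeline.org": embed("camera ID mapping, MQTT topics, alert rules"),
    "home-assistant-dashboard.org": embed("dashboard sensors and device configuration"),
}

query = embed("why did my camera notifications stop?")
best = max(notes, key=lambda name: cosine(query, notes[name]))
print(best)  # expect the camera note to rank first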
org-roam-mcp (Python Server)#
The MCP server exposes 30 tools via JSON-RPC: search (semantic, contextual, keyword), CRUD operations, task state management, and surfacing functions. It runs locally on port 8001. Any tool that can make HTTP requests can interact with my knowledge base, including AI agents. For example:
curl -X POST http://localhost:8001 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call",
       "params":{"name":"semantic_search",
       "arguments":{"query":"container networking"}}}'
How to Set This Up#
If you want to replicate this, the process is straightforward:
Install the Emacs package via straight.el:
(straight-use-package
'(org-roam-second-brain :host github :repo "dcruver/org-roam-second-brain"))
(require 'org-roam-second-brain)
(require 'org-roam-api)
Run an embedding server: either Infinity with Docker (docker run -d -p 8080:7997 michaelf34/infinity:latest --model-id nomic-ai/nomic-embed-text-v1.5) or Ollama (ollama pull nomic-embed-text).
Start the MCP server:
pip install org-roam-mcp
export EMACS_SERVER_FILE=~/emacs-server/server
org-roam-mcp --port 8001
Then configure your agent to use org-roam as its source of truth. The specifics depend on your agent framework. See the full setup guide for details.
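Either way, a quick check that the MCP server is answering will save some head-scratching. Here’s a minimal sketch in Python, assuming the standard MCP tools/list method and the port used above:
import requests

payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
resp = requests.post("http://localhost:8001", json=payload, timeout=5)
resp.raise_for_status()
tools = resp.json()["result"]["tools"]
print(len(tools), "tools available, e.g.", [t["name"] for t in tools[:3]])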
Closing Thoughts#
This setup is not for everyone. It requires comfort with Emacs, willingness to run local services, and some patience for configuration. But for me, it solves real problems: my notes are mine forever, in a format that will outlast any company. I can find things by meaning, not just keywords. My AI assistant and I share the same context. The knowledge compounds over time.
If any of that resonates, the code is on GitHub. For the AI integration, check out OpenClaw. Take what’s useful, ignore the rest.