Midbrain

Building the memory and continual learning layer for AI agents.

Today's agents operate in short loops.

They process inputs, generate outputs, and reset.

They do not accumulate experience.
They do not adapt over time.

We believe intelligence is not just inference —
it is the ability to change through interaction.

We researched this across real environments:

  • AI agents in games
  • Long-running AI companions
  • Embodied agents in robotics simulation

Different domains. Same limitation: experience is stored, but not internalized.

Memory allows systems to recall the past.

Learning requires systems to update because of it.

Today's systems retrieve information. They do not change behavior.

To learn from experience, a system must update while interacting with the world.

Today's paradigm:  collect data → retrain → redeploy
What's required:   experience → memory → update behavior

Without this, memory is just storage.
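The contrast above can be sketched as a minimal loop. This is an illustrative toy, not Midbrain's system; all names (`Memory`, `best_action`, the 0.2 update rate) are hypothetical:

```python
# Illustrative sketch: an agent whose behavior shifts with experience,
# with no collect-data / retrain / redeploy cycle in between.

class Memory:
    """Stores (situation, action, outcome) records and running action values."""
    def __init__(self):
        self.records = []
        self.action_value = {}  # running value estimate per action

    def write(self, situation, action, outcome):
        self.records.append((situation, action, outcome))
        # Online update: nudge the action's value toward the observed outcome.
        old = self.action_value.get(action, 0.0)
        self.action_value[action] = old + 0.2 * (outcome - old)

    def best_action(self, actions):
        return max(actions, key=lambda a: self.action_value.get(a, 0.0))

memory = Memory()
actions = ["retry", "ask_user", "give_up"]

# Simulated interactions: "retry" tends to succeed (outcome 1), others fail (0).
for outcome_by_action in [{"retry": 1, "ask_user": 0, "give_up": 0}] * 5:
    for action, outcome in outcome_by_action.items():
        memory.write("network_error", action, outcome)

# Behavior has changed because of experience, not a new model checkpoint.
print(memory.best_action(actions))  # → "retry"
```

The point of the sketch is the second half of the loop: storing the records alone (the `records` list) is memory as storage; the value update is what makes the next decision different.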

SmartSearch is our first step toward this vision: a structured memory retrieval system for agents operating over long horizons. It retrieves the right experience efficiently — because without correct retrieval, learning is impossible.

  • 93.5% on LoCoMo
  • 88.4% on LongMemEval-S
  • 8.5x token efficiency
  • ~650ms CPU latency

See It In Action

We tested SmartSearch on the Linux kernel (~2GB), comparing it directly against a standard LLM with tool-use (grep, etc.). As tasks get longer, SmartSearch keeps reasoning grounded by ranking the most relevant memories instead of expanding context. By using our index-free semantic search, we bypass the ~0.5TB storage overhead required by traditional semantic indices, delivering stable performance across long execution chains without the bloat.
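SmartSearch's internals are not described here, so as a rough illustration only: "ranking the most relevant memories instead of expanding context" means scoring chunks against the query on demand and keeping the top-k, rather than concatenating everything into the prompt. The bag-of-words cosine scoring below is a stand-in for whatever semantic scoring a real system uses; no persistent index is built or stored:

```python
# Rough sketch (not SmartSearch's actual algorithm): score text chunks
# against a query on the fly and keep only the top-k, instead of
# appending every chunk to the context window.
import math
from collections import Counter

def score(query, chunk):
    """Cosine similarity over bag-of-words term counts, computed on demand."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    dot = sum(q[t] * c[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in c.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the k highest-scoring chunks; nothing is precomputed or cached."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

chunks = [
    "scheduler picks the next runnable task from the run queue",
    "ext4 journals metadata before committing blocks to disk",
    "the CFS scheduler tracks virtual runtime per task",
    "netfilter hooks packets as they traverse the stack",
]
print(retrieve("how does the scheduler choose a task", chunks))
```

The trade this sketch makes explicit: scoring on demand costs latency per query but avoids the storage overhead of a prebuilt index, which is the direction the ~0.5TB comparison above is gesturing at.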

Benchmark Comparison

System         LoCoMo    LongMemEval-S
SmartSearch    93.5%     88.4%
EverMemOS      92.3%     82.0%
Memora         86.3%     —
MemOS          80.8%     77.8%
Mem0           68.4%     66.4%
Zep            —         71.2%

We are building systems that improve through use — not retraining cycles.

We are working with a small number of design partners building long-running AI agents.