What we're working on

Axiogram Labs is a private research lab working on one question: what should a neural network remember, and how should it remember it?

We think the answer lives outside the current paradigm of attention, recurrence, and hidden state. We're building what's on the other side — an architecture where memory is separate from computation: persistent, updated under its own rules, and not a byproduct of how a model processes its input.
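
To make the shape concrete, here is a minimal sketch in code. Every name in it, the classes, the slot structure, the update rule, is an illustrative placeholder under simple assumptions, not a description of our architecture. The point is only the separation: the memory persists across calls and decides for itself what to keep, while the model carries no state at all.

    # Illustrative sketch only: a persistent memory with its own update
    # rule, kept separate from a stateless computation. All names and
    # rules here are hypothetical placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    class Memory:
        """Persistent store updated under its own rule, not the model's."""
        def __init__(self, slots: int, dim: int):
            self.state = np.zeros((slots, dim))  # persists across inputs

        def read(self, query: np.ndarray) -> np.ndarray:
            # Content-based addressing: softmax similarity over slots.
            scores = self.state @ query
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            return weights @ self.state

        def update(self, key: np.ndarray, value: np.ndarray, rate: float = 0.1):
            # The memory decides what to keep; the model does not.
            slot = int((self.state @ key).argmax())
            self.state[slot] = (1 - rate) * self.state[slot] + rate * value

    class Model:
        """Stateless computation: parameters transform input plus a memory read."""
        def __init__(self, dim: int):
            self.w = rng.normal(size=(dim, dim)) / np.sqrt(dim)

        def forward(self, x: np.ndarray, memory: Memory) -> np.ndarray:
            recalled = memory.read(x)
            return np.tanh(self.w @ (x + recalled))

    # Nothing persistent lives in the model; delete `memory` and the
    # system forgets everything, with the computation left intact.
    dim = 16
    memory, model = Memory(slots=8, dim=dim), Model(dim)
    for x in rng.normal(size=(5, dim)):
        y = model.forward(x, memory)
        memory.update(key=x, value=y)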

Why this matters

Current neural networks conflate two different things: the parameters that do computation, and the state that carries information through time. Transformers extend the state by growing their attention window. Recurrent networks compress it into a hidden vector. State-space models improve the compression. All three share the assumption that memory should live inside — or alongside — the computation.
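
In textbook notation, these are the three state rules; the formulations are standard, not ours:

    % Transformer: the state is the growing context itself.
    \[ y_t = \mathrm{Attn}(x_t;\ x_{1:t}) \]
    % Recurrent network: the state is compressed into one hidden vector.
    \[ h_t = f(h_{t-1}, x_t) \]
    % State-space model: the same compression, with a structured linear update.
    \[ h_t = A\,h_{t-1} + B\,x_t, \qquad y_t = C\,h_t \]

In each case the state update is welded to the forward computation: the same pass that produces the output also decides what survives to the next step.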

We think that's the constraint worth removing. When memory becomes a separate system with its own structure, new behaviors become possible at horizons that current architectures cannot reach. Our research program is built around testing where that claim holds, where it breaks, and what it implies for how we train these systems.

Research directions

We are currently investigating:

- Where the claim holds: which long-horizon behaviors a memory system with its own structure makes possible.
- Where it breaks: the regimes in which keeping memory inside the computation remains the better design.
- What it implies for training: how learning changes when memory is updated under its own rules rather than as a byproduct of the forward pass.

Contact

Research collaboration, grant inquiries, or technical conversation: research@axiogramlabs.com

For anything else: contact@axiogramlabs.com