
The Rhizomatic Memory

From structural coupling to rhizomatic organization—how Deleuze & Guattari’s botanical metaphors illuminate the architecture of adaptive AI memory


Trees

In the previous post, we explored structural coupling, which defines second-order cybernetics. We saw how Claude Code and similar systems no longer simply execute commands but engage in mutual influence, adjusting their behavior based on environmental feedback. This was our departure from first-order control toward something more organic, more relational.

But we left a question hanging: what does this actually look like in practice? If we’re no longer building machines that obey, but systems that participate, what architecture supports such participation?


The answer requires us to step outside computer science briefly and into philosophy. Specifically, into the work of Gilles Deleuze and Félix Guattari, and their concept of the rhizome.

In A Thousand Plateaus (1980), Deleuze and Guattari contrast two models of knowledge and organization: the tree and the rhizome. The tree is hierarchical: trunk, branches, twigs, leaves. Information flows from root to crown. The tree is seductive in its clarity: everything has its place, every leaf can be traced back to a branch, to the trunk, to the root. This is how most databases work. This is how most organizations are structured. This is how we tend to think.

But the rhizome is different. Think of crabgrass, or bamboo, or the mycelial networks that connect forests underground. The rhizome has no center. Any point can connect to any other point. There is no top or bottom, no root or crown, only connections, intensities, and movements. Cut a rhizome anywhere, and it grows back. Map it, and you’ve already misunderstood it, because mapping presumes a stable structure, and the rhizome is always becoming.

Most AI memory systems today are trees. They categorize by topic, by timestamp, by source. They impose hierarchies: this belongs under “project documentation,” that under “user preferences.” The category determines the relationship. But what if the category is wrong? What if the same insight belongs simultaneously to security concerns and performance optimization? The tree forces a choice. The rhizome allows multiplicity.

Rhizomatic memory

Rizoma (Spanish for rhizome) is the memory system I’m building based on these principles. It doesn’t replace vector databases or eliminate the need for careful engineering. Rather, it offers a different interface to memory that treats contradictions not as errors to resolve but as opportunities to understand.

In a traditional RAG (Retrieval-Augmented Generation) system, you have:

  • Documents that get chunked

  • Embeddings that capture semantic similarity

  • Retrieval based on vector proximity

  • Ranking by relevance scores

This works remarkably well for many problems. But it embodies what we might call vector-centrism: the assumption that semantic similarity in embedding space equals conceptual relevance in context. Two passages about Python exception handling will cluster together in vector space even when one advocates for bare except clauses (dangerous) and the other warns against them (wise).
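
To ground that critique, here is a minimal sketch of the vector-centric retrieval loop, with a toy embed() stand-in (hash-seeded random vectors) in place of a real embedding model. None of this is Rizoma’s code; it only illustrates the baseline being discussed:

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy deterministic "embedding": a hash-seeded random unit vector.
    # Stands in for a learned embedding model purely for illustration.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[tuple[float, str]]:
    # Rank chunks by cosine similarity to the query: vector proximity
    # is treated as relevance, which is exactly the assumption at issue.
    q = embed(query)
    scored = [(float(q @ embed(c)), c) for c in chunks]
    return sorted(scored, reverse=True)[:k]

The ranking sees only proximity; the stance a chunk takes toward what it describes never enters the loop.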

Rizoma introduces a different dimension: value-refracted perception. Instead of asking “what is this similar to?” it asks “given what matters right now, how does this matter?” Same embedding, different meaning.

Value Hierarchies: The Agent Alignment Interface

At the center of Rizoma is what I call the Value Hierarchy, an explicit declaration of what the system cares about. Not rules to follow, but a field of gradients that shapes how information is perceived.

from dataclasses import dataclass
from typing import List

@dataclass
class ValueHierarchy:
    """
    Semantic compass that orients all memory operations.
    This is the Agent Alignment Interface (AAI)—the homeostatic membrane
    between human intent and machine autonomy.
    """
    purpose: str  # What am I trying to accomplish?
    priorities: List[str]  # What matters most, in order
    perspective: str  # How do I tend to see the world?

Consider the difference between these two value hierarchies applied to the same codebase:

Security Lens:

  • Purpose: “I review code for security vulnerabilities”

  • Priorities: ["safety", "correctness", "clarity", "performance"]

  • Perspective: “I assume all input is potentially malicious until proven otherwise”

Performance Lens:

  • Purpose: “I optimize code for high-throughput systems”

  • Priorities: ["performance", "efficiency", "correctness", "safety"]

  • Perspective: “I measure twice, optimize once, and profile everything”
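
As a usage sketch, the two lenses above map directly onto the ValueHierarchy dataclass:

security_lens = ValueHierarchy(
    purpose="I review code for security vulnerabilities",
    priorities=["safety", "correctness", "clarity", "performance"],
    perspective="I assume all input is potentially malicious until proven otherwise",
)

performance_lens = ValueHierarchy(
    purpose="I optimize code for high-throughput systems",
    priorities=["performance", "efficiency", "correctness", "safety"],
    perspective="I measure twice, optimize once, and profile everything",
)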

The security lens looks at authentication code and sees: “This lacks rate limiting—vulnerable to brute force attacks.” The performance lens looks at the same code and sees: “This adds 40ms latency per request.”

Both are valid. Neither is complete. The insight isn’t in the code; it’s in the coupling between the code and the values looking at it.

This is why I call it the Agent Alignment Interface. Traditional alignment tries to constrain behavior through rules. But rules break under complexity. Values bend. That’s why I see it as a refraction operation.

The Dialectical Structure: Hooks and Versions

Rizoma stores insights dialectically, capturing a tension between abstract patterns and concrete instances that mirrors how human learning usually works.

The Dialectical Pair

Every piece of knowledge in Rizoma has two components:

Hook Insights = The abstract, generalizable pattern

  • “Python error handling best practices”

  • “Authentication security patterns”

  • “Database connection pooling”

Versioned Insights = The specific, grounded observation

  • “This codebase uses JWT validation in auth.py:47”

  • “That specific bug with OAuth token refresh”

  • “Line 234 implements the circuit breaker pattern”

Both are necessary. The hook without versions is ethereal because you know patterns exist but not where they manifest or their concrete applications. The version without a hook doesn’t transfer because you have isolated facts without the framework that makes them meaningful and easy to find.

Documents don’t enter Rizoma “raw.” They enter through explicit value hierarchy paths that function as epistemic lenses:

auth/token.py
    ↓ (through “security → authentication → production”)
“Critical security surface: JWT validation with timing attack prevention”
auth/token.py
    ↓ (through “learning → python → async”)
“Example of async/await usage in real-world authentication”

Same file, different paths, different knowledge. The path is stored as provenance—you can’t understand the insight without knowing the path that generated it.
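
A minimal sketch of what storing the path as provenance might look like; the names here are illustrative assumptions, not Rizoma’s actual API:

from dataclasses import dataclass

@dataclass
class VersionedInsight:
    content: str
    source: str
    value_path: tuple[str, ...]  # provenance: the lens that generated this insight

def ingest(source: str, text: str, value_path: tuple[str, ...]) -> VersionedInsight:
    # In a real system an LLM would summarize `text` as refracted through
    # `value_path`; here we only record the coupling of source and lens.
    summary = f"{source} seen through {' -> '.join(value_path)}"
    return VersionedInsight(content=summary, source=source, value_path=value_path)

security_view = ingest("auth/token.py", "...", ("security", "authentication", "production"))
learning_view = ingest("auth/token.py", "...", ("learning", "python", "async"))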

Temporal Understanding

Versioned insights form temporal chains, enabling Rizoma to track how understanding evolves:

Hook: “Authentication error handling”
├── Version 3 [2024-03]: “JWT with structured logging and retry logic”
├── Version 2 [2024-02]: “JWT with specific exception types”
└── Version 1 [2024-01]: “Basic try/except with logging”
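
One hedged way to model this chain in code, with illustrative names rather than the real implementation: a hook holding an ordered list of timestamped versions, newest first.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Version:
    observed: date
    content: str

@dataclass
class Hook:
    pattern: str
    versions: list[Version] = field(default_factory=list)

    def add_version(self, observed: date, content: str) -> None:
        # Keep newest first, so the chain reads as a temporal portrait.
        self.versions.append(Version(observed, content))
        self.versions.sort(key=lambda v: v.observed, reverse=True)

auth = Hook("Authentication error handling")
auth.add_version(date(2024, 1, 1), "Basic try/except with logging")
auth.add_version(date(2024, 2, 1), "JWT with specific exception types")
auth.add_version(date(2024, 3, 1), "JWT with structured logging and retry logic")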

This is not an error (“the system contradicts itself”) but a temporal portrait of evolving best practices. The system remembers not just what was learned, but when it was true, and, implicitly, that truth itself is temporal, perspectival, situated.

Becoming Over Being

This connects to Deleuze’s concept of becoming: knowledge is not a static state but a continuous process. Rizoma’s hook/version architecture captures this becoming explicitly:

  • Hooks guide where to look for new versions

  • New versions refine the hook’s meaning

  • Contradictions between versions are opportunities, not errors

  • The value path makes the perspective explicit

In future posts, we’ll explore how this dialectical structure enables shareable knowledge graphs: exporting not just your indexed corpus but also your value hierarchies, allowing others to query your knowledge through their own value lenses.

Contradictions as Opportunities, Not Errors

Here’s where the rhizome thinking becomes radical. In tree-structured systems, contradictions are problems. If two branches give conflicting information, one must be wrong. You resolve the conflict by determining which authority takes precedence, which timestamp is newer, which source is more reliable.

But Rizoma treats contradictions differently. When it detects two insights that are:

  • Semantically similar (same topic)

  • Highly scored (both matter)

  • Content-divergent (tension between them)

...it doesn’t flag this as a conflict to resolve. It recognizes it as a temporal portrait: the same being at different points in its becoming.
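
A sketch of that three-part test, with placeholder thresholds and a crude lexical measure standing in for whatever divergence metric the real system uses:

import numpy as np
from dataclasses import dataclass

@dataclass
class Insight:
    content: str
    embedding: np.ndarray
    score: float  # tanh-bounded relevance in (-1, 1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def lexical_divergence(a: str, b: str) -> float:
    # Crude stand-in for content divergence: 1 - Jaccard overlap of word sets.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(wa & wb) / max(len(wa | wb), 1)

def is_temporal_portrait(a: Insight, b: Insight,
                         sim_t: float = 0.85, score_t: float = 0.7,
                         div_t: float = 0.4) -> bool:
    return (cosine(a.embedding, b.embedding) >= sim_t              # semantically similar
            and min(a.score, b.score) >= score_t                   # both highly scored
            and lexical_divergence(a.content, b.content) >= div_t) # content-divergent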

Imagine an AI agent that helped you refactor a codebase six months ago. At that time, it recommended a particular pattern. Today, encountering that same pattern in new code, it might suggest something different, not because it was wrong then or is wrong now, but because the codebase has evolved, or because a different value hierarchy or different parameters are in play.

Both insights remain valid in their temporal contexts. The contradiction is information about change. Rizoma preserves this tension rather than resolving it prematurely. The system remembers not just what was said, but the temporal, situated context in which it held true.

This is why the rhizome metaphor is apt. In a mycelial network, there is no single source of truth, no authoritative root. Information flows through multiple pathways. What matters is connectivity, i.e., the ability to trace paths, to find unexpected links, to navigate by intensity rather than hierarchy.

Category-Centrism vs. Vector-Centrism

Most modern AI systems suffer from what we might call category-centrism: the assumption that the world naturally sorts itself into categories, and our job is to find the right ones. This manifests in:

  • Rigid taxonomies that break when edge cases emerge

  • Classification systems that require constant maintenance

  • Knowledge graphs that demand ontological commitment

Rizoma takes a different approach, closer to vector-centrism but with a twist. Yes, we use embeddings and vector similarity. But we don’t treat vector proximity as truth. Instead, we treat it as potential connection. One of many possible pathways through the memory field, like light refraction in a medium.

The key difference:

  • Category-centrism: “This belongs in the security bucket”

  • Vector-centrism: “This is similar to things in the security cluster”

  • Rhizomatic approach: “This connects to security concerns from one angle, performance concerns from another, and both connections are valid simultaneously”

The rhizome rejects the exclusivity of categories. An insight can be security-related AND performance-related AND related to that refactoring you did last spring AND connected to that conversation with the DevOps team. The connections don’t compete but coexist as different intensities, different gradients in the field.

Mathematical Homeostasis: A Preview

I want to say something about how Rizoma handles the dynamics of memory, how insights gain and lose relevance over time, but I’ll keep this brief because the next post will dive deep into the mathematics.

The core mechanism uses an activation function that has long proven itself in neural networks: tanh, the hyperbolic tangent. This creates what I call soft, dynamic boundaries. Instead of hard thresholds where insights are either “in memory” or “forgotten,” Rizoma uses asymptotic saturation: scores approach 1 (highly relevant) or -1 (actively irrelevant) but never quite reach them, and insights can still be deactivated as learning advances through incoming documents. This means:

  • Even “strong” memories can be shifted by sufficient contradictory evidence

  • The system maintains stability without rigidity

  • Relevance flows like a fluid, not flips like a switch

Think of it as homeostasis for memory: the system self-regulates, maintaining dynamic equilibrium rather than static state. A memory at score 0.9 isn’t “true” in some absolute sense. It’s just the current equilibrium point of a continuous negotiation between evidence, values, and context.
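
To make the preview concrete, here is a minimal sketch of what a tanh-bounded update step could look like; the update rule and learning rate are assumptions for illustration, not the actual mathematics of the next post:

import math

def update_score(score: float, evidence: float, rate: float = 0.3) -> float:
    # Accumulate evidence in an unbounded pre-activation, then squash with tanh
    # so scores saturate toward +/-1 without ever reaching them.
    eps = 1e-6
    raw = math.atanh(max(min(score, 1 - eps), -1 + eps))  # invert the current squash
    return math.tanh(raw + rate * evidence)

s = 0.9  # a "strong" memory
for _ in range(5):
    s = update_score(s, evidence=-1.0)  # repeated contradictory evidence
    print(round(s, 3))  # ~0.82, 0.70, 0.52, 0.27, -0.03: a smooth drift, not a hard flip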

We’ll explore the math in detail next time. For now, just know that the rhizome isn’t merely philosophical decoration but has engineering consequences. The mathematics of soft boundaries emerges naturally from the refusal to impose hard categories.

Why This Matters for AI Alignment

The broader stakes here concern alignment: the increasingly urgent problem of ensuring AI systems act in accordance with human values.

Current approaches to alignment tend toward the adversarial: we try to constrain AI behavior through rules, filters, and hard limits. We build guardrails, safety layers, and override mechanisms. These are necessary. But they’re also first-order cybernetic solutions to second-order cybernetic problems.

What Rizoma explores is a different approach: alignment through shared orientation. Instead of forbidding certain behaviors, we make values explicit and let them shape perception. The AI doesn’t “follow” the value hierarchy; it is situated within it. LLMs have reached a point where this is possible: context windows are limited, but value hierarchies are typically brief and can also be compressed.

This isn’t naive trust in emergent benevolence. The value hierarchy is explicit, inspectable, adjustable. Guard rails still exist at the boundaries. But within the field, we allow for the fluidity that complex contexts demand.

The Path Forward

Over the next few posts, we’ll explore:

  • The mathematics of tanh-bounded homeostasis (technical deep-dive)

  • The dialectical knowledge architecture: hooks and versions in detail

  • Opportunity detection: finding contradictions that matter

  • Temporal portraits: memory that respects the evolution of understanding

  • Practical implementations: building Rizoma in code

The goal isn’t to replace traditional engineering but to expand our toolkit. First-order cybernetics for control problems. Second-order cybernetics for coupling problems. Tree structures when clarity and hierarchy matter and when tools and systems are deterministic. Rhizomes for stochastic, complex systems with no solid center.
