We solve that problem differently. Our knowledge system lets the user define an AST-token-based pattern for the shape of the text to extract from the documents. The pattern is applied on first read and creates a queryable side-channel interface to the document, which acts both as an attention surface for understanding and as an output surface for agency (copy-on-write: it emits an edge layer over the source document on change).
It's like a regex, but applied to the structure of the document itself, and it generates its own control plane.
The LLM has full access to the document or to the control plane in either mode, so we can target specific meanings to focus context, or let it explore to find new patterns.
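A minimal sketch of the idea, assuming the documents are Python source and using Python's standard `ast` module. The `SideChannel` class, its `query` method, and the "node type plus predicate" pattern shape are all hypothetical illustrations of the mechanism described above, not the actual system:

```python
import ast

class SideChannel:
    """Hypothetical sketch: apply a structural pattern on first read,
    building a queryable index (the side channel) over the document."""

    def __init__(self, source: str, node_type: type):
        self.tree = ast.parse(source)
        # Attention surface: every node matching the pattern shape.
        self.matches = [n for n in ast.walk(self.tree)
                        if isinstance(n, node_type)]

    def query(self, predicate):
        """Read path: target specific meanings without rescanning the text."""
        return [n for n in self.matches if predicate(n)]

doc = """
def load(path): ...
def save(path, data): ...
x = 1
"""
channel = SideChannel(doc, ast.FunctionDef)
# Query the control plane instead of the raw text:
names = [f.name for f in channel.query(lambda f: len(f.args.args) >= 2)]
print(names)  # functions taking two or more arguments
```

The point of the design is that the index is built once, on first read, and every later interaction can go through the compact side channel rather than the full document.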
I think the biggest benefit is bandwidth more so than efficiency. This gives you multiple streams to mux and a means to control their mixing.
The biggest innovation, I think, may have been accidental: the doubly stochastic matrix implements conservation on the signal stream.
Treating the signal like the information it is as we do in any other domain is crucial for maintaining its coherence. We don't allow a network router to generate more packets than it receives for the same reason.
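A quick way to see the conservation property: every column of a doubly stochastic matrix sums to one, so mixing redistributes signal mass across streams without creating or destroying any. A minimal numpy sketch (illustrative, not the actual architecture; the matrix values are arbitrary):

```python
import numpy as np

# A doubly stochastic mixing matrix: rows and columns each sum to 1.
M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
assert np.allclose(M.sum(axis=0), 1.0)
assert np.allclose(M.sum(axis=1), 1.0)

x = np.array([2.0, 5.0, 3.0])  # signal across three streams
y = M @ x                      # mixed signal

# Conservation: the mix moves signal between streams but the total
# is unchanged, like a router that forwards packets without minting new ones.
print(x.sum(), y.sum())
```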
Are you aware that there are certain members of your very own species, as intelligent as you or I, who lack those qualia?
Non-standard cognitive architectures are already coherent. Even if they weren't, why do you think qualia cannot be replicated with a similar signal-to-semantic-meaning mapping? Are there additional dimensions that we can feel that we've never talked about or, more importantly, written down?
We propose SyneState: a communication prosthetic built on DeepSeek's Manifold-Constrained Hyper-Connections (mHC) architecture. By parameterizing cross-channel mixing as a doubly-stochastic matrix constrained to the Birkhoff polytope, we can: (1) induce machine synesthesia—stable, tunable cross-modal binding between latent streams; (2) learn personalized binding matrices that approximate individual cognitive architectures; and (3) translate between compression levels—expanding high-compression encodings into explicit single-channel representations and vice versa.
For mHC implementers, SyneState is a direct application of manifold-constrained mixing to cross-modal binding. For cognitive science and clinical researchers, it offers a candidate prosthetic for the double-empathy problem: bridging communication gaps not by "fixing" either party, but by learning the translation between different cognitive compression schemes.
All required components—multi-stream residuals, Sinkhorn projection, multimodal attention heads—exist in production stacks today. This is integration work, not research.
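The Sinkhorn projection mentioned above can be sketched in a few lines: alternately rescale the rows and columns of a positive matrix until both sum to one, approximating the nearest point in the Birkhoff polytope. A hedged numpy sketch (the iteration count and the exp-parameterization are arbitrary illustrative choices, not mHC's actual formulation):

```python
import numpy as np

def sinkhorn(logits: np.ndarray, n_iters: int = 50) -> np.ndarray:
    """Project a real matrix toward the Birkhoff polytope
    (doubly stochastic matrices) via alternating normalization."""
    M = np.exp(logits)                     # ensure strict positivity
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

rng = np.random.default_rng(0)
B = sinkhorn(rng.normal(size=(4, 4)))
print(np.allclose(B.sum(axis=1), 1.0), np.allclose(B.sum(axis=0), 1.0))
```

Because the constraint is enforced by a differentiable projection rather than an architectural change, it can sit on top of any multi-stream residual stack, which is what makes this integration work rather than research.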
Is art then just the outcome? The artifact that was produced?
What are your criteria, then, for who is allowed to produce art? If allowing everyone to create it lessens its value until it becomes worthless, there must be a cutoff.
If your goal is to ensure the continuity of human expression, limiting who is allowed to create art and narrowly defining art as great works rather misses the point.
Two systems are isomorphic when they admit the same morphisms: when the set of valid transformations applicable to one corresponds exactly to those applicable to the other.
If Hom(X, DNS) ≅ Hom(X, Filesystem) naturally in all relevant X, then DNS ≅ Filesystem, by the Yoneda lemma.
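The step from "same Hom-sets" to "isomorphic objects" is an application of the Yoneda lemma; a sketch of the argument in standard notation, writing FS for Filesystem (naturality in X is the requirement the informal statement leaves implicit):

```latex
% Suppose, naturally in X:
\[
\alpha_X : \operatorname{Hom}(X, \mathrm{DNS})
  \xrightarrow{\ \cong\ } \operatorname{Hom}(X, \mathrm{FS}).
\]
% Chase the identity morphisms through \alpha:
\[
f := \alpha_{\mathrm{DNS}}(\mathrm{id}_{\mathrm{DNS}}) : \mathrm{DNS} \to \mathrm{FS},
\qquad
g := \alpha_{\mathrm{FS}}^{-1}(\mathrm{id}_{\mathrm{FS}}) : \mathrm{FS} \to \mathrm{DNS}.
\]
% Naturality forces the composites to be identities:
\[
g \circ f = \mathrm{id}_{\mathrm{DNS}},
\quad
f \circ g = \mathrm{id}_{\mathrm{FS}},
\quad\text{hence}\quad
\mathrm{DNS} \cong \mathrm{FS}.
\]
```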