Can you clarify a little bit on what you mean by your question?
One area of research is extracting an effective type theory from data that is viewed as a semantic model, eg sensor data of a phenomenon would lead to a type theory describing it.
You’re essentially taking a TDA persistent homology/covering and reinterpreting it through the lens of homotopy type theory to “decompile” your data. There are some early results, e.g. connecting convolutions and type division.
But that’s overall at really early stages of research.
> effective type theory from data that is viewed as a semantic model,
Well, much simpler stuff.
You have a logistics dataset of objects/GPS/time. Fine. Now you add historic truck data, which is a location time series. It is not obvious how you can learn the delivery times between two items.
You need human expertise to design a way to extract usable routes, and to handle multi-hop routes; only then can you maybe learn the typical speed between two locations.
It is doable. But not with a generic "multi-modal architecture".
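To make the point concrete, here is a minimal sketch of the kind of hand-designed step involved: turning raw GPS pings into travel-time estimates between two known stop locations. Everything here (stop names, coordinates, the radius threshold) is invented for illustration; a real pipeline would need map matching, route extraction, and multi-hop decomposition designed with domain expertise, none of which a generic architecture gives you for free.

```python
# Hypothetical sketch: estimate typical travel time between two known
# stop locations from a raw (timestamp, x, y) GPS trace of one truck.
from collections import defaultdict
from math import hypot
from statistics import median

STOPS = {"A": (0.0, 0.0), "B": (5.0, 0.0)}  # known item locations (made up)
RADIUS = 0.5  # a ping within this distance counts as "at" the stop

def nearest_stop(x, y):
    for name, (sx, sy) in STOPS.items():
        if hypot(x - sx, y - sy) <= RADIUS:
            return name
    return None

def pairwise_times(pings):
    """pings: time-sorted list of (timestamp, x, y) for one truck.
    Returns median observed travel time per (from_stop, to_stop) pair."""
    samples = defaultdict(list)
    last = None  # (stop_name, timestamp of latest ping at that stop)
    for t, x, y in pings:
        stop = nearest_stop(x, y)
        if stop is None:
            continue  # mid-route ping; a real system would map-match these
        if last and last[0] != stop:
            samples[(last[0], stop)].append(t - last[1])
        last = (stop, t)
    return {pair: median(ds) for pair, ds in samples.items()}

# toy trace: truck leaves A at t=0, arrives at B at t=100
trace = [(0, 0.0, 0.0), (50, 2.5, 0.1), (100, 5.0, 0.0)]
print(pairwise_times(trace))  # {('A', 'B'): 100}
```

Even this toy version encodes human decisions (what counts as "at a stop", how to treat mid-route pings) that the data alone doesn't determine.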
None published; I’m working on a white paper about type division in the abstract case this quarter, once I finish up the second one on shape algebras.
Type division is like convolution from ML, which is why we can recognize “shapes of shapes” and undo the product structure. (And arguably, another avenue towards arriving at the manifold hypothesis.)
I’m currently working on the “easy” direction of encoding the type statements to matrices, with the hope most steps are reversible. (So far, so good.)