I was trying to understand the significance of this. There is a blog post here [0] (I had a brief glance; please correct me if I'm wrong). It seems to be a framework that allows neural networks (and layers of neural networks) to be composed together more easily.
It's not completely clear to me what the advantages are without digging in; I assume it's to simplify/standardise the interface between the layers?
Which is true, but trivial. The important points for understanding (what kind of input? what kind of output? why sets? what's 'Neural' about functions?) are all missing.
> A Neural Module’s inputs/outputs have a Neural Type, that describes the semantics, the axis order, and the dimensions of the input/output tensor. This typing allows Neural Modules to be safely chained together to build applications, as in the ASR example below.
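To illustrate that quote, here is a rough sketch of the idea in plain Python. This is not NeMo's actual API; all names (NeuralType, Module, chain) are made up for illustration, and the check is deliberately simplistic:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a "neural type": it records the tensor's
# semantics and axis order so modules can be checked before chaining.
@dataclass(frozen=True)
class NeuralType:
    axes: tuple      # e.g. ("batch", "time", "freq")
    semantics: str   # e.g. "audio_signal", "spectrogram"

class Module:
    """A module declares the neural type of its input and its output."""
    def __init__(self, name, in_type, out_type):
        self.name, self.in_type, self.out_type = name, in_type, out_type

def chain(a, b):
    # Chaining is only allowed when a's output type matches b's input
    # type; otherwise we fail early instead of at tensor runtime.
    if a.out_type != b.in_type:
        raise TypeError(f"{a.name} -> {b.name}: "
                        f"{a.out_type} != {b.in_type}")
    return Module(f"{a.name}>{b.name}", a.in_type, b.out_type)

audio = NeuralType(("batch", "time"), "audio_signal")
spec  = NeuralType(("batch", "time", "freq"), "spectrogram")
text  = NeuralType(("batch", "token"), "text")

featurizer = Module("featurizer", audio, spec)
decoder    = Module("decoder", spec, text)

asr = chain(featurizer, decoder)   # OK: types line up
# chain(decoder, featurizer)       # would raise TypeError
```

So the advantage over raw tensors would be that a mismatched pipeline (wrong axis order, wrong kind of data) is rejected at composition time rather than producing a silently wrong model.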
[0] https://devblogs.nvidia.com/neural-modules-for-speech-langua...