> one can't help but think that the samples are already in the decoder itself
In a certain sense, maybe they are. Or more accurately, what is transmitted is small fragments of samples plus instructions for how to mix them together. It reminds me of pre-generated dictionaries in classic LZ compression: if an algorithm is going to operate mostly on English text, it can make sense to ship an English dictionary along with the algorithm. Brotli does this [Wikipedia]:
> Unlike most general-purpose compression algorithms, Brotli uses a predefined dictionary, roughly 120 KiB in size, in addition to the dynamically populated ("sliding window") dictionary. The predefined dictionary contains over 13000 common words, phrases and other substrings derived from a large corpus of text and HTML documents
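The same idea is available in the standard zlib API, which lets the caller supply a preset dictionary shared between compressor and decompressor. A minimal sketch in Python (the sample strings here are made up for illustration; zlib's `zdict` parameter plays the role of Brotli's built-in dictionary, just supplied by the application instead of baked into the format):

```python
import zlib

# Preset dictionary: text we expect the input to resemble.
# Both sides must agree on it out of band, exactly like
# Brotli's predefined dictionary ships with the codec.
preset = b"the quick brown fox jumps over the lazy dog"

data = b"the quick brown fox jumps over the lazy dog again and again"

# Baseline: compress without any preset dictionary.
plain = zlib.compress(data)

# With the preset dictionary, the matching prefix becomes a
# back-reference into the dictionary instead of literal bytes.
comp = zlib.compressobj(zdict=preset)
compressed = comp.compress(data) + comp.flush()

# The decompressor needs the same dictionary to resolve those references.
decomp = zlib.decompressobj(zdict=preset)
restored = decomp.decompress(compressed)

print(len(plain), len(compressed))
```

The dictionary never travels with the data, which is the whole trick: the cost of transmitting common material is paid once, at distribution time, rather than in every message.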