I've always wondered: what makes some DSP algorithms used in music sound better than others? The plate reverb on an early Zoom device is nowhere near as good as the plate reverb on an Eventide H7600. A Boss guitar pedal digital delay does not sound as good as the Roland SDE-3000 digital delay. How can a simple DSP implementation of an echo sound so different from one device to another? Is there more engineering to reduce aliasing? Is the gain staging better? Do more expensive implementations add some impulse-response-based tonal and reverb shaping? Does the quality of the analog parts of the circuit make a big difference? There seems to be a gap in my understanding between how basic DSP is implemented and how musically superior DSP is implemented. This seems to be something of a trade secret, but enough companies have now cracked the problem that there must be some public knowledge about where the differences in quality come from.
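To make the question concrete, here is roughly the kind of textbook echo I have in mind: a plain integer-sample feedback delay line in Python/NumPy (the delay time, feedback, and mix values are just placeholders I picked for illustration):

```python
import numpy as np

def naive_echo(x, sr, delay_s=0.35, feedback=0.45, mix=0.5):
    """Textbook feedback echo: read the sample written delay_s ago,
    mix it into the output, and recirculate it through the buffer."""
    d = max(1, int(round(delay_s * sr)))  # delay length in whole samples
    buf = np.zeros(d)                     # circular delay buffer
    y = np.empty_like(x, dtype=float)
    w = 0                                 # current buffer position
    for n, s in enumerate(x):
        delayed = buf[w]                  # sample from d samples ago
        buf[w] = s + feedback * delayed   # write dry input plus feedback
        y[n] = s + mix * delayed          # dry signal plus delayed copy
        w = (w + 1) % d                   # advance the buffer circularly
    return y
```

Whatever the better boxes presumably do on top of something like this, whether it's fractional-delay interpolation, filtering or saturation in the feedback path, or something else entirely, is exactly the part I can't find written down anywhere.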
It also suggests that basic DSP theory is not complete, and that there should be psychoacoustic or psychoperceptual extensions to it. This could have implications for machine learning, which uses DSP to derive input features. In particular, to get ML algorithms that respond to environments the way people do, extracting features from signals using basic DSP may not be good enough; psychoperceptual DSP may also be needed. For example, as people get older many experience hearing loss, which can be partially modeled as bandwidth limiting. If you train your digital voice assistant's ML algorithms on the full microphone bandwidth, the model will be reacting to signals that the people speaking cannot even hear. Those people will never attempt to modulate their voice in those upper frequency ranges, so whatever the microphone picks up there is effectively uninformative noise to the DSP and hence to the ML. The voice assistant may therefore work better for young people and worse for older people. Maybe that's not considered part of DSP proper but rather psychoperceptual modeling. However, DSP is often treated as presenting a kind of baseline truth, when perceived reality can be subtly different.
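As a crude sketch of what I mean by "bandwidth limiting", here is a toy low-pass model of age-related hearing loss using SciPy (the cutoff frequency and filter order are arbitrary choices for illustration; real presbycusis is a sloping, level-dependent loss, so this is only a stand-in for the idea):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandwidth_limit(x, sr, cutoff_hz=8000.0, order=6):
    """Toy model of age-related hearing loss: a plain low-pass filter.

    Real presbycusis is a sloping, level-dependent loss rather than a
    hard cutoff, so treat this as a rough stand-in, not an audiological
    model.
    """
    sos = butter(order, cutoff_hz, btype="low", fs=sr, output="sos")
    return sosfiltfilt(sos, np.asarray(x, dtype=float))

# Idea: run the training audio (or the features derived from it) through
# this before feeding the ML model, so it only "hears" roughly what the
# target listener can hear and therefore can control in their own voice.
# limited = bandwidth_limit(speech, sr=16000, cutoff_hz=6000.0)
```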
Anyway, I'd really like to know what turns basic DSP into great DSP in the musical realm. I suspect there are similar differences in other areas, such as digital camera image processing.