I think it's more because Linux OS developers have never bothered to move away from Unix's "all apps get mixed together at absolute locations" filesystem model.
If apps were like on Mac - self-contained directories that can be installed at any path - then Docker would probably be a footnote.
DT_RUNPATH has been around since probably before I was born. The problem is that it's not always used, distributions prefer sharing libraries over isolating applications, and loading shared libraries isn't the only host-dependent thing application code does.
Also you realise that docker provides more functionality than a tarball, right?
I'm not really expecting anything; it's just my experience developing commercial desktop applications on Linux that you inevitably end up with a startup script that sets LD_LIBRARY_PATH before the main process starts. And even then, global symbols with the same name collide, so you have to be really careful about what gets loaded into the process.
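The kind of startup script I mean looks roughly like this sketch (the directory layout and the `myapp-real` binary name are hypothetical; here a stub binary stands in for the real one so the script can be run end to end):

```shell
# Fake app tree with a stub "real" binary that just reports its environment.
mkdir -p /tmp/demo-app/bin /tmp/demo-app/lib
printf '#!/bin/sh\necho "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"\n' \
    > /tmp/demo-app/bin/myapp-real
chmod +x /tmp/demo-app/bin/myapp-real

# The launcher: prepend the app's bundled lib dir so its private .so
# files win the lookup, then exec the real process in place.
cat > /tmp/demo-app/run.sh <<'EOF'
#!/bin/sh
APP_DIR=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
LD_LIBRARY_PATH="$APP_DIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "$APP_DIR/bin/myapp-real" "$@"
EOF
chmod +x /tmp/demo-app/run.sh
/tmp/demo-app/run.sh
```

Note this only controls *which file* the loader picks; it does nothing about two loaded libraries exporting the same global symbol, which is the collision problem mentioned above.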
Ironically, Linux distributions probably do this mostly to save disk space (and, in the early days, RAM as well). And now with Docker you download hundreds of MB just to install a small Python script ...