I've been thinking a lot about Software Archeology, trying to envision what it might be. I don't think we even have the tools yet to do a Site Survey. I can't fully picture what those tools would look like, but this is my best attempt to describe the vague image in my head.
I can see it being at least somewhat feasible with package managers that let the developer configure, build, and install packages locally according to their needs .. something like GoboLinux with its bundle garden, combined with a configuration-management tool that can build every package in every possible configuration ..
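To make that concrete, here's a minimal sketch of the configuration-matrix idea. Everything in it is hypothetical (the flag names, the build step); it mostly shows how fast the matrix explodes:

```python
from itertools import product

# Hypothetical boolean feature flags for one package.
FLAGS = {
    "ssl": (True, False),
    "ipv6": (True, False),
    "gui": (True, False),
}

def all_configurations(flags):
    """Enumerate the full configuration matrix: 2^n builds for n boolean flags."""
    names = list(flags)
    return [dict(zip(names, combo)) for combo in product(*flags.values())]

for config in all_configurations(FLAGS):
    # A real tool would hand each config to the package's build recipe
    # and install the result into its own prefix, GoboLinux-style.
    print(config)
```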
Bit of a goose, though. As in, not sure it'll fly in its current (scrambled eggs) state. Perhaps a rewrite is really what's needed .. or maybe HaikuOS can deliver this without much ruffling of feathers?
Imagine if the provenance of every line of code or machine instruction were tracked all the way from what the human wrote to what the machine executes, and then combined with data-flow analysis and with runtime analysis fed back in via a process like Profile-Guided Optimization.
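I don't know of a toolchain that exposes this end to end, but here's a toy sketch of what the provenance database might look like: a simplified, source-map-like record per emitted instruction, plus a PGO-style feedback hook. All the names here are my own invention:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SourceSpan:
    """A location in what the human actually wrote."""
    file: str
    line: int

@dataclass
class ProvenanceRecord:
    """Ties one emitted instruction back through every transformation stage."""
    instruction_addr: int
    origin: SourceSpan
    transformations: list = field(default_factory=list)  # e.g. ["inlined", "vectorized"]
    execution_count: int = 0  # filled in by the runtime-feedback pass

# address -> record, so a profiler sample can be traced back to a source line
provenance = {}

def record_sample(addr):
    """Called from the PGO-style feedback loop for each profiled sample."""
    if addr in provenance:
        provenance[addr].execution_count += 1

def never_executed_lines():
    """Source lines none of whose emitted instructions ever ran."""
    live = {r.origin for r in provenance.values() if r.execution_count > 0}
    return {r.origin for r in provenance.values()} - live
```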
The outcome would be somewhat like a package-management tool, but one that can subdivide and prune a package even if the original author wrote it monolithically.
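For the "subdivide" half, the core operation is just graph analysis on the package's internal dependencies. A toy version, with a made-up module graph: each connected component is a candidate sub-package:

```python
from collections import defaultdict

# Hypothetical intra-package dependencies: module -> modules it imports.
DEPS = {
    "cli": {"parser"},
    "parser": {"core"},
    "core": set(),
    "gui": {"widgets"},
    "widgets": set(),
}

def subdivide(deps):
    """Split a monolithic package along the connected components of its
    dependency graph; each component could ship as its own sub-package."""
    adj = defaultdict(set)
    for mod, targets in deps.items():
        adj[mod] |= targets
        for t in targets:
            adj[t].add(mod)  # undirected view of the graph
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components

print(subdivide(DEPS))  # two components: {cli, parser, core} and {gui, widgets}
```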
It could also provide feedback in the IDE, showing which lines are "dead code" that don't contribute to the features you've declared (via tests) to be important.
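coverage.py already collects the raw data for this; here's the essence of the idea using nothing but `sys.settrace`, so the IDE hook is easy to imagine. Note this naive version counts blank and comment lines as "dead" too .. a real tool would consult the AST:

```python
import sys
from collections import defaultdict

executed = defaultdict(set)  # filename -> set of executed line numbers

def tracer(frame, event, arg):
    """Record every (file, line) that the declared tests actually execute."""
    if event == "line":
        executed[frame.f_code.co_filename].add(frame.f_lineno)
    return tracer

def run_with_coverage(test_fn):
    """Run one declared-important test under the tracer."""
    sys.settrace(tracer)
    try:
        test_fn()
    finally:
        sys.settrace(None)

def dead_lines(filename, total_lines):
    """Lines the tests never touched: candidates for the IDE to grey out."""
    return set(range(1, total_lines + 1)) - executed[filename]
```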
Visual Studio's Code Architecture Diagrams are a good model for how this could look and work, I think. It's basically just missing a "Crop all code outside this dependency chain" action for any given function.
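That crop is just a transitive closure over the call graph. A toy version, with a made-up graph:

```python
# Hypothetical call graph: function -> functions it calls directly.
CALLS = {
    "main": {"load", "render"},
    "load": {"parse"},
    "parse": set(),
    "render": {"draw"},
    "draw": set(),
    "legacy_export": {"parse"},
}

def crop_to(entry, calls):
    """Everything transitively reachable from `entry`; the diagram
    would hide every node outside this set."""
    keep, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn not in keep:
            keep.add(fn)
            stack.extend(calls.get(fn, ()))
    return keep

print(crop_to("render", CALLS))  # {'render', 'draw'}: main, load, parse, legacy_export all cropped
```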
oooohhh I love that idea. I wonder if this could be utilized for npm.