I'm not suggesting reducing the size of the LibGen collection, I'm thinking along the lines of "I have 2TB of disk space spare, and I want to fill it with as much culturally-relevant information as possible".
If the entire collection were available as a torrent (maybe it already is?), I could select which files I wish to download, and then seed.
Those who have 52TB to spare would of course aim to store everything, but most people don't.
Just as the proposal in the OP would result in the remaining 32.59 TB of data being less well replicated, my approach has the problem that less "popular" files would be poorly replicated, but you could mitigate that by also selecting some files at random (e.g. 1.5 TB chosen algorithmically, 0.5 TB chosen at random).
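To make the idea concrete, here is a minimal sketch of that mixed selection: most of the budget goes to files ranked by some relevance metric, the remainder to random picks so unpopular files still get replicated somewhere. The catalog format (file id, size in bytes, score) is a made-up placeholder, not LibGen's actual metadata schema.

```python
import random

TB = 10**12
RANKED_BUDGET = int(1.5 * TB)   # e.g. 1.5 TB chosen by the ranking
RANDOM_BUDGET = int(0.5 * TB)   # e.g. 0.5 TB chosen at random

def pick_files(catalog):
    """catalog: list of (file_id, size_bytes, score) tuples (hypothetical format)."""
    chosen, used_ranked, used_random = [], 0, 0

    # Pass 1: take the highest-scoring files until the ranked budget is full.
    remaining = []
    for file_id, size, score in sorted(catalog, key=lambda f: f[2], reverse=True):
        if used_ranked + size <= RANKED_BUDGET:
            chosen.append(file_id)
            used_ranked += size
        else:
            remaining.append((file_id, size, score))

    # Pass 2: fill the rest of the disk with random picks from what's left,
    # so less "popular" files still end up with some seeders.
    random.shuffle(remaining)
    for file_id, size, _ in remaining:
        if used_random + size <= RANDOM_BUDGET:
            chosen.append(file_id)
            used_random += size

    return chosen
```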
I don't think you deserved the downvotes, and I don't think it's a bad idea either; indeed, some coordination as to how to seed the collection is really needed.
For instance, phillm.net maintains a dynamically updated list of LibGen and Sci-Hub torrents with fewer than 3 seeders so that people can pick some at random and start seeding: https://phillm.net/libgen-seeds-needed.php
The concept of an importance score feels very centralized and against the federated / free nature of the site. Towards what end?
If the “importance score” impacts curation, I am strongly against it. Not only is it icky, but how is it different from a function of popularity?