> You can pretty much draw a parallel line with hardware advancement and the bloating of software.
I do not think it is surprising that there is a Jevons paradox-like phenomenon with computer memory, and as with other instances of the paradox, it does not necessarily follow that it stems from a corresponding decline in resource-usage efficiency.
This is not an issue in my view. I like the fact that I can download 100 MiB ultra-high-resolution TIFFs of photographs scanned from the original negatives from the Library of Congress, and 24-bit/96 kHz FLAC captures of 78 RPM records from the Internet Archive. In addition to maintaining the completeness and quality of information, one of the main goals of preservation is to guard against further degradation and information loss. You should try to preserve the highest-quality copies available (because they contain more information), and re-encoding (deliberate degradation) should only be used to create convenient access copies.
Inferior copies, in addition to being less informative, have the potential to misinform. Only the archivist will enjoy space savings. All the readers who might consult your library in the infinite future will bear the cost.
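To make the access-copy distinction concrete, here is a minimal sketch, assuming ffmpeg is installed; the file names and the Opus bitrate are purely illustrative, not part of any real workflow:

```python
# A minimal sketch: the lossless preservation master is never overwritten;
# the lossy file is only a convenience derivative for everyday listening.
import subprocess

master = "master_24bit_96khz.flac"   # hypothetical preservation master
access = "access_copy.opus"          # hypothetical access copy

subprocess.run(
    ["ffmpeg", "-v", "error", "-i", master,
     "-c:a", "libopus", "-b:a", "128k", access],
    check=True,
)
```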
> ...(e.g. lossless FLAC). This inflates the file size...
This is entirely the wrong view. The file size of a raw capture compressed to FLAC should be thought of as the “true” or “correct” size. It is roughly the most efficient representation of sampled audio data we can presently achieve, balancing various trade-offs. In preservation we seek to preserve the item or signal itself, not simply what we might perceive of it. This human-centric, perception-first view is just wrong. There is data in film photographs that cannot be perceived visually yet can be of interest to researchers and can be revealed with digital image analysis tools.
As an example of how much information celluloid can contain see: https://vimeo.com/89784677
(context: he is comparing a Blu-ray and a scan of a 35mm print)
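To make the earlier FLAC point concrete: whether the capture is stored as WAV or FLAC, the decoded samples are bit-identical, which is why the FLAC size is a fair measure of the data actually being preserved. A minimal sketch, assuming ffmpeg is installed; the file names are hypothetical:

```python
# Decode both the original capture and its FLAC encode through the same
# pipeline (raw 32-bit PCM) and compare checksums; identical hashes mean
# the FLAC round-trip lost nothing.
import hashlib
import subprocess

def pcm_digest(path: str) -> str:
    """Decode a file to raw 32-bit little-endian PCM on stdout and hash it."""
    proc = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path,
         "-f", "s32le", "-acodec", "pcm_s32le", "-"],
        capture_output=True, check=True,
    )
    return hashlib.sha256(proc.stdout).hexdigest()

# "capture.wav" and "capture.flac" are hypothetical file names.
assert pcm_digest("capture.wav") == pcm_digest("capture.flac"), "decode mismatch"
print("FLAC round-trip is bit-identical to the original capture")
```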
YouTube has gotten so bad that even normal people are complaining about it now. A middle-aged woman who volunteers with me said she did not feel comfortable using YouTube because of the number of inappropriate ads. I ended up giving her links to a few Invidious instances, and she loves them even though they are slower and not entirely reliable. She also understood the concept of a front-end without much explanation on my part.
Finance is increasingly reliant on it too; my bank moved its entire system to AWS. The amount of power being handed over to these cloud companies in exchange for “convenience” is astonishing.
A Cloudflare Turnstile challenge caused Servo to crash during my testing, just as it did with Pale Moon earlier this year. They are becoming the new gatekeepers of the Web.
It is also a good reminder that the people who typically create and edit Wikipedia articles in your local language may simply not find the subject interesting, or may be prevented from documenting it effectively by the language barrier. I was researching the history of a certain illness, and one physician I was investigating did not have an English Wikipedia article. Of course that does not mean he did not exist; I later found him on the Russian Wikipedia: https://ru.wikipedia.org/wiki/%D0%90%D1%81%D1%82%D0%B2%D0%B0....
The “over 1 TFLOPS” claim for the M1 appears to refer to single-precision (FP32) floats, whereas FLOPS figures for supercomputers, including the one given for the CRAY-1, are almost always based on double-precision (FP64) floats. The M1's double-precision performance would be lower, perhaps around half of its single-precision figure.
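Back-of-the-envelope arithmetic for the comparison; the 2:1 FP32:FP64 ratio and the ~160 MFLOPS peak figure commonly quoted for the CRAY-1 are my assumptions, not numbers from the thread:

```python
# Rough sketch of the precision caveat, not a benchmark.
m1_fp32_tflops = 1.0                    # the quoted single-precision claim
m1_fp64_tflops = m1_fp32_tflops / 2     # assumed 2:1 FP32:FP64 ratio
cray1_fp64_tflops = 160e6 / 1e12        # ~160 MFLOPS peak, expressed in TFLOPS

print(f"M1 (FP64, assumed): {m1_fp64_tflops:.3f} TFLOPS")
print(f"CRAY-1 (FP64):      {cray1_fp64_tflops:.6f} TFLOPS")
print(f"Ratio: roughly {m1_fp64_tflops / cray1_fp64_tflops:,.0f}x")
```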