> binary diffs aren't worth the complexities they introduce anymore
Noob question: Why are binary diffs impractical?
Is it because the (compiled) object code layout dances around too much? If true, isn't that fixable? Meaning: make the order more stable, to minimize the size of the diffs?
I recall a recent story/post about boosting runtime performance by optimizing object code layout. Sorry, I can't find that article again.
If true, couldn't the internals of released code be "sorted" to better enable binary diffing? Maybe the layout optimizer step would minimize the variability enough without requiring a re-sort.
A fun experiment would be to take a series of releases, run that layout optimizer, and then try the binary diffing again.
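The intuition above can be demonstrated without any real binaries. Here's a minimal sketch (not Courgette or bsdiff, just Python's stdlib `difflib` on synthetic bytes, with made-up sizes) of why a small insertion near the front of a file makes a naive positional diff touch almost every byte, while a copy-aware differ still encodes it as a tiny patch:

```python
import difflib

# Pretend "old binary": 16 KiB of repeating byte values.
old = bytes(range(256)) * 64

# "New binary": four bytes inserted at the front shifts everything after it,
# which is roughly what a small code change does to downstream layout.
new = b"\xff" * 4 + old

# Naive positional diff: count bytes that differ at the same offset.
# The 4-byte shift misaligns the entire rest of the file.
naive_changed = sum(1 for a, b in zip(old, new) if a != b)

# Copy-aware diff: SequenceMatcher finds the shifted content as one long
# copy, so a patch only needs to encode the handful of inserted bytes.
sm = difflib.SequenceMatcher(None, old, new, autojunk=False)
patch_bytes = sum(j2 - j1 for tag, i1, i2, j1, j2 in sm.get_opcodes()
                  if tag in ("insert", "replace"))

print(naive_changed)  # nearly the whole file
print(patch_bytes)    # just the insertion
```

Real binary diffs are harder because a code change also rewrites the absolute addresses embedded in every jump and call that points past the change, which defeats even copy-aware matching; that address churn is exactly what the layout-sorting idea (and tools like Courgette) try to neutralize.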
--
Didn't Google publish some research, maybe 10 years back, about better binary diffing for publishing updates? Apologies, but I sorta assumed it had become the norm.
Yep, and they use Courgette for their Chrome/Chromium updates. It's why you rarely see a long "Downloading update..." progress bar in that browser. Binary diffs on the order of 100KB are typical. Blink (no pun intended) and you'd miss the download.
Not a noob question. I don't know for sure, but I assumed they weren't worth the hassle based on experience using them. I lived through an era of binary diffs for old games, where applying a patch seemed very computationally expensive at the time - sometimes to the point that it seemed faster to download a whole release - and I assumed generating them was even more expensive. Those experiences may no longer be valid given faster hardware and better algorithms, but the gains may also be too small compared to the highly granular packaging systems that various Linux distributions use these days.
The sibling comment re: Chrome is an interesting one. I can imagine that in the specific case of very large, frequently updated binaries like Chrome, binary patching would still be beneficial.