
Highlight (benchmark of Perl source code):

    The results follow:
    xz -T16 -9 -k - 2'056'645'240 bytes (c=12m09s, d=4m40s)
    bzip2 -9 -k   - 3'441'163'911 bytes (c=17m16s, d=9m22s)
    bzip3 -b 256  - 1'001'957'587 bytes (c=7m10s,  d=4m6s? Unclear on source page)
    bzip3 -b 511  -   546'456'978 bytes (c=7m08s,  d=4m6s? Unclear)
    zstd -T12 -16 - 3'076'143'660 bytes (c=6m32s,  d=3m51s)
edit: Adding times and compression levels
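
One plausible way to reproduce runs like these, timing each step in the shell (the tarball name perl-all.tar is a placeholder; the exact input isn't specified above):

    time xz -T16 -9 -k perl-all.tar      # keeps the input, writes perl-all.tar.xz
    time bzip2 -9 -k perl-all.tar        # writes perl-all.tar.bz2
    time bzip3 -b 256 perl-all.tar       # writes perl-all.tar.bz3; -b is the block size in MiB
    time bzip3 -b 511 perl-all.tar
    time zstd -T12 -16 perl-all.tar      # writes perl-all.tar.zst, keeps the input
    time xz -d -k -T16 perl-all.tar.xz   # decompression times measured the same way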


Did a test on a 1.3G text file (output of `find -f -printf ...`); MacBook Pro M3 Max, 64 GB. All timings are "real" seconds from bash's builtin `time`.

    files.txt         1439563776
    bzip2 -9 -k       1026805779 71.3% c=67 d=53
    zstd --long -19   1002759868 69.7% c=357 d=9
    xz -T16 -9 -k      993376236 69.0% c=93 d=9
    zstd -T12 -16      989246823 68.7% c=14 d=9
    bzip3 -b 256       975153650 67.7% c=174 d=187
    bzip3 -b 256 -j12  975153650 67.7% c=46 d=189
    bzip3 -b 511       974113769 67.6% c=172 d=187
    bzip3 -b 511 -j12  974113769 67.6% c=77 d=186
I'll stick with zstd for now (unless I need to compress the Perl source, I guess).

(edited to add 12 thread runs of bzip3 and remove superfluous filenames)
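
A minimal harness along these lines would reproduce the table (flags mirror the rows above; working on a copy and the cleanup step are assumptions on my part):

    # flags mirror the table rows; d= times would come from a similar `-d` pass
    for cmd in "bzip2 -9 -k" "zstd --long -19" "xz -T16 -9 -k" "zstd -T12 -16" \
               "bzip3 -b 256" "bzip3 -b 256 -j12" "bzip3 -b 511" "bzip3 -b 511 -j12"; do
        echo "== $cmd"
        cp files.txt work.txt
        time $cmd work.txt        # c= column: "real" seconds from bash's builtin time
        ls -l work.txt.*          # compressed size for the percentage column
        rm -f work.txt work.txt.*
    done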


Since I only have 12 perf cores on this Mac, I tried the xz test again with 12 threads.

    xz -T12 -9 -k      993376236 69.0% c=83 d=9
~10% faster for compression with the same size output.


That d=9 sure wins the day there, for me.


Additional benchmarks on the same dataset:

    uncompressed               - 19'291'709'440
    bzip2 -9                   -  3'491'493'993 (sanity check)
    zstd -16 --long            -    593'915'849
    zstd -16 --long=31         -    122'909'756 (requires equivalent argument in decompressor due to needing ~4GB RAM)
    zstd -19 --long            -    505'728'419
    zstd -19 --long=31         -    106'601'594 (requires equivalent argument in decompressor)
    zstd --ultra -22           -    240'330'522
    zstd --ultra -22 --long=31 -     64'899'008 (requires equivalent argument in decompressor)
    rar a -m5 -md4g -s -mt8    -     64'837'044
    
As you'll notice, my sanity check actually gives a slightly different size; I'm not sure why. The benchmark is a bit underspecified because new Perl versions were released in the interim; I used all releases up to perl-5.37.1 to get the correct number of files. Just treat all numbers as having about 2% uncertainty to account for this difference.

I can't provide compression/decompression times, but the --long or --long=31 arguments should not have a major impact on speed; they mostly affect how much memory is used. --long=31 requires setting the same flag on the decompressor, making that option mostly useful for internal use, not for archives meant for public consumption.
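
To illustrate the pairing requirement (file names are placeholders): anything above zstd's default 128 MiB decompression window has to be re-declared when decompressing, while plain --long output does not.

    zstd --ultra -22 --long=31 big.tar -o big.tar.zst   # ~2 GiB match window, ~4 GB RAM as noted above
    zstd -d --long=31 big.tar.zst -o big.tar            # same flag needed here, or zstd refuses
    # output made with plain --long (128 MiB window) decompresses with a stock `zstd -d`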

As you can see, the benchmark chosen by the author mostly comes down to finding similar data that's far away. I wonder whether bzip3 actually does this better than other algorithms (especially in less memory), or whether it simply chose default parameters that use more memory.

Edit: added more benchmarks


My standard test is compressing a "dd" disc image of a Linux install (I use these for work), with unused blocks being zeroed. Results:

    Uncompressed:      7,516,192,768
    zstd:              1,100,323,366
    bzip3 -b 511 -j 4: 1,115,125,019


Hi, tool author here!

Thank you for your benchmark!

As you may be aware, different compression tools fill different data-type niches. In particular, less specialised statistical methods (bzip2, bzip3, PPMd) generally perform poorly on vaguely defined binary data, due to the unnatural distribution of the underlying data, which at least in bzip3's case does not lend itself well to suffix sorting.

Conversely, Lempel-Ziv methods usually perform suboptimally on vaguely defined "textual data", because the later entropy-coding stages cannot make good use of the information encoded in match offsets while maintaining fast decompression performance. It's a long story that I could definitely go into detail about if you'd like, but I want to keep this reply short.

All things considered, data compression is more of an art than a science, trying to find an acceptable spot on the time vs. compression ratio curve. I created bzip3 as an improvement on the original algorithm, hoping that we can replace some uses of bzip2 with a more modern and worthwhile technology as of 2022. I included benchmarks against LZMA, zstandard, etc. mostly as a formality; in reality, if you were choosing a compression method it would depend very much on what exactly you're trying to compress, but my personal stance is that bzip3 would likely be strictly better than bzip2 across all of those uses.

bzip3 usually operates on bigger block sizes, up to 16 times bigger than bzip2's. Additionally, bzip3 supports parallel compression/decompression out of the box. For fairness, the benchmarks were performed in single-thread mode, though they still aren't entirely fair, as bzip3 itself uses a much bigger block size. What bzip3 aims to be is a replacement for bzip2 on modern hardware. What used to not be viable decades ago (arithmetic coding, context mixing, SAIS algorithms for BWT construction) has become viable nowadays, as CPU frequencies don't tend to change much, while caches and RAM keep getting bigger and faster.
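
A minimal usage sketch based on the flags seen elsewhere in this thread (file names are placeholders; the bzip2-style interface, where -d decompresses and the suffix is stripped, is my assumption):

    bzip3 -b 511 -j 12 big.tar     # compress with ~511 MiB blocks -> big.tar.bz3
    bzip3 -d -j 12 big.tar.bz3     # parallel decompression back to big.tar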


Thanks for the reply. I just figured I'd try it and see, and the bzip3 results are extremely good. I figured it was worth trying because a fair bit of the data in that image is non-binary (man pages, config files, shell/python code), but probably the bulk of it is binary (kernel images, executables).


Shouldn't a modern compression tool, targeting a high compression rate, try to switch its compression method on the fly depending on the input data?

I have no idea about compression, just a naive thought.


7-Zip can apply a BCJ filter before LZMA to more effectively compress x86 binaries (https://www.7-zip.org/7z.html). Btrfs’ transparent compression feature checks whether the first block compressed well; if not, it gives up for the rest of the file.
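
For the curious, one way to request that filter chain explicitly on the 7-Zip command line (archive name, input path, and dictionary size are arbitrary here):

    # run the x86 branch-converter (BCJ) as the first coder, LZMA2 as the second
    7z a -m0=BCJ -m1=LZMA2:d=64m binaries.7z /usr/bin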



If the focus is on text, then the best example is probably the SQLite amalgamation file, which is a 9 MB C file.


A couple other data points:

    zstd --long --ultra -2:                                1,062,475,298
    zstd --long=windowLog --zstd=windowLog=31 --ultra -22: 1,041,203,362
So for my use case the additional settings don't seem to make sense.


Given that it's BWT-based, the difference should be most prominent on codebases with huge amounts of mostly identical files. Most compression algorithms won't help if an exact duplicate of some block falls outside the compression window (and they will be less efficient if it's near the end of the window).

But here's a practical trick: sort files by extension and then by name before putting them into an archive, and then use any conventional compression. It will very likely put the similar-looking files together, and save you space. Done that in practice, works like a charm.
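
A quick shell approximation of that trick (GNU tar assumed; sorting each path by its reversed text groups files with the same extension and similar names next to each other):

    find src -type f \
      | rev | sort | rev \
      | tar -cf - --no-recursion -T - \
      | zstd -19 -o src-sorted.tar.zst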


Handy tip for 7-Zip, the `-mqs` command line switch (just `qs` in the Parameters field of the GUI) does this for you. https://7-zip.opensource.jp/chm/cmdline/switches/method.htm#...
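
For example (archive and directory names are placeholders):

    7z a -mqs -mx=9 sources.7z perl-sources/

The qs parameter sorts files by type (extension) inside a solid archive instead of by name, which gives roughly the grouping described above.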


Ooh, that’s neat. How much improvement do you get from this? Is it more like a single- or double-digit % difference?


To make your comment more useful, you’ll want to include compression and decompression times.

Using the results from the readme, it seems like bzip3 performs competitively with zstd on both counts.


I've experimented a bit with bzip3, and I think the results in the readme are not representative. I think it's a hand-picked case, with an uncommon input and unfair parameter choices. And it was run against an HDD, which skews the results even more.

For instance, with an 800 MB SQL file, for the same compression time and the best parameters I could find, bzip3 produced a smaller file (5.7% compression ratio) than zstd (6.1% with `--long -15`). But decompression was about 20× slower (whether using all cores or just one).

I'm not claiming my stupid benchmark is better or even right. It's just that my results were very different from bzip3's readme. So I'm suspicious.


Also the compression levels...


I believe the compression levels are included in the list above.


Not for zstd or lzma


Added, thanks.


A 4x improvement over LZMA is an extraordinary claim. I see the author has also given a result after applying lrzip (which removes long-range redundancies in large files), and the difference isn’t so great (but bzip3 still wins). Does the amazing result without lrzip mean bzip3 is somehow managing to exploit some of that long-range redundancy natively?

I’d be astonished if such a 4x result generalized to tarballs that aren’t mostly duplicated files.


I'm currently running my own benchmarks; my preliminary result is that zstd becomes competitive again once you add the --long option (so `zstd --long -16 all.tar` instead of `zstd -16 all.tar`). That's an option not everyone may be aware of, but its usefulness should be intuitive for this benchmark of >200 very similar files.


I'd argue that's actually the lowlight of the README since that is a very poor choice of benchmark. Combining a multitude of versions of the same software massively favors an algorithm good at dealing with this kind of repetitiveness in a way that will not be seen in typical applications.

The "Corpus benchmarks" further down in the README are IMHO much more practically relevant. There, the compression ratio of bzip3 is not significantly better, but the runtime at least seems quite a bit lower than LZMA's.


In the Linux source benchmark the results are interestingly much closer, with LZMA still holding up well.

What makes the Perl source benchmark special? Deduplication?


An old friend used to say that Perl is line noise that was given sentience.


This is the source - which is probably C.


It's 246 C files and 3163 Perl files.


Why -T12 for zstd and -T16 for xz? How many threads is bzip3 using?


From the source, it looks like bzip3 defaults to 1 thread if not explicitly set by arguments.


...using zstd level 16, when zstd goes up to 22. And without turning on zstd's long-range mode.



