There are three filesystems that can do transparent compression: NTFS, ZFS, and Btrfs. One is Windows-only, one isn't available for RHEL/CentOS, and the third is being sunset. XFS could really benefit from compression support.
ZFS is available on RHEL/CentOS. Not as a first-party option, but thanks to the stable kABI that Red Hat provides, the ZoL project also ships it as a prebuilt binary kernel module, ready to go, without having to fumble around with DKMS and compilers.
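Roughly what that looks like on CentOS 7 (the release RPM and repo ids below vary by minor release, so double-check against the ZoL docs):

```
# Install the ZoL repository definition (adjust to your CentOS minor release)
yum install http://download.zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm

# Switch from the default DKMS repo to the kABI-tracking kmod repo
yum-config-manager --disable zfs
yum-config-manager --enable zfs-kmod

# Installs the prebuilt kmod packages; no compiler or DKMS involved
yum install zfs
```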
Btrfs is being deprecated by RHEL only, because Red Hat has XFS expertise in-house but not Btrfs expertise, and there's no need to split limited resources. Other distributions will continue development; SUSE doesn't intend to stop.
The general mix of data on a server has some real potential for compression.
Looking at one machine I have available (compression ratio per dataset; see the sketch after this list):
* / has a compression ratio of 2.08,
* /var/cache 2.15,
* /var/tmp 1.00,
* /home 1.02,
* /srv 1.10 (a few web apps, a pgsql instance for those apps, an svn repo, samba shares, a local Prometheus store).
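These numbers come straight from the ZFS compressratio property; the dataset names below are just placeholders for this machine's layout:

```
# Per-dataset compression ratio as reported by ZFS
zfs get compressratio tank/ROOT tank/var/cache tank/var/tmp tank/home tank/srv

# Or list it alongside usage for every dataset at once
zfs list -o name,used,compressratio
```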
If your server uses spinning rust, compression can also increase your effective I/O throughput: you are trading CPU time for I/O bandwidth. Depending on your workload, it may be a sensible trade-off.
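Turning it on is a one-liner (the dataset name is just a placeholder); keep in mind that only data written after the change gets compressed:

```
# Enable lz4 for a dataset (inherited by children); existing data stays as-is
zfs set compression=lz4 tank/srv

# Verify the setting and check the achieved ratio later on
zfs get compression,compressratio tank/srv
```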
I was thinking more in terms of application data, thank you for giving me this perspective.
But I'm still wondering: does the amount of this data really warrant compression? I mean, the smallest sensible SSD is around 100 GB and already has plenty of performance.
In this case, the svn repos and samba shares are way over 100 GB, and even with classic HDDs the machine can saturate the network and still be basically idling. No need for SSDs; classic drives were more cost-effective.
Compression was just something nice to have. LZ4 in the filesystem is basically free.
Logfiles. ZFS with lz4 has given me compression ratios of over 100× with /var/log, due to the huge amount of repetition.
My first test of PostgreSQL on ZFS was also quite instructive. lz4 again achieved respectable compression (>10× for my datasets) and improved throughput severalfold (with no other tuning!).
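If anyone wants to try something similar, here is a minimal sketch of the dataset setup, assuming a pool called tank and the default PGDATA location (both are placeholders), with lz4 only, in line with the "no other tuning" remark:

```
# Hypothetical dataset for the PostgreSQL data directory
zfs create -o compression=lz4 -o mountpoint=/var/lib/pgsql tank/pgsql

# After loading some data, check what lz4 actually achieves
zfs get compressratio tank/pgsql
```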