
> Unless someone comes up with a better solution (ie: smaller)

Be careful what you wish for! If you care about smaller above all else, there are much better compression schemes nowadays.

NNCP is in the range of practical and beats LZMA: https://bellard.org/nncp/

If you really want to go crazy, large language models like BLOOM can be repurposed for compression; the Chinchilla paper reports roughly 0.3 bits per byte on GitHub code.

Of course, the cost is in GPU hardware, or in time.
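To make the "prediction = compression" idea concrete, here is a minimal sketch: any model that assigns a probability p to the next symbol can, via arithmetic coding, compress the stream to about -log2(p) bits per symbol, which is exactly how bits-per-byte figures like Chinchilla's are computed. A toy adaptive byte-frequency model stands in for the neural network here; NNCP or a transformer like BLOOM just supplies far better probabilities (at GPU cost).

  import math
  from collections import Counter

  def ideal_compressed_bits(data: bytes) -> float:
      """Ideal code length (bits) an arithmetic coder would achieve
      when driven by a simple adaptive order-0 byte model."""
      counts = Counter(range(256))   # Laplace smoothing: every byte starts at count 1
      total = 256
      bits = 0.0
      for b in data:
          p = counts[b] / total      # model's probability for this byte
          bits += -math.log2(p)      # ideal code length for this byte
          counts[b] += 1             # update the model after seeing it
          total += 1
      return bits

  if __name__ == "__main__":
      sample = b"def add(a, b):\n    return a + b\n" * 64
      bits = ideal_compressed_bits(sample)
      print(f"{bits / len(sample):.2f} bits per byte (vs 8.00 uncompressed)")

Swap the frequency model for a language model's next-token distribution and the same sum is the model's cross-entropy on the data, i.e. its achievable compressed size.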


