Very cool stuff!


Two things: 1. I like the natural feel it gives to images. 2. It's easy to control. As easy as it is to cook up a matrix-based error diffusion dither (like Floyd-Steinberg), there are a lot of things to take care of to reduce artifacts and other bad side effects.
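For reference, here's a minimal sketch of the classic fixed-matrix version (plain Floyd-Steinberg on a grayscale float image; no serpentine scanning or other artifact-reduction tricks, so take it as an illustration rather than a reference implementation):

    import numpy as np

    def floyd_steinberg(img, levels=2):
        # img: float grayscale in [0, 1]. Returns the dithered image.
        out = img.astype(np.float64)
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = np.clip(round(old * (levels - 1)), 0, levels - 1) / (levels - 1)
                out[y, x] = new
                err = old - new
                # The fixed 7/16, 3/16, 5/16, 1/16 diffusion matrix
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return out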

I also want to take a bit more time with the dithering topic and explore other methods, which I'll hopefully add in the future.


Thanks for answering! I agree that in the one example image in the readme the output looks quite natural. In other implementations I've seen, I could always notice "ghosts" of the Hilbert curve in the resulting image as a subtle edge artifact, so that turned me off the algorithm a bit, even though I find it a very elegant approach to dithering. The other images were usually 1-bit though; that might have been a factor.

On the note of matrix-based error diffusion and exploring other methods: maybe you'd enjoy Victor Ostromoukhov's variable coefficient dithering paper[0]. Instead of one diffusion matrix, it uses a different diffusion matrix depending on the value of the input pixel, and the result is much more blue-noise-like dithering.
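The core loop looks something like the sketch below. Note the coefficient table here is a made-up stand-in: the real algorithm indexes an optimized 256-entry table of (right, down-left, down) triples from the paper, and scans in serpentine order, both of which I'm skipping:

    import numpy as np

    # Made-up stand-in values; the paper tabulates an optimized
    # (right, down-left, down) triple for every input level 0..255.
    COEFF_TABLE = {0: (13.0, 0.0, 5.0), 128: (7.0, 3.0, 5.0), 255: (13.0, 0.0, 5.0)}

    def coeffs_for(level):
        # Nearest-entry lookup; the real thing indexes a full 256-entry table.
        key = min(COEFF_TABLE, key=lambda k: abs(k - level))
        c = np.array(COEFF_TABLE[key])
        return c / c.sum()

    def variable_coeff_dither(img):
        # img: float grayscale in [0, 1]; 1-bit output.
        out = img.astype(np.float64)
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                out[y, x] = new
                err = old - new
                # Diffusion weights depend on the input pixel's level
                r, dl, d = coeffs_for(int(np.clip(round(old * 255), 0, 255)))
                if x + 1 < w:
                    out[y, x + 1] += err * r
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * dl
                    out[y + 1, x] += err * d
        return out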

Given that his paper is almost a quarter century old, I've been wondering if we could find better matrices using modern solver algorithms on today's hardware. I've never used a solver myself, so I wouldn't know how to set this up though.

Also, there's Zhou-Fang dithering, which takes Ostromoukhov's algorithm and introduces a little bit of randomness to remove artifacts[1]. I have JavaScript implementations of both algorithms in an Observable notebook if you want to try them out[2]. It's limited to 1-bit output though.
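The randomization amounts to something like the sketch below. This is a simplification: as I understand it, the actual paper also modulates the quantization threshold, with noise amplitudes tuned per input level:

    import numpy as np

    rng = np.random.default_rng()

    def perturb(coeffs, strength=0.5):
        # Randomly modulate the per-level diffusion weights to break
        # up structured artifacts, then renormalize so they sum to 1.
        c = np.asarray(coeffs, dtype=float)
        c = c * (1.0 + strength * (rng.random(c.size) - 0.5))
        return c / c.sum()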

[0] https://perso.liris.cnrs.fr/victor.ostromoukhov/publications...

[1] https://dl.acm.org/doi/abs/10.1145/1201775.882289

[2] https://observablehq.com/@jobleonard/variable-coefficient-di...


GIF rendering is a big use case.

pngquant was a big comparison subject during development (it's a brilliant piece of work, and a mature tool that does a few things beyond just quantizing). Take it with a grain of salt of course, but in terms of raw quantization performance, patolette had the edge, particularly when dealing with images with tricky color distributions. That said, pngquant's dithering algorithm is way more sophisticated (and animation-aware, I think). In fact, one thing where it really shines is spotting with pretty good precision where adding noise would actually hurt instead of help.

Another thing is that patolette can quantize to both high and low color counts (the latter particularly with CIELuv), whereas pngquant is better suited to high color counts.


Yeah, good point. I'll definitely add an image showing the saliency map tradeoff.

Regarding full examples: because some other projects seem to have cherry-picked cases where they perform very well, I wanted to go for a "try it out yourself" approach, at least for now. Maybe in the future I'll add a proper showcase. Thanks for the feedback :)


Take a look at my demo [1]. I'd love to see the same set of images quantized with your thing.

The quant frog [2] is interesting since it has an artificial single-pixel line at the top with a ton of colors to trip quantizers up.

[1] https://github.com/leeoniya/RgbQuant.js/

[2] https://github.com/leeoniya/RgbQuant.js/blob/master/demo/img...


Something that caught my eye is that it seemed to be a kind of "controlled" K-means. One problem with K-means is that it's too sensitive to the initial state. You can run it multiple times with different initial states or use fancy initialization techniques (or both), but even then nothing really guarantees you won't be stuck with a bad local optimum. Another thing was that the guy who wrote the paper also authored an insanely high quality method the year before and claimed this one was better. Not seeing any available implementations, I wondered how good it actually was.
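For illustration, the usual mitigation looks like this in scikit-learn (a generic sketch, nothing to do with patolette's internals): several restarts with k-means++ seeding, keeping the run with the lowest inertia:

    import numpy as np
    from sklearn.cluster import KMeans

    pixels = np.random.rand(10000, 3)  # stand-in for an image's (N, 3) colors

    # 10 restarts with k-means++ seeding; sklearn keeps the run with
    # the lowest inertia. Still no guarantee of a good local optimum.
    km = KMeans(n_clusters=16, init="k-means++", n_init=10).fit(pixels)
    palette = km.cluster_centers_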

The optional K-means step just grabs whatever palette the original method yielded and uses it as the initial state for a final refinement step. This gives you (or gets you closer to) a local optimum. In a lot of cases it makes little difference, but it can bump up quality sometimes.
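In scikit-learn terms, that refinement step amounts to something like this (a sketch of the idea, not patolette's actual code):

    import numpy as np
    from sklearn.cluster import KMeans

    def refine_palette(pixels, palette):
        # Seed k-means with the palette the main method produced
        # and let it converge to (or toward) a local optimum.
        km = KMeans(n_clusters=len(palette), init=np.asarray(palette), n_init=1)
        km.fit(pixels)
        return km.cluster_centers_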


Gamma-aware operations happen in the C code. The Python code you're referencing is just changing the scale of the color intensities. What you shouldn't do is liberally add up sRGB colors, take averages, and generally do math on them unless you're aware of the non-linearity of the space.
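A generic example of the pitfall, using the standard sRGB transfer functions (nothing patolette-specific): to average two colors, convert to linear light, average, and convert back.

    import numpy as np

    def srgb_to_linear(c):
        # c: sRGB values in [0, 1]
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 1.0, 0.0])

    naive = (a + b) / 2  # averaging sRGB directly comes out too dark
    correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)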


Will take a look, thanks for sharing! This doesn't implement Wu v2, though; it implements a different method he published a year later.

