
Indeed, green contributes more. Example of the Y channel after blurring r/g/b: https://imgur.com/a/3p15Qe1

And coloured versions: https://imgur.com/a/Knq2Ue3

(image source: https://en.wikipedia.org/wiki/Flower#/media/File:Flower_post...)



More, as in 10 times more for green, and 3 times more for red. So it's true that we're pretty blind to blue; it's just that the "focusing" explanation is not correct...
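A quick sanity check of those ratios, assuming the Rec. 709 luma coefficients (other standards such as Rec. 601 give numbers in the same ballpark):

```python
# Rec. 709 luma coefficients (an assumption: the ~10x and ~3x figures in
# this thread match these ratios; Rec. 601 weights are similar).
KR, KG, KB = 0.2126, 0.7152, 0.0722

green_vs_blue = KG / KB  # green's contribution to luma relative to blue's
red_vs_blue = KR / KB    # red's contribution relative to blue's

print(round(green_vs_blue, 1), round(red_vs_blue, 1))  # prints: 9.9 2.9
```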


Except that our perception of brightness, like loudness, is logarithmic, not linear.

So even if blue is 1/10th or 1/3rd as strong as another color, it's not really a big deal in terms of sensitivity.


I think you're imagining something like a logarithm for each channel, log(R/3) + log(G) + log(B/10). But that's not how it works.

Instead it's like log(R/3 + G + B/10). So when G and B are about the same size, the effect of B will be negligible. It's only when R and G are small that the logarithm will kick in and let you see detail using the blue.
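This is easy to check numerically with the toy model above, log(R/3 + G + B/10) (the exact weights are illustrative, per the comment, not a colour-science standard):

```python
import math

def perceived(r, g, b):
    # log of the weighted sum, per the comment's toy model log(R/3 + G + B/10)
    return math.log(r / 3 + g + b / 10)

# With green present, adding or removing blue barely moves the result...
bright = perceived(0.5, 0.8, 1.0) - perceived(0.5, 0.8, 0.0)
# ...but with red and green near zero, blue dominates what's left.
dark = perceived(0.01, 0.01, 1.0) - perceived(0.01, 0.01, 0.0)

print(round(bright, 3), round(dark, 2))  # prints: 0.098 2.14
```

So the logarithm only "rescues" the blue channel in shadows, where the other channels' contributions are small.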

So for seeing detail in a normal picture, blurring the green will have a much larger effect than blurring the blue. But entirely removing the green would still leave the picture looking sharp, because we could see the detail using the red (or, if we removed that too, the blue). If we merely blur the green, it overwhelms the blue and red and makes the picture look blurry, even though blue and red alone would carry enough sharp detail if the green weren't there.
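A minimal sketch of that blur experiment on a synthetic step edge, assuming Rec. 709 luma weights and a simple box blur (the edge-sharpness measure here, peak gradient of Y, is just one convenient proxy):

```python
import numpy as np

# A 1-D step edge present identically in all three channels (0 -> 1).
edge = np.repeat([0.0, 1.0], 50)
r, g, b = edge.copy(), edge.copy(), edge.copy()

def box_blur(x, width=9):
    # Simple moving-average blur.
    return np.convolve(x, np.ones(width) / width, mode="same")

def luma(r, g, b):
    # Rec. 709 luma weights (an assumption; ratios match the thread's figures).
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def sharpness(y):
    # Peak gradient: high for a crisp edge, low for a blurred one.
    return np.max(np.abs(np.diff(y)))

base = sharpness(luma(r, g, b))
blur_b = sharpness(luma(r, g, box_blur(b)))   # blur only the blue channel
blur_g = sharpness(luma(r, box_blur(g), b))   # blur only the green channel

# Blurring blue barely softens the luma edge; blurring green softens it a lot.
print(blur_b / base > 0.9, blur_g / base < 0.4)  # prints: True True
```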


To be honest, I need to rethink the arguments of the linked article - if we just hold coloured filters in front of our eyes to exclude each channel in turn, the image (I think) remains sharp (or at least it does with red/cyan anaglyph glasses).



