Spectral rendering, part 2: Real-time rendering (momentsingraphics.de)
59 points by todsacerdoti 19 days ago | 18 comments


Apparently (from a layman's perspective) the difference between conventional RGB ray tracing and spectral ray tracing is this:

RGB rendering assumes all light sources consist of three lights (red, green, and blue) whose brightnesses vary. E.g., a yellow light is always modeled as a red light plus a green light.

In contrast, spectral rendering allows light sources with arbitrary spectra. A pure yellow light (~580 nm) is different from a red+green light.

The physical difference is this: If you shine, for example, a pure yellow light on a scene, everything looks yellow, just more or less dark. But if you shine a red+green (impure yellow) light on a scene, green objects will be green and red objects will be red. Not everything will appear as a shade of yellow. Conventional RGB rendering can only model the latter case.

This means some light sources, like high-pressure sodium lamps, cannot be accurately rendered with RGB rendering: red and green surfaces would look too bright.
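
A toy version of the sodium case, with made-up spectra (a spike standing in for the lamp, crude bands standing in for proper color-matching functions):

    import numpy as np

    wl = np.arange(400, 701)  # wavelengths in nm

    # made-up sodium lamp: nearly all power in a spike near 589 nm
    light = np.where(np.abs(wl - 589) <= 2, 1.0, 0.0)

    # made-up green paint: reflects mostly 500-560 nm
    paint = np.where((wl > 500) & (wl < 560), 0.8, 0.02)

    # spectral: multiply per wavelength, then total the result
    spectral = (light * paint).sum()  # ~0.1 -> surface renders nearly black

    def bands(s, reduce):
        # collapse a spectrum to crude R/G/B band values
        return np.array([reduce(s[wl >= 600]),
                         reduce(s[(wl >= 500) & (wl < 600)]),
                         reduce(s[wl < 500])])

    # RGB: collapse both spectra to 3 numbers first, then multiply
    rgb = bands(light, np.sum) * bands(paint, np.mean)
    print(spectral, rgb)  # green channel comes out far too bright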

(Also note that the linked post also has a part 1 and a part 3, accessible via the "next/previous post" links at the bottom.)


> RGB assumes all light sources consist of three RGB lights

Another way to say this is that conventional 3 channel renderers pre-integrate the spectrum of lights down to 3-channel colors. They don’t necessarily assume three lights, but it’s accurate to say that’s the net effect.

It’s mostly just about when you integrate, and what you have to do when you delay the integration. It’s kind of a subtle distinction, really, but rendering with spectral light and material data and integrating down to RGB at the end more closely mimics reality; the cones in our eyes are the integrators, and before that everything is spectral. Or more accurately, individual photons have wavelength. A spectrum is inherently a statistical summary of the behavior of many photons.
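
A sketch of that when-you-integrate distinction, with Gaussians as stand-ins (the real CIE color-matching functions are tabulated data; this is just one green-ish channel):

    import numpy as np

    wl = np.linspace(400, 700, 301)
    gauss = lambda mu, sigma: np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

    cmf_g = gauss(550, 40)   # stand-in for the CIE y-bar curve
    light = gauss(589, 2)    # spiky, sodium-like emission
    refl  = gauss(530, 20)   # greenish surface reflectance

    norm = np.trapz(cmf_g, wl)

    # spectral: multiply spectra per wavelength, integrate at the very end
    spectral_g = np.trapz(light * refl * cmf_g, wl) / norm

    # RGB: pre-integrate each spectrum to a channel value, then multiply
    rgb_g = (np.trapz(light * cmf_g, wl) / norm) * \
            (np.trapz(refl * cmf_g, wl) / norm)

    print(spectral_g, rgb_g)  # they disagree, more so as spectra get spikier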


I guess it mainly makes a difference for light sources that are very yellow but not very red and green (sodium lights) or very cyan but not very green and blue (no realistic example here). Considering that actual sodium lights are largely being replaced by white LEDs, which can be modeled quite well with RGB ray tracing, spectral rendering might not offer a significant advantage for most applications.


Yeah, exactly. Spectral often doesn’t give you a very different result from RGB. It rarely matters for entertainment rendering like films & games, but it’s useful for scientific predictive rendering.

Sodium lights are a problem not because they’re yellow, but because they have very spiky spectra. Smooth spectra, whether it’s lights or materials, will tend to work fine in RGB regardless of the color.


> Sodium lights are a problem not because they’re yellow, but because they have very spiky spectra.

A side effect of this (and of other low-CRI lights) is that it's hard to take pictures under them: if you take a picture of a person, you want their skin tone to look just right, or else they look weird, sickly, and unattractive.

Regular white balance algorithms are not quite able to handle this. So you can see why phone cameras are motivated to do AI processing, people-specific processing, and other things that make the picture look overprocessed: the people are temporarily, literally, the wrong color in that lighting, and an AI model may be capable of knowing what color they "actually" are.

(That said, the main reason photos look overprocessed is that for some reason nobody on Earth ever implements sensible sharpening algorithms. They always use frequency-based ones that cause obvious white halos. Learn about warp-sharpen and median filters, people.)
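
A rough scipy sketch of the difference, with made-up parameters: unsharp masking adds back a residual against a Gaussian blur, which overshoots on both sides of a hard edge (the halos), while a median filter preserves the edge, so the residual doesn't ring:

    import numpy as np
    from scipy import ndimage

    def unsharp(img, sigma=2.0, amount=1.0):
        # classic frequency-based sharpen: overshoots near hard edges
        return img + amount * (img - ndimage.gaussian_filter(img, sigma))

    def median_sharpen(img, size=5, amount=1.0):
        # same structure, but the median keeps edges in place
        return img + amount * (img - ndimage.median_filter(img, size))

    edge = np.repeat([0.2, 0.8], 20)   # a hard step edge
    print(unsharp(edge).max())         # > 0.8: overshoot, i.e. a halo
    print(median_sharpen(edge).max())  # exactly 0.8: no halo on a clean step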


> very cyan but not very green and blue (no realistic example here)

Very high temperature blackbody radiation perhaps?
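
Planck's law makes it easy to check where a hot blackbody's visible energy sits (sketch, SI units; 20,000 K picked arbitrarily as "very hot"):

    import numpy as np

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

    def planck(wl_nm, T):
        # spectral radiance B(lambda, T), Planck's law
        lam = wl_nm * 1e-9
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

    wl = np.linspace(400, 700, 4)
    print(planck(wl, 20000.0) / planck(400.0, 20000.0))
    # falls off monotonically from 400 nm; the blue end carries several
    # times the power of the red end, hence the blue-white look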


It also becomes important for rendering glass and other highly refractive substances. Some conventional RGB rendering engines can mimic dispersion, but with spectral rendering you get it "for free."


One issue with Hero Wavelength sampling (mentioned in article) is that because IOR is wavelength-dependent, after a refraction event, you basically lose the non-hero wavelengths, so you get the colour noise again through refraction.


You would still need to provide extra logic/data to do dispersion/refraction curves for materials; it's hardly "for free".
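
Typically that's a per-material dispersion fit. A minimal sketch with a Cauchy model (the coefficients below are approximate textbook values for BK7-like crown glass, so illustrative only):

    def ior_cauchy(wl_nm, A=1.5046, B=0.00420):
        # Cauchy's equation: n(lambda) = A + B / lambda^2, lambda in micrometers
        wl_um = wl_nm / 1000.0
        return A + B / wl_um**2

    # the renderer then refracts with a different n per sampled wavelength
    print(ior_cauchy(400.0))  # ~1.531: blue bends more
    print(ior_cauchy(700.0))  # ~1.513: red bends less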


The only applications I'm aware of that currently do spectral rendering on the fly are painting apps.

I have one called Brushwork ( https://brushworkvr.com ) that upsamples RGB from 3 to a larger number of samples spread across visible light, mixes paint in the upsampled space, and then downsamples for rendering (the upsampling approach that app uses is from Scott Burns http://scottburns.us/color-science-projects/ ). FocalPaint on iOS does something similar ( https://apps.apple.com/us/app/focalpaint/id1532688085 ).

I'm happy that tech like this will open up more apps to use spectral rendering.
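
For anyone curious, a rough sketch of that upsample/mix/downsample pipeline (the upsample step here is a crude placeholder, not Burns' method, which solves for the smoothest spectrum reproducing the RGB; real pigment mixing is also more involved):

    import numpy as np

    wl = np.linspace(400, 700, 31)

    def upsample(rgb):
        # placeholder: one smooth bump per primary (NOT Burns' method)
        centers = (610, 545, 465)
        return sum(c * np.exp(-0.5 * ((wl - m) / 35) ** 2)
                   for c, m in zip(rgb, centers))

    def mix(s1, s2, t=0.5):
        # weighted geometric mean, a common choice for paint-like
        # (subtractive) mixing of reflectance spectra
        return s1 ** (1 - t) * s2 ** t

    blue, yellow = upsample((0.1, 0.1, 0.9)), upsample((0.9, 0.9, 0.1))
    mixed = mix(blue, yellow)
    print(wl[np.argmax(mixed)])  # peaks in the green, like real paint;
                                 # averaging the RGB triples would give gray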


Not a ton of info on it, but I ran into this graphic of a galaxy, all rendered with some form of spectral rendering. Thought it was super cool. https://bsky.app/profile/altunenes.bsky.social/post/3m5z6vr2...

Distantly reminds me of the decades I spent with galaxy as my xscreensaver. https://manpages.debian.org/jessie/xscreensaver-data/galaxy....


I was sure it must have been invented already! I'd been trying to find this idea without knowing it's called "spectral rendering", searching for "absorptive rendering" and similar instead, which led me to dead ends. The technique is very interesting and I would love to see it together with semi-transparent materials. I have been suspecting for some time that a method like that could allow cheap OIT out of the box?


I’m not sure carrying wavelength or spectral info changes anything with respect to order of transparency.

It seems like OIT is kind of a misnomer when people are talking about deferred compositing. Storing data and sorting later isn’t exactly order independent; you still have to compute the color contributions in depth order, since transparency is fundamentally non-commutative, right?

The main benefit of spectral transparency is what happens with multiple different transparent colors… you can get out a different color than you get when using RGB or any 3 fixed primaries while computing the transmission color.
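
Quick sanity check of the non-commutativity point ("over" with premultiplied alpha, made-up colors):

    import numpy as np

    def over(a, b):
        # premultiplied-alpha "over": a composited in front of b
        return a + (1.0 - a[3]) * b

    red  = np.array([0.5, 0.0, 0.0, 0.5])  # 50%-opaque red, premultiplied
    blue = np.array([0.0, 0.0, 0.5, 0.5])  # 50%-opaque blue

    print(over(red, blue))  # [0.5  0.   0.25 0.75]
    print(over(blue, red))  # [0.25 0.   0.5  0.75] -- order changes the result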


The main benefit I see is being able to more accurately represent different light sources. This applies to transmission but also reflectance.

sRGB and P3, which is what most displays show, by definition use the D65 illuminant, which approximates "midday sunlight in northern Europe." So when you render something indoors, you are either changing the RGB of the materials, changing the emissive RGB of the light source, or tonemapping the result, all of which can only approximate other light sources to some extent. Spectral rendering allows you to approximate these other light sources better.


Whether the benefit is light sources or transparency or reflectance depends on your goals and on what spectral data you use. The article’s right that anything with spiky spectral power distributions is where spectral rendering can help.

> sRGB and P3, what most displays show, by definition use the D65 illuminant

I feel like that’s a potentially confusing statement in this context since it has no bearing on what kind of lights you use when rendering, nor on how well spectral rendering vs 3-channel rendering represents colors. D65 whitepoint is used for normalization/calibration of those color spaces, and doesn’t say anything about your scene light sources nor affect their spectra.

I’ve written a spectral path tracer and find it somewhat hard to justify the extra complexity and cost most of the time, but there are definitely cases where it matters and it’s useful. Also, there’s probably more physically measured spectral data available now than when I was playing with it.

I’m sure you’re aware and this is what you meant, but it might be worth mentioning that it’s the interaction of multiple spectra that matters when doing spectral rendering. For example, it doesn’t do anything for the rendered color of a light source itself (when viewed directly); it only matters when the light is reflected off or transmitted through spectra that differ from the light source’s. That’s where wavelength sampling will give you a different result than a 3-channel approximation.


Conventional RGB path tracing already handles basic transparency; you don't need spectral rendering for that.


Not exactly what the parent poster was saying (I think?), but absorption and scattering coefficients for volume handling, together with the mean free path, are very wavelength-dependent, so using spectral rendering for that (and for hair as well, although that's normally handled via special BSDFs) generally models volume scattering more accurately (if you model the properties correctly).

Very helpful for things like skin, and light diffusion through skin with brute-force (i.e. Woodcock tracking) volume light transport.
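
A bare-bones sketch of that per wavelength (delta/Woodcock tracking; the extinction function here is made up, just stronger at short wavelengths, the way skin absorbs blue more than red):

    import numpy as np

    rng = np.random.default_rng(1)

    def sigma_t(x, wl_nm):
        # made-up heterogeneous extinction, stronger at short wavelengths
        return (0.5 + 0.5 * np.sin(x)) * (600.0 / wl_nm)

    def woodcock_distance(wl_nm, sigma_max=2.0, t_max=10.0):
        # step with the majorant, accept a real collision with
        # probability sigma_t / sigma_max, otherwise keep going
        t = 0.0
        while True:
            t -= np.log(1.0 - rng.random()) / sigma_max
            if t >= t_max:
                return None          # escaped the medium
            if rng.random() < sigma_t(t, wl_nm) / sigma_max:
                return t             # real collision at distance t

    # on average, 450 nm collides sooner than 650 nm (single samples are noisy)
    print(woodcock_distance(450.0), woodcock_distance(650.0))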


I might be misunderstanding parts of the comment above, although I think it aligns with what I had in mind. Here’s what I meant:

If a ray carries full spectral information, then a transparent material can be described by its absorption spectrum — similar to how elements absorb specific wavelengths of light, as shown here: https://science.nasa.gov/asset/webb/types-of-spectra-continu...

In that view, transparency is just wavelength-by-wavelength attenuation. Each material applies its own absorption/transmission function to the incoming spectrum. Because this is done pointwise in the spectral domain, the order doesn’t matter:

OUT = IN × T₁ × T₂ (or, equivalently in log space, adding optical densities: OD_OUT = OD_IN + OD₁ + OD₂).

So whether one material reduces the red by 50% first and another reduces the green by 50% second, or vice versa, doesn't change the result. Each wavelength is handled independently, making the operation order-independent.
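
Numerically (arbitrary spectra), the per-wavelength products commute exactly:

    import numpy as np

    wl = np.linspace(400, 700, 61)
    t1 = 0.5 + 0.5 * np.exp(-0.5 * ((wl - 450) / 30) ** 2)  # passes blue best
    t2 = 0.5 + 0.5 * np.exp(-0.5 * ((wl - 600) / 30) ** 2)  # passes red best
    light = np.ones_like(wl)                                # flat illuminant

    assert np.allclose(light * t1 * t2, light * t2 * t1)    # order-independent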



