Real-time dreamy Cloudscapes with Volumetric Raymarching (maximeheckel.com)
124 points by drux on Oct 31, 2023 | hide | past | favorite | 11 comments



Great job explaining the whole process! I've been building some similar stuff recently for my procedural space-exploration side project (https://www.threads.net/@mrsharpoblunto/post/CufzeNxt9Ol). I was planning to do a dev blog post writing up a lot of the details, but yours covers most of the tricks :)

A couple of extra things I ended up doing were:

1) Using a lower-res texture to modulate the high-res raymarched density. This gives you control over the overall macro shape of the clouds and lets you transition between LODs better (i.e. as you move from ground level up to space you can lerp between the raymarched renderer and just rendering the low-res 2D texture without any jarring transition).

2) Using some atmospheric simulation to colorize the clouds for sunrise/sunset. To make this performant I had to build some lookup tables for the atmospheric density at given angles of the sun.

3) Altering the step size of the raymarch based on density. I took this from the Horizon Zero Dawn developers: they do a coarse raymarch and, as soon as they get a dense enough sample, step back and switch to a finer step size until the density hits zero again.
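Point 3 above can be sketched in a few lines. This is just a 1D toy in JavaScript, not the actual HZD or shader code: `density` is a hypothetical stand-in for the cloud density field, and the step sizes and zero-run threshold are made-up tuning values.

```javascript
// Hypothetical 1D density field: a triangular cloud band peaking at t = 5
function density(t) {
  return Math.max(0, 1 - Math.abs(t - 5));
}

// Coarse raymarch; on the first dense sample, step back and resample at
// fine resolution until several empty samples in a row are seen again.
function march(maxDist, coarseStep, fineStep) {
  let accumulated = 0;
  let t = 0;
  let step = coarseStep;
  let zeroRun = 0;
  while (t < maxDist) {
    const d = density(t);
    if (step === coarseStep && d > 0) {
      // Dense sample found: back up and switch to the fine step size
      t = Math.max(0, t - coarseStep + fineStep);
      step = fineStep;
      zeroRun = 0;
      continue;
    }
    accumulated += d * step;
    if (step === fineStep) {
      zeroRun = d === 0 ? zeroRun + 1 : 0;
      // Density has been zero for a while: revert to coarse stepping
      if (zeroRun >= 4) { step = coarseStep; zeroRun = 0; }
    }
    t += step;
  }
  return accumulated;
}
```

The known tradeoff is that a cloud thinner than the coarse step can be skipped entirely, which is why the coarse step size still has to be chosen relative to the smallest feature you care about.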


Very nice tutorial. If I can make one suggestion, it's to write the shaders in GLSL files (i.e.: .vs, .fs) and then just read the text value from them; that way you can get your editor to highlight the GLSL syntax, which is pretty helpful when dealing with complex shaders.

This is not, by any means, the best way of doing it, but I felt it made writing GLSL code a bit easier.

https://github.com/victorqribeiro/3Dengine/blob/master/js/Sh...


> .vs, .fs

Is that like a Unity or Unreal standard? The thing I'm familiar with is a single .glsl with a void fragment() and void vertex() etc. in it


When you’re writing graphics engines (at least in my experience with OpenGL) you tend to write the vertex and fragment shaders in different files and link them when you actually need to render something; that way you can use combinations of them. The .vs or .fs extension tells you at a glance which one it is.


While I was trying to learn computer graphics I came across those extensions. I find them easy to understand, because you know what is in each file, and your editor can understand the syntax, making it easier to spot mistakes like writing `1` instead of `1.` when the program expects a float.


The hand-tuned magic numbers are disconcerting[1]. Couldn't the spectral distribution be derived from real-world cloud data?

1. yes I know it's common practice.


This was the best post on Hacker News in 2023.

- Technical

- Beautiful

- In-depth

- Well written

- With passion


- Interactive

- Works well on mobile devices


I know of raycasting, raytracing, and now raymarching. Are these just attempts to find unique names for “doing something by projecting rays” or do they in some way I can’t see effectively and uniquely describe each technique?


"raytracing" is quite an overloaded term these days unfortunately, but conventionally:

"raycasting" is sending a virtual ray out from a start position to find the next solid surface (ignoring volumes/volumestacks for the moment). (Occlusion/shadow rays can just check for occlusion without finding the closest intersection).

"raytracing" is doing multiple chained "raycasting" events: either "traditional" Whitted raytracing, where you bounce off reflections and recurse through transparent materials, or pathtracing, where you use BSDFs to pick outgoing directions for integration. Essentially, after one "raycast" event you do something with the result, and then based on that result you do another "raycast" event to find the next intersection in the scene, starting from the new position you just found.

"raymarching" depends somewhat on whether you're doing Signed Distance Field intersection or volumetric integration, but as an approximate description: you repeatedly run the intersection test ("raycasting") from multiple "virtual" 3D positions in scene space to evaluate the presence of something that can't necessarily be evaluated exactly with a single intersection: i.e. a fractal's limit surface, or a heterogeneous (different values per voxel) volume.
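To make the SDF flavour concrete, here's a toy sphere-tracing loop in JavaScript (the scene, step cap, and epsilon are arbitrary illustration values): each iteration evaluates the distance field at the current point and advances the ray by exactly that distance, which is safe because nothing can be closer than the SDF value.

```javascript
// Signed distance from point p to a unit sphere at the origin
const sphereSDF = (p) => Math.hypot(p[0], p[1], p[2]) - 1;

// March a ray from `origin` along unit vector `dir`, stepping by the SDF
// value each iteration (sphere tracing). Returns the hit distance or null.
function raymarch(origin, dir, sdf, maxSteps = 128, epsilon = 1e-5) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [
      origin[0] + t * dir[0],
      origin[1] + t * dir[1],
      origin[2] + t * dir[2],
    ];
    const d = sdf(p);
    if (d < epsilon) return t; // close enough to the surface: a hit
    t += d;                    // safe to advance by the SDF value
    if (t > 100) return null;  // ray escaped the scene
  }
  return null;
}
```

For volumetric clouds the loop looks similar, but instead of stopping at a surface you take fixed (or adaptive) steps and accumulate density and lighting along the ray.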


Beautiful. Nice work Maxime.



