
What is a G-Buffer, for non-game developers?


There are two types of rendering now in common use for games.

In forward rendering you allocate one RGB color buffer for the screen, then rasterize triangles to this buffer, directly producing final RGB color values at each pixel.

In deferred rendering instead of one RGB color buffer you allocate a set of screen-sized buffers to hold various attributes of your choosing for each pixel, which can include colors but also things like the surface normal, material type, roughness, velocity, etc. These buffers are collectively referred to as the G-buffer. When you rasterize triangles you fill in all the attributes for each pixel instead of just a final color. Then in a second full screen pass, for each pixel you read all the surface attributes you wrote earlier and combine them with other data such as the positions of nearby lights, do some calculations and output the final color.
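
To make that concrete, here is a rough CPU-side sketch of the second (lighting) pass. The G-buffer layout and all names are made up for illustration; a real engine does this in a shader and typically reconstructs position from the depth buffer instead of storing it:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Hypothetical per-pixel attributes stored in the G-buffer (illustrative layout).
    struct GBufferPixel {
        float position[3];  // world-space position
        float normal[3];    // world-space unit normal
        float albedo[3];    // surface color
    };

    struct PointLight {
        float position[3];
        float color[3];
    };

    // Second, full-screen pass: for each pixel, read the stored attributes and
    // accumulate lighting. Note that no triangle data is needed at this point.
    void deferredLightingPass(const std::vector<GBufferPixel>& gbuffer,
                              const std::vector<PointLight>& lights,
                              std::vector<float>& outRGB) {
        outRGB.assign(gbuffer.size() * 3, 0.0f);
        for (size_t i = 0; i < gbuffer.size(); ++i) {
            const GBufferPixel& px = gbuffer[i];
            for (const PointLight& l : lights) {
                float d[3] = {l.position[0] - px.position[0],
                              l.position[1] - px.position[1],
                              l.position[2] - px.position[2]};
                float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]) + 1e-6f;
                float ndotl = std::max(0.0f,
                    (px.normal[0]*d[0] + px.normal[1]*d[1] + px.normal[2]*d[2]) / len);
                float atten = 1.0f / (len * len);  // toy inverse-square falloff
                for (int c = 0; c < 3; ++c)
                    outRGB[i * 3 + c] += px.albedo[c] * l.color[c] * ndotl * atten;
            }
        }
    }

The inner loop only ever touches per-pixel attributes written during rasterization, which is what "deferred" buys you.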

By deferring lighting calculations and separating them from the geometry rasterization, deferred rendering can offer more flexibility and performance in some cases. However, it is tough to use with MSAA and/or transparent objects, and it can be a memory bandwidth hog. Modern rendering is all about tradeoffs.


3D objects are drawn using 3D triangles projected and interpolated onto the 2D screen (a 2D array of colors).

Classically, you draw the whole triangle, start to finish in one go. Interpolate the triangle, look up any colors from textures, do the lighting math, write it all out. Simple and straightforward.

For complicated reasons, it often actually works out better for current games on current hardware (games/hardware of the past decade) to defer the lighting math and do it in a separate pass. So, interpolate the triangle, look up the textures, and write all of the inputs to the lighting math (other than the lights themselves) to a multi-layer 2D array of parameters (surface color, normal, roughness, etc.). Then later determine which lights affect which pixels and accumulate the lighting using only the 2D parameter arrays, without needing the triangle information.

The multi-layer 2D array of material parameters is called the "G-Buffer". I think it originally stood for "Geometry Buffer", but the name has taken on a life of its own.
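
A minimal sketch of what those parameter layers could look like, and of what the geometry pass writes into them per covered pixel (the choice of layers and all names are just illustrative):

    #include <vector>

    // One screen-sized layer per material parameter.
    struct GBuffer {
        int width, height;
        std::vector<float> albedo;     // 3 floats per pixel
        std::vector<float> normal;     // 3 floats per pixel
        std::vector<float> roughness;  // 1 float per pixel

        GBuffer(int w, int h)
            : width(w), height(h),
              albedo(w * h * 3), normal(w * h * 3), roughness(w * h) {}
    };

    // Called per covered pixel while rasterizing a triangle: instead of computing
    // a final color, just store the interpolated surface attributes for later.
    void writeSample(GBuffer& gb, int x, int y,
                     const float albedo[3], const float normal[3], float roughness) {
        int i = y * gb.width + x;
        for (int c = 0; c < 3; ++c) {
            gb.albedo[i * 3 + c] = albedo[c];
            gb.normal[i * 3 + c] = normal[c];
        }
        gb.roughness[i] = roughness;
    }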


It stands for geometry buffer. It looks like none of the replies led with that. If you render positions into a pixel buffer and normals into another pixel buffer, you can shade the pixels and avoid shading lots of fragments that will be hidden. It gets more complicated obviously (material IDs, reflection roughness, etc.) but those are the basics.


> If you render positions into a pixel buffer and normals into another pixel buffer, you can shade the pixels and avoid shading lots of fragments that will be hidden.

Avoiding unnecessary shading is only one small reason to use a G-buffer - you could just do a depth pre-pass for that. The bigger advantage of deferred shading is that it gives you a lot more flexibility with lighting, letting you decouple lighting complexity from scene complexity. This is especially useful with many tiny lights that only affect a small portion of the screen.
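
A rough illustration of that decoupling, assuming each light's screen-space bounding rectangle has already been computed elsewhere (all names are made up): the lighting cost scales with the pixels each light covers, not with the triangle count.

    #include <algorithm>
    #include <vector>

    struct ScreenRect { int x0, y0, x1, y1; };  // inclusive pixel bounds

    struct CulledLight {
        ScreenRect bounds;  // assumed precomputed by projecting the light's radius
        float color[3];
    };

    // outRGB holds width*height RGB triplets, sized and cleared by the caller.
    // A real renderer would read the G-buffer at each pixel and run the full
    // lighting math; here we just add the light color to keep the loop visible.
    void accumulateLights(const std::vector<CulledLight>& lights,
                          int width, int height, std::vector<float>& outRGB) {
        for (const CulledLight& l : lights) {
            int x0 = std::max(l.bounds.x0, 0), x1 = std::min(l.bounds.x1, width - 1);
            int y0 = std::max(l.bounds.y0, 0), y1 = std::min(l.bounds.y1, height - 1);
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                    for (int c = 0; c < 3; ++c)
                        outRGB[(y * width + x) * 3 + c] += l.color[c];
        }
    }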


I didn't say it was the only reason, but it is a big reason.

Even if you use a depth map to exit early on fragments, you are still going through all your geometry twice: once to get your depth map and again to do all your shading.
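
A toy model of that point, with "fragments" reduced to (pixel, depth) pairs and the expensive shading stood in by a counter:

    #include <algorithm>
    #include <limits>
    #include <vector>

    struct Fragment { int pixel; float depth; };  // one rasterized sample

    int shadedWithPrepass(const std::vector<Fragment>& frags, int numPixels) {
        std::vector<float> depthBuffer(numPixels, std::numeric_limits<float>::max());
        // Pass 1 over all geometry: depth only.
        for (const Fragment& f : frags)
            depthBuffer[f.pixel] = std::min(depthBuffer[f.pixel], f.depth);
        // Pass 2 over all geometry again: shade only fragments that survived.
        int shaded = 0;
        for (const Fragment& f : frags)
            if (f.depth == depthBuffer[f.pixel]) ++shaded;
        return shaded;  // expensive shading runs only for visible fragments
    }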

The history of RenderMan (prman) went a similar way, though not through using actual g-buffers. Originally it would shade and then hide, both because of memory constraints and because displacement of micropolygons happened in the surface shader rather than in a separate displacement shader. Eventually the architecture shifted to try to do as much hiding as possible and to split displacement out into a separate shader type.

Another thing that can be done with g-buffers is using mip-mapping and filtering of the g-buffer to do some sampling or effects at lower resolutions. Pixels can also be combined into distributions instead of just averages, though I'm not sure how much this is done currently.
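
As a small sketch of the downsampling half of that idea: building a half-resolution level of a hypothetical normal layer by averaging 2x2 blocks and renormalizing (width and height assumed even):

    #include <cmath>
    #include <vector>

    // normals: width*height unit normals, 3 floats per pixel.
    std::vector<float> downsampleNormals(const std::vector<float>& normals,
                                         int width, int height) {
        int w2 = width / 2, h2 = height / 2;
        std::vector<float> out(w2 * h2 * 3);
        for (int y = 0; y < h2; ++y) {
            for (int x = 0; x < w2; ++x) {
                float n[3] = {0.f, 0.f, 0.f};
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx) {
                        int src = ((2 * y + dy) * width + (2 * x + dx)) * 3;
                        for (int c = 0; c < 3; ++c) n[c] += normals[src + c];
                    }
                // Renormalize the averaged normal so the coarse level stays unit length.
                float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]) + 1e-6f;
                for (int c = 0; c < 3; ++c) out[(y * w2 + x) * 3 + c] = n[c] / len;
            }
        }
        return out;
    }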


It is a component of a deferred shading rendering pipeline.

"A screen space representation of geometry and material information, generated by an intermediate rendering pass in deferred shading rendering pipelines." https://en.wikipedia.org/wiki/Deferred_shading

As an aside, I worked in visual effects for film for years and never heard the term g-buffer. I am sure we had some other term that we thought was common, but was only used inside our studio.


AOVs, render passes, whatever. What they're doing in the final pass in games is fancy comp with all of that. We've played with re-lighting in comp as well. You ought to be familiar with that.



