Ray Tracer Sandbox in Vulkan (github.com/zielon)
120 points by wyldfire on Nov 6, 2020 | hide | past | favorite | 51 comments


Perfect timing! I just got a new machine with a Quadro RTX4000 card, and I've been looking for example code to see what it can do.

It still blows my mind that Mandelbrot fractals (and now ray tracing) that would take 10s of minutes to render at 640x480 on my first computer, now render at 60fps on a 4k display with real-time panning/zooming.


It is true that hardware has become very fast.

But it's also amazing how much ray-tracing algorithms have improved over time. Both hardware and software advances make real-time ray tracing possible.

A very nice 'guide' to the papers behind those software improvements is the YouTube channel Two Minute Papers.


Don't Mandelbrot fractals take longer the more you zoom in, because more iterations are needed to distinguish detail? (IIRC from when I played with implementing them)


Could you "shortcut" it by taking advantage of the fractal nature?


Not really. They’re self similar, not self identical, so determining detail takes increasingly more computation as you zoom in.
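To make that concrete, here is a minimal sketch of the standard escape-time algorithm (the test points are chosen just for illustration; the -0.75 + 0.05i point sits near the boundary between the main cardioid and the period-2 bulb):

```python
def escape_iterations(c, max_iter=1000):
    """Iterations until |z| exceeds the escape radius 2 (or max_iter)."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter  # treated as inside the set

far = escape_iterations(complex(1.0, 1.0))      # well outside: escapes fast
near = escape_iterations(complex(-0.75, 0.05))  # near the boundary: slow
assert near > far  # zooming toward the boundary needs a higher iteration cap
```

Points near the boundary take dozens or thousands of iterations to classify, which is why deep zooms need a higher iteration cap even on fast hardware.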


Off-topic, is there any mid-level 3D API that I can plug vertices into directly? Everything that I see is either super-low level (Vulkan, OpenGL) or super-high level (Unity).


Like it or not, the answer is probably OpenGL. If you go with version 3.3 (a good choice in my opinion), it's really only a few function calls before you're submitting vertices, and it's going to be supported on any platform. Use GLFW and you don't need to worry about platform stuff (creating windows is a pain otherwise).

Counting the lines of a renderer I have lying around, it's 300 lines (including plenty of whitespace). That's fairly full-featured: it correctly positions the window in the center of the monitor, handles errors properly, takes mouse and keyboard input, sets up the view/projection matrices, uses shaders, etc.


And just for you, I pulled some very ancient code of mine (2012) and stripped it down enough so that it renders a single triangle (and you can move around), nothing more.

https://github.com/jleahy/gldemo

That's a full OpenGL setup, with all the fiddly bits taken care of. Obviously it's a bit over the top (you don't need vertex buffer region management or SIMD matmul for one triangle), but that's the upper bound.

You could port that to any language with OpenGL bindings.


Agreed. And despite being "deprecated", the "fixed-function" OpenGL 1.x API is still implemented on all major platforms (for now?), and it's even higher-level (although your graphics may look somewhat dated). Plus you can mix/match API levels (I think), so you can start with 1.x and move some things to 3.x/4.x-style OpenGL when you need shaders. I wish Vulkan weren't such a huge, steep learning cliff in comparison.


It'll be there forever (compatibility), but the problem is there's not a lot of overlap between 1.x and 3.x/4.x. I think people are better off sticking to the 3.x core profile. Shaders just aren't that hard, and fixed-function is a really outdated way of thinking.

Generally I'd also say stick with 3.x (rather than 4.x) unless you need tessellation shaders.


I believe with the latest versions of the APIs, the fixed function pipeline stuff is now emulated in the programmable pipeline, but the two don't mesh well, so you end up with particularly bad performance. I've had to find and install the old DX9 runtime to get some older games to run at reasonable frame rate.


Not Android and iOS (edit to add: and web!) if you consider those major platforms for your purposes. There you need OpenGL ES which doesn’t include the fixed-function pipeline.


Shameless plug, check out sokol_gfx.h (and maybe sokol_gl.h, which is a simple GL 1.x style API on top of sokol_gfx.h):

https://github.com/floooh/sokol


wgpu and bgfx might be good options. Creative coding frameworks might be worth looking at too (Processing, Cinder, Three.js, etc.), as might lightweight "engines" like Sokol, Oryol, Raylib, and Macroquad. There are a lot of options; they're just not well known. Someone wanting the perfect high-level-but-not-too-low-level graphics API, where "high" and "low" are kind of arbitrary (no engine, little or no boilerplate, most work done in code instead of an editor/GUI, etc.), is a common theme.


I'm in kind of the same boat. I grew very dissatisfied with Unity earlier this year and began looking for alternatives. I explored writing my own rendering engine, given that my requirements aren't very high.

Back in January, I discovered this really cool .NET Standard 2.0 library for abstracting Direct3D 11, Vulkan, Metal, OpenGL 3, and OpenGL ES 3 called Veldrid: https://github.com/mellinoe/veldrid

The documentation is pretty good, for its own parts, and it has a fair number of examples for setting up things like different windowing libraries. I was able to put together a set of code for a single demo running in Windows Forms, WPF, and Xamarin Forms fairly easily. It also has support for SDL2.

Currently, I'm going through a Codevember exercise where I teach myself WebGL from scratch.

From what I've learned so far, most of the graphics APIs these days work in very similar ways. And Veldrid smooths over the few differences (especially in the case of OpenGL). WebGL does, too, in that it presents an OpenGL front-end, but the back-end can be implemented in different graphics APIs (for example, on Windows it's actually implemented in D3D 11 through ANGLE).

In general, you need to create a Shader Program--which is a combination of multiple Shaders of different types, e.g. Vertex, Compute, and Fragment--construct one or more Buffers into which you will load data (generally in one big ol' smash of data), and configure how ranges within those Buffers map to attribute locations within your Shader Program.
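The "big ol' smash of data" is usually an interleaved byte buffer plus stride/offset bookkeeping. A small API-agnostic sketch in Python (the layout here, 12 bytes of float32 position plus 4 bytes of RGBA, is an assumed example, not any particular API's requirement):

```python
import struct

# Assumed interleaved layout: 3 float32 position + 4 uint8 RGBA per vertex.
STRIDE = 3 * 4 + 4                 # 16 bytes per vertex
POS_OFFSET, COLOR_OFFSET = 0, 12   # what you'd hand to vertexAttribPointer

triangle = [
    ((0.0,  0.5, 0.0), (255, 0,   0,   255)),
    ((-0.5, -0.5, 0.0), (0,  255, 0,   255)),
    ((0.5,  -0.5, 0.0), (0,  0,   255, 255)),
]

buf = bytearray()
for pos, rgba in triangle:
    buf += struct.pack("<3f4B", *pos, *rgba)  # little-endian, tightly packed

assert len(buf) == len(triangle) * STRIDE     # one upload, one buffer
```

The stride and offsets are exactly what you then describe to the API so the shader knows where each attribute lives inside the buffer.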

However, I've generally found that there is little documentation anywhere on how to architect a data pipeline to use all of the various GPU resources efficiently. Everyone talks about "use as few draw calls as possible". But they don't really tell you how to achieve that.

My feeling is that a Shader Program is loosely analogous to a Material in something like Three.js or Unity. I'm guessing the ideal approach is to take all of the Meshes in Three.js, or all of the Renderers in Unity, that share the same Material, and combine their Geometries into a single block of memory. And that's where Uber Shaders come in, as an attempt to also combine all of the different ways in which you'd want to render different Materials into a single Shader Program.
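That batching idea can be sketched independent of any graphics API: group geometry by material so each unique material costs one draw call (the names and data here are made up for illustration):

```python
from collections import defaultdict

# Hypothetical scene: (material, vertices) pairs, a stand-in for
# Three.js Meshes or Unity Renderers.
meshes = [
    ("brick", [(0, 0, 0), (1, 0, 0), (0, 1, 0)]),
    ("metal", [(2, 0, 0), (3, 0, 0), (2, 1, 0)]),
    ("brick", [(0, 0, 1), (1, 0, 1), (0, 1, 1)]),
]

batches = defaultdict(list)
for material, verts in meshes:
    batches[material].extend(verts)  # merge geometry sharing a material

# Three meshes collapse to two draw calls: one per unique material.
assert len(batches) == 2
```

This is the core of "use as few draw calls as possible": the draw-call count scales with unique materials rather than with mesh count.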


Combining meshes will hugely reduce CPU usage (assuming you are talking about hundreds of meshes), but you may end up sending lots of offscreen geometry to the GPU. Depending on your scene, this may or may not be an issue. Some engines will do this behind the scenes or use indirect rendering (which gives the speed benefit of combining meshes without you manually combining them).

You should also be careful of monolithic Uber Shaders, especially if they branch, as shader divergence can really kill GPU performance on some hardware. There is no one solution, just the one that best fits your use case, hardware, and coding-time budget.


Thanks. My requirements are pretty low, but I need to run on some pretty low-spec hardware, with VR multiview. Right now, everything is in WebXR, so perf is about half of native. And if I'm on a non-Oculus standalone headset, the only browser available is Firefox Reality, which has some perf issues of its own. There should be enough perf to run everything I need in this setup, especially if I can get some of my loading and texture-transform ops running in Workers (again, with Firefox being the pain point here).

But with dynamic content loading over the 'net and teleconferencing and corporate authentication and general Unity update fuckery, it just got to be too much to manage. A lot of my users can't have Facebook accounts, so I need to support as many other headsets as possible, and Unity's tooling for that is way too constantly-in-an-ever-shifting-landscape-of-broken.

With OpenXR starting to really take off, and upcoming .NET 5 promising much more robust multiplatform support, I might be able to go back to .NET with Veldrid. Though I'd still have the teleconferencing and authentication bits being a lot harder to pull off outside of the browser, and having a native app would mean we would have to deploy through the Oculus store, which I definitely don't want to do.


I'm actually working on a library like that for Common Lisp, but it's still very much a work in progress.

My goal is to have a framework where I can experiment with different parts of OpenGL by sub-classing something and then override a method or two, and have it "just work" with everything else in the framework (shaders, viewers, animation, user interaction, etc.)

https://github.com/jl2/newgl/tree/buffer-refactor


For quick 3D visualization, I've used three.js and a minimal HTML page. It has loaders for many common 3D file formats so you start with one of the samples at https://threejs.org/examples/ and hack around until it loads the data that you want.


You may be looking for a 3D rendering engine. There are lots of choices depending on your target platform, preferred programming language, license preferences, requirements for real time vs animation or image production, etc.


https://github.com/google/filament and ThreeJS are probably the closest


You can get started with the oldskool immediate mode OpenGL, which is basically just direct emission of vertices triangle by triangle.


Webgl is honestly a pretty good mid ground between them.


WebGPU may become this


There are two GIFs that are very nice, but extremely heavy: almost 80 MB together. You might not want to load those on mobile data.


Why would they use GIF instead of HTML5 video? Could have significantly reduced file size.


You can't embed videos in GitHub's markdown.


I had no idea someone wrote a Vulkan backend for ImGUI. Very useful!

https://github.com/Zielon/PBRVulkan/blob/master/PBRVulkan/Ra...


ImGUI has backends for every graphics API in regular use, and some that might be considered deprecated (DX9), including game-engine abstraction layers in Unreal and Unity and proprietary console APIs (not public, of course). Here's the full list of public graphics APIs and windowing systems that are part of the official release: https://github.com/ocornut/imgui/tree/master/backends. Maybe with the exception of stb_image, it's one of the most integrated open-source libraries in games and graphics.


It's a mistake to think Dear ImGUI has "backends". Dear ImGUI is a few files that generate a vertex list. It then has a bunch of "examples" that show how to render that list with various APIs. Any slightly competent graphics programmer could take that vertex list and render it in any API. Those examples are not Dear ImGUI. They are just examples.

This is the code

    for each command list
      upload vertex data
      upload index data
      set scissor
      bind texture
      draw
That's it! Adapt that to any API, whatever you're using.

Because it's an example, and because so many inexperienced programmers use it, they assume it's some official part of the library. It's not.


This is some high-quality HN pedantry. Sure, you're right that they're minimal, but they're also literally in a folder called "backends". Also, if you're active here you'll know a lot of people might be interested in using ImGUI but not fit in the category of "slightly competent graphics programmer".


Absolutely not. Like many things, you can misunderstand the point. The point of Dear ImGUI is to easily stick it into what you already have. The point is it has no dependencies on any graphics API. You can stick it in Unreal if you want. You just need to be able to draw a list of vertices. That's one of its major features. To gloss over that and think "it only works with the stuff in this folder that was renamed from examples to backends" is missing what the library is and isn't. Those are examples of how to make a backend. They aren't Dear ImGUI.


You both seem to be right, based on this commit message:

> Moving backends code from examples/ to backends/… 24 days ago


Moving them from examples to backends doesn't change anything. That's likely just a sign that "examples" is now examples of using the API, not examples of integrating with some GPU API.

One of the main reasons/goals/points of Dear ImGUI is being able to integrate it into existing projects. It's deliberately API-agnostic. It generates a very simple command list that you can then trivially render yourself in your own engine.

Everything else, all the platform specific code in the repo is entirely there to help people understand how to use it or to get started.

It's a mistake to think otherwise because it incorrectly limits Dear ImGUI's perceived usefulness. If you believe you have to take a backend or use one of the APIs provided, then you'll mistakenly believe it would be hard to put into your existing game. If you understand what it's really about, then it's trivial to put it in almost anything. That is by design.


Hi gregg :)

I think it's a bit of both. From the point of view of game programmers and custom engine users/creators, it's important to know that Dear ImGui will be easy to integrate in whatever odd-custom tech they may have, and I am on the watch to hold that guarantee around, and to keep communicating it.

At the same time, probably 90% of homebrew custom engines not built in a professional context are built over commonly known technology: the Win32 API, SDL/GLFW, DX11, etc. And it makes sense to provide the ready-to-use glue to ensure you can integrate Dear ImGui in most apps with <20 lines of code. It's even more meaningful to have that possibility when you are bootstrapping a new project or are wholly unfamiliar with programming or graphics technology.

The feature scope of Dear ImGui has grown meaningfully over the years, and although the only "required" elements are still mouse pos/buttons, time, and rendering those vertices with scissoring, in reality there are many other desirable things which add up to more work to provide (clipboard, keyboard, gamepads, mouse cursor shape, IME hooks, DPI queries, not to mention multi-viewports).

You can see my reasoning for renaming examples/ to backends/ here: https://github.com/ocornut/imgui/issues/3513. If anything, I found the tendency of my-weekend-engine people to make their first foray into Dear ImGui this way often unproductive and detrimental to the perception of using Dear ImGui. They tend to struggle for 4 hours to get a feature-incomplete backend done instead of first plugging in a fully working existing backend with <20 lines and THEN considering whether they really need to rewrite it. I believe that did more bad than good to Dear ImGui, because they end up using feature-incomplete versions. The years spent fixing those backends to have them work everywhere have also helped grow the confidence to call them backends.

I believe this is mostly a communication issue: we should keep hammering that backends are reasonably easy to rewrite and keep documenting that process.

Ref: https://github.com/ocornut/imgui/blob/master/docs/BACKENDS.m...


I had different feelings about the examples. Seeing example integrations into so many rendering APIs reassured me that it would be trivial to add Dear ImGui into my own projects. Without the examples, you have to take it on faith that they designed their API to be easily integrated.

Many libraries claim to be cross platform and turn out to be hugely painful to use on anything outside of a few intended targets. I still have nightmares about Scaleform!


The only outliers who haven't adopted Vulkan today are MS, Sony, and Apple. MS and Apple have very lock-in-minded people among those who decide this.

Not sure what stops Sony from doing it.

Things went better with OpenXR.


Add several embedded OEMs and the whole CAD/CAM and scientific industry, which are focused on OpenGL/DirectX and have no patience for the low-level idiosyncrasies of Vulkan and its ever-growing pile of extensions, which is why Khronos decided to try to advocate Vulkan to them via ANARI.

Let's see if it's as successful as OpenCL 2.x, which had to reboot back to OpenCL 1.2 for version 3.0.

OpenXR doesn't define the 3D APIs; it is just device management. One can use it with whatever API one feels like.

And then there is Hollywood switching to CUDA-based render farms, like OTOY, because Vulkan Compute doesn't deliver what they need.


Everyone says Nintendo supports Vulkan. But that wasn't actually the case outside of marketing materials in the past, and nobody has confirmed that this has changed since.


The Switch went through Vulkan 1.0 conformance testing, and then again for Vulkan 1.1 and 1.2

https://www.khronos.org/conformance/adopters/conformant-prod...

There's also a platform integration extension attributed to NVIDIA and Nintendo

https://www.khronos.org/registry/vulkan/specs/1.2-extensions...

Seems unlikely they would go to that trouble if Vulkan weren't actually exposed to developers


So who does that leave? Linux, Android, and Stadia? And maybe Nintendo? There are about as many platforms not supporting it as there are supporting it.


Windows 7 through Windows 10 also have Vulkan support.

You can use Vulkan on Mac and iOS via MoltenVK.


Everything except these outliers. I have no doubt lock-in will die out in time there as well.


You've failed to mention what those are. It seems to me that "these outliers" are in fact the rule.

As a rendering engineer, I have serious doubts that those vendors will move away from their proprietary APIs: they provide significant advantages, and from anecdotes around the industry, I think this state is preferred by engineers in AAA development, which I think is where many of the forces in the industry come from.



They already had a low-level graphics API for PS4, GNM, before Mantle. In fact, EA DICE's Johan Andersson, who did most of the groundwork for Mantle, said he got his ideas from working with console APIs. At the time he was working on the Mantle spec, that would probably have been the Xbox 360 or PS3 APIs, or an early form of the PS4 API. The Xbox One didn't launch with a low-level API, but they provided one some time before 2014, according to a Metro Redux dev.


AMD also makes chips for Xbox, so that isn't really a reason for them. I haven't seen Sony saying they want Vulkan to succeed. Like MS with DX12 and Apple with Metal, they are so far sticking with their GNM.


Sony already has several low-level APIs that are much easier to use than Vulkan and predate it.


Are you talking about GNM and GNMX [0]?

[0] https://en.wikipedia.org/wiki/PlayStation_4_system_software


Yep.

It is kind of DirectX 12-like, with a shading language similar to HLSL.

Also, the Switch is always referred to as an example of Vulkan support (they also do OpenGL 4.6), but what really matters for native titles on the platform is NVN.


And I'm not sure that it actually has Vulkan support. I can confirm that OpenGL is at least technically supported.



