PortableGL: An implementation of OpenGL 3.x-ish in clean C (github.com/rswinkle)
185 points by kk6mrp on Dec 31, 2021 | 52 comments


Not entirely related to the subject of OpenGL, but I really like how the author has decided to lay out this project. It's pretty hard to beat the convenience of a single header (or single header/single source) distribution for C libraries, but library development gets progressively harder as the project grows and more code piles into the (usually hard-to-navigate) header file. Here, the author does their development with multiple files as one normally would, but when a new version is released they run the generate_gl_h[1] script that concatenates everything into a .h file for distribution. Simple yet flexible! This is also how SQLite[2] distributes its builds. It's a pattern that I'm using myself in some unreleased projects.

[1] https://github.com/rswinkle/PortableGL/blob/master/src/gener... [2] https://www.sqlite.org/amalgamation.html
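
For anyone unfamiliar with the stb-style single-header convention this builds on: you include the header everywhere, but define an implementation macro in exactly one translation unit so the function bodies get compiled exactly once. Roughly like this (the macro name below is illustrative; check the project's README for the real one):

    /* main.c -- the one file that compiles the implementation */
    #define PORTABLEGL_IMPLEMENTATION   /* illustrative name; see the README */
    #include "portablegl.h"

    /* every other .c file just gets the declarations */
    #include "portablegl.h"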


I recently discovered this pattern in cgltf (https://github.com/jkuhlmann/cgltf), and I agree, it makes deployment so much nicer.


I just love the "Dependencies: None" bits so much! I prefer this as opposed to 900 dependencies.


The original stb libraries did the same. A single header doesn't mean you have to develop in one file, although you totally can.


Just because I was curious, I did a little digging: A cursory look at Sean Barrett's website on the Internet Archive lists Dec. 2006 as the first date where stb_image was published (but looking at the file, versions go back much further than that)

https://web.archive.org/web/20061205030342/http://nothings.o...

That means for 15+ years, he's been iterating on the same file. It certainly works as a development style, but I personally feel that it's easier to edit a set of smaller files.


Original forum thread suggesting the first release was in Nov 2006: http://web.archive.org/web/20120419172714/https://mollyrocke...


So reading this:

> [Vulkan] really is overkill for learning 3D graphics...using modern OpenGL to introduce all the standard concepts, vertices, triangles, textures, shaders, fragments/pixels, the transformation pipeline etc. first is much better than trying to teach them Vulkan and graphics at the same time...

...got me wondering. I haven't really kept up with modern graphics programming, but my (possibly incorrect) understanding was that not only is Vulkan lower-level than OpenGL, it was also motivated by a desire to escape lots of questionable design decisions that OpenGL was stuck with. So, with the above quote in mind, is it the case that:

* Vulkan is good (?) but too low-level for most users.

* OpenGL is a good environment to learn or do nuts & bolts 3D programming in - i.e. it's the right level of abstraction for that - but is kind of a mess.

Is that correct? Is there anything that splits the difference? How's Direct3D these days? Metal? Has anyone built a library on top of Vulkan, targeted roughly around the abstraction level of OpenGL, but with a better design?


As a professional graphics programmer at a well-known game studio, who has taught and mentored a lot of people: Direct3D 11 is quite good. D3D12 is a bit more like Vulkan, but not too far in that direction.

In order of recommendation for learners, just rating each API on its own merits without considering other factors like portability: D3D11, Metal, WebGPU, D3D12, OpenGL and Vulkan.

OpenGL and Vulkan tie for last place, for similar yet opposite reasons: both APIs are a mess, and the debugging and tooling story for each is still underdeveloped. But while OpenGL's backwards API design often gives users the wrong impression of how GPUs work, Vulkan is much more annoying, requiring users to cross their T's and dot their I's while still not giving good guidance on how they should structure their rendering code. I am also eternally annoyed at things like VkFramebuffer, which should never have existed (yes, I do know it is not required in the latest version).

I would rank WebGPU higher (disclaimer: I contribute to the WebGPU spec) since the API is a great intermediate, except WGSL is too new and too underbaked for me to seriously recommend it right now. The native implementations Dawn and wgpu are committed to supporting SPIR-V input, which is how I would recommend using it right now.

Happy to answer questions or elaborate on this.


Your Windows bias is showing. :)

Personally, I find the Vulkan API to be one of the best designed APIs ever, and the debugging tools on desktop are okay. On Android, I'd say the Vulkan tools are, in fact, quite excellent. I can't compare them to iOS though.

The problem with Vulkan and DirectX 12 (which I find to be so close as to be indistinguishable) has nothing to do with their APIs. The main problem is that "modern" graphics is only tangentially related to rendering triangles--and rendering triangles is normally what beginners really want to do. A lot of beginners also want to render 2D paths--and graphics APIs for that are horrifically bad.

"Modern" graphics is mostly about creating a bunch of low-resolution "geometry buffers" and then scanning every single pixel on the screen only once while building the pixel up from some mathematical combination of those lower resolution buffers. Yes, there are triangles involved at a Platonic level, but many of the triangles are smaller than a pixel. See the Unreal Engine 5 demos: https://www.youtube.com/watch?v=qC5KtatMcUw

It's reached the point where "modern" graphics almost doesn't even want a triangle renderer--it just wants pure access to the massively parallel memory and computation pipeline on the GPU and the ability to smash the results into a framebuffer. Vulkan and DX12 reflect this, so they don't really mesh well with beginners.

(I'm being a bit glib, but there really is a massive chasm between "modern" game rendering techniques and the kind of 2D/3D that GUI/CAD/Application developers want to use.)


> Your Windows bias is showing. :)

Pretty much anyone who works in 3D graphics acknowledges that Direct3D is head and shoulders above OpenGL in terms of ease of use, tooling, and support from GPU vendors, and has been for many years now. Even Carmack has come around to this perspective. Really the only reason to use OpenGL is to target Linux, Android, and formerly macOS -- and since game vendors don't really want to do that, now you know why many games and engines support Direct3D exclusively.

The thing you have to understand about Microsoft is that "Developers! Developers! Developers!" isn't just a meme to them. They really put the developer experience front and center and it shows. It shows in Direct3D, it showed in Internet Explorer back in the day, and it shows in things like Visual Studio Code today. Sometimes the Microsoft stuff is simply better.

> "Modern" graphics is mostly about creating a bunch of low-resolution "geometry buffers" and then scanning every single pixel on the screen only once while building the pixel up from some mathematical combination of those lower resolution buffers.

Pretty amazing how "modern" graphics is a fancier version of Atari 7800 graphics! The 7800 didn't have tiles and sprites as such; rather it had per-scanline display lists specifying locations in RAM where graphic bitmaps were stored; the graphics chip pulled the pixel colors from the memory pointed to by the appropriate display list entry as part of the scanout process.


> Your Windows bias is showing. :)

In a past life, I actually worked on the Linux graphics stack full-time. I'd say my recommendations are based on years of experience and also mentorship.

> On Android, I'd say the Vulkan tools are, in fact, quite excellent.

It's possible my knowledge is out of date, but the last time I tried to debug Vulkan on Android, the official Google Android tool was GAPID. It was in beta and it crashed with an error message when I tried to use it on our game.

It seems that Google has deprecated it (of course) and replaced it with this: https://gpuinspector.dev/ which says "Coming soon: Take a capture of a single frame to step through and profile each individual draw call.". It doesn't yet look like it's anywhere near the level of tooling provided by RenderDoc, the Xcode tools, PIX, or Razor, but I might be wrong. I admittedly haven't worked on Android in a few years, so I'd love to know updates from the tooling perspective here.

> The main problem is that "modern" graphics is only tangentially related to rendering triangles--and rendering triangles is normally what beginners really want to do.

I'm not sure I categorically agree with that. Vulkan, D3D12, and such all have ample support for drawing triangles, DrawIndexed is still a cornerstone of all the APIs. I'm, of course, very familiar with the latest advancements with Nanite and visibility buffers and all that, but these aren't things that I'd say the modern APIs are explicitly designed for. Nanite infamously uses a persistent compute shader kernel for its culling workers that is currently undefined behavior (but works on all devices, and probably will continue working for eternity).

D3D12's validation layers are much better managed than the Vulkan ones, generally tend to output more helpful messages, and aren't as slow at doing resource tracking.

Vulkan has a lot of dead weight in its API, and mistakes that made their way into the spec. Subpasses are over-complicated and do not provide much benefit, even on tilers. Pipeline derivatives are worthless (Qualcomm, who pushed hard for them, found they provided a 2% speedup to pipeline compilation, and only when used in a way that only Qualcomm could support; caching removed the rest of the gains). Push descriptors should basically never be used (they're limited to a single dynamic descriptor, and slow on most IHVs).

These are all things that I've seen beginners stumble into, and I have to spend time explaining why these features just aren't helpful. It's hard for a beginner to know the best course of action, and which 5 of the 7 features should be used when trying to, say, bind resources to a given draw call. I wish the spec were more active in trying to mark bad ideas as such, but the nature of multi-IHV participation means you won't get that kind of filter.


If at any point you had the motivation to write a quick post or comment detailing your Vulkan recommendations for beginners, I and many others would definitely appreciate it. I've been bitten by exactly the issues you mentioned, and had to figure things out by myself, which was honestly pretty frustrating. I'd be extremely curious to know what other things I could have avoided, since that would make my work a lot more enjoyable!


This is an interesting read from their site: https://blog.mecheye.net/2020/06/modern-graphics-apis-1-intr...


I've had a few blog posts in the works for a little while (as someone else already found!), but they've been on the backburner for a bit as my "main side project" (heh) has switched to my YouTube channel instead.


When you say that OpenGL is a mess are you including all of OpenGL? Would your rankings change if you picked say OpenGL 3.3 Core profile or 4.x+ core?

With the caveat that I only ever really learned 3.3 (not much reason for most 4.x features in my case) and I've never used any of the other APIs, I would say that yes, the old OpenGL APIs were terrible. They were a poor mapping to how GPUs worked, especially as time went on, and having all the versions with very different ways of doing things kludged together made it a mess. However, I think what they did at the 3.3/4.0 transition, cutting away the old cruft, made a huge difference, and it feels much cleaner and nicer to use.

So with that definition of OpenGL (ie 3.3/4.x core), do you still think it gives an inaccurate impression of how GPUs work?

Even if it does to some extent, I stand by my original position. An intro 3D graphics course should use OpenGL 3.3 (or newer) rather than anything else. This lets students use whatever OS they want without bogging them down in Vulkan's annoying details for no real benefit (or dealing with MoltenVK on Macs; I'm not sure how seamless that is).

EDIT: I should add that I know you said you were not factoring portability into your ranking, but I think you should. Windows has not been as dominant with college students (or young people in general) for a long time. I'm sure I don't need to tell you that Macs are hugely popular with college students, and there is a decent minority of Linux users too these days (it's certainly way more user-friendly than it was when I was in college, having to deal with ndiswrapper for Broadcom wifi on my XPS m1530).


I think you can find a good enough mapping. But the core confusions are still there; you have to build a manual list of things to ignore:

1. Never use old-school glUniforms (use UBOs)

2. Never use the old glTexImage functions (use glTexStorage)

3. Always use the new DSA stuff; don't bother with the old binding stuff, except where they forgot to make DSA replacements (see the sketch after this list)

4. Don't bother using render buffers, just use textures.

5. Make sure to use the new, new, new vertex buffer system (or just move your vertex fetching to pulling); VAOs are unfortunate

6. It's still too easy to accidentally stall the main loop (e.g. never call glReadPixels unless you have a PBO bound)
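
To make 2 and 3 concrete, this is roughly what the preferred path looks like for a plain 2D texture (untested sketch, GL 4.5 for the DSA version; width/height/pixels assumed to exist):

    /* GL 4.5 DSA + immutable storage: create and fill a texture without ever binding it */
    GLuint tex;
    glCreateTextures(GL_TEXTURE_2D, 1, &tex);
    glTextureStorage2D(tex, 1, GL_RGBA8, width, height);   /* 1 mip level */
    glTextureSubImage2D(tex, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* default expects mipmaps */

    /* vs. the older bind-to-edit path it replaces */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);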

You still have an unfortunate thing in the case of the default FBO, which gives you a depth buffer on your scanout you probably don't want. Maybe there's a way to configure out of that; I forget if e.g. eglChooseConfig lets you opt out of the depth buffer, but I don't believe it's possible on WGL/GLX.

There are others I can think of, but you just have to be extra careful, and the number of concepts still obscures how a GPU works. Maybe I'll write more about this at some point.


I suspected you'd say something like this. I'm going to respond to each and then my overall response.

I think I used UBOs once, just to try them from a textbook example. They seemed like a pain for no gain for my use case.

I've never even heard of glTexStorage. Looking into it, it didn't come out till 4.2 (and glTextureStorage 4.5). Reading about it, I can't see the point if you never load anything other than level 0.

DSA stuff likewise didn't come out till 4.5.

I've rarely rendered to anything other than the default FBO but I can get behind just using textures when I need to rather than RBOs (I think that's what I did anyway). Never understood why they existed.

I don't know what the "new, new, new vertex buffer system" is but I assume it's not available until 4.5+

I'm not sure I've ever called glReadPixels. Maybe once a long time ago?

I'm not sure why you wouldn't want a depth buffer by default for 3D rendering.

So clearly what I consider the cutoff for Modern OpenGL is not new enough to match the GPU or maximize performance. I accept that. I think matching the GPU and/or maximizing performance is irrelevant and even detrimental to learning. In addition, targeting OpenGL 4.5 or newer will rule out people with older hardware. My desktop is from 2011 for example and I know plenty of people have even older hardware that still works fine.

A newbie graphics developer isn't going to be doing anything that remotely strains the GPU (even an iGPU). They're probably not even going to get to rendering to textures, let alone doing anything other than simple forward rendering. They're not going to be using more than 2 or 3 relatively simple shaders in a program with at most a handful of uniforms and inputs. They're not going to be using the geometry shader, let alone the tessellation shaders. That can come later, if they want to keep going. In an intro course, other than the basics of graphics, they'll probably be covering things tangential to actual graphics that help with functionality, like AABBs, various mesh data structures and their pros and cons, maybe something about splines or basic animation, things like that.

A year after my intro 3D graphics course (fall 2011 iirc), I took an "independent study" supervised by the same professor that we called "Advanced Shading" that consisted of 3 self directed projects over the semester. They were basic shadow mapping, basic deferred rendering, and a poor attempt at simulating global illumination. I have never done any of those 3 things (or anything more advanced) since then. That should give an idea of where I'm coming from.

I think the problem is the confusion between what I consider Modern OpenGL (i.e. a broadly supported good abstraction level and API for learning), and what people consider Modern Graphics best/common practices. What's good and essential for professional graphics development is not the best place to start for learning the fundamentals. The comments elsewhere in this thread about how modern techniques are practically making triangles irrelevant are a good example.

EDIT: clarification


> Windows has not been as dominant with college students (or young people in general) for a long time.

That depends pretty much on where one is located on the globe.


Even if there are places where it's 95% or even 99% windows, I don't think we should force the 1-5% to change their OS or deal with VMs (if you can even get graphics passthrough working) or have to work in the school computer labs.


It is called democracy: the 1% have to abide by what the 99% want.


That logic doesn't really apply here, partly because it is not the students who choose what is taught; it's the professor/writer/creator.

Choosing to teach graphics using a cross-platform API like OpenGL doesn't force anyone to deal with the inconveniences I listed before. Everyone can use what they already have. And they learn an API that will let them develop cross-platform applications as an added bonus.

Also, if we're referring to a course, it's not like anyone is forcing them to take the course. If a Windows user doesn't want to learn OpenGL they don't have to sign up for a course that teaches it. 3D graphics is not a required course in any CS or related degree I've ever heard of. It's usually one of several choices for upper division electives.

If we're referring to creating learning material in general, a book or tutorial or something, again, why not use something that will work for the broadest cross-section of potential learners? Creating new content teaching OpenGL doesn't remove the availability of educational content for DirectX or Metal etc.


> Has anyone built a library on top of Vulkan, targeted roughly around the abstraction level of OpenGL, but with a better design?

WebGPU more or less fills this gap. It's built primarily on top of Vulkan/Metal/DirectX 12, and targeted as a WebGL2 replacement, but there's also major interest (and right now, better support) around using it natively for games and other programs. There are bindings for Rust, C++, C, and JavaScript (either through the browser, or using Deno for native).

Here's an excellent tutorial that uses Rust, although the API is more or less identical across languages: https://sotrh.github.io/learn-wgpu/


https://docs.imgtec.com/Introduction_to_APIs/Vulkan_Migratin...

This is a very nice comparison for people looking to explore Vulkan...even versus native frameworks like Metal (on OSX).

P.S. I don't think OpenGL is supported on OSX anymore. Vulkan, however, has MoltenVK, which works on OSX: https://github.com/KhronosGroup/MoltenVK

MoltenVK is officially supported and is a big part of the Vulkan Portability initiative: https://www.vulkan.org/portability

MoltenVK is fully production ready. Multiple games use MoltenVK to run on OSX using Vulkan. Including the big one - DOTA 2 (https://www.gamingonlinux.com/2021/09/dota-2-to-drop-opengl-...). The bug tracker for DOTA2/Vulkan is here - https://github.com/ValveSoftware/Dota-2-Vulkan

https://www.youtube.com/watch?v=xDGQcjqpYqI (M1 MacBooks running MoltenVK games)


OpenGL is still supported on macOS, though its use is discouraged. (This is what Apple means when it says it's "deprecated".)


Apple's OpenGL implementation has always been mediocre even when it was technically supported.


Last time I had to use it, it was stuck on an old version and missing features, which ruled out compute shaders.


OpenGL on macOS has been stuck on 4.1 + some 4.2 extensions since 2012 IIRC.


IMHO check out Three.js for learning and beginners; it's a very nice abstraction and scene graph that you can just start hacking on right in your browser on any device.


Fabrice Bellard released an OpenGL 1.x/2.x software renderer 20 years ago [0]. There seem to be others who have taken to updating it more recently [1][2]. It is immediate mode, but that is how we used to do graphics more than 15 years ago. Might help others to read it if you're interested.

[0] https://bellard.org/TinyGL/

[1] https://github.com/C-Chads/tinygl

[2] https://github.com/ska80/tinygl


> I took a 400 level 3D Graphics course in college in fall 2010 months after OpenGL 3.3/4.0 was released. It was taught based on the original Red Book using OpenGL 1.1. Fortunately, the professor let me use OpenGL 3.3 as long as I met the assignment requirements. Sadly, college graphics professors still teach 1.x and 2.x OpenGL today in 2020

This one hits home. I did my first (and only) graphics programming in college in 2019 and it was indeed OpenGL 1.1. To make matters even more stupid, it was a course where the OpenGL part was only half the course; the other half was about image processing using MATLAB...

I don't know what it is about graphics programming. I've tried many times to learn it, but it's so different from any of the other programming I've done that it feels totally impenetrable.


> This one hits home. I did my first (and only) graphics programming in college in 2019 and it was indeed OpenGL 1.1. To make matters even more stupid, it was a course where the OpenGL part was only half the course; the other half was about image processing using MATLAB...

Why is this so relatable? First (and only) graphics course in 2019, the fixed-function (1.x) pipeline, down to memorising individual API calls for the (paper) exam. To their credit, the professor admitted it was out of date: "You're never going to use these in real work, it's only here to teach you rotations with matrices etc."

I wonder what graphics programming looks like these days. DL anti-aliasing?


> Using modern OpenGL to introduce all the standard concepts ... is much better than trying to teach them Vulkan ... and obviously better than teaching OpenGL API's that are decades old.

I disagree on teaching old APIs. This is a pet peeve of mine, and I thought the author of a software rasterizer would agree with me.

GPU APIs are some of the only APIs where you have to pay to upgrade. Sure, Vulkan has been "out" for 4 years. But I haven't bought any hardware that supports Vulkan. My last purchase was a used laptop with an iGPU. I buy hardware as a last resort when I absolutely need it; I especially don't buy hardware for the APIs when my old hardware has enough GFLOPS.

I like that programming is a cheap hobby. I don't see the point in anteing up every few years. Per-core CPU power has plateaued, and I can add more storage to old systems easily. I just don't want to buy new hardware until I see the GPU space settle down for a few years. And when will that happen? Vulkan 2? Vulkan 3?

How many more perfectly functional TFLOPS of GPU chips will be branded e-waste before the APIs are stable? I don't like the implication among some gamedevs that your programming skill depends on how much tribute you've paid to NVidia. I don't make bad games because I use GLES 2, I make bad games because I'm a bad gamedev.

GLES 2 is a sweet spot in the tradeoff between compatibility and features. It dates to about 2007, so even a kid with a cheap used Android phone could theoretically write GLES 2 code for it. It doesn't have geometry shaders, but with vertex and fragment shaders you can still learn things like environment mapping, gamma-correct per-pixel lighting, hardware skinning, post-processing effects, etc.

It just works, almost everywhere, and I don't see the point in writing off old hardware as trash before the end of its natural life.


Vertex skinning is a pain in GLES 2 because of uniform size limits, since there are no UBOs. You can only have so many bones before you overflow those limits and then basically the only option I can see is to start packing your bone matrices into a texture (and since it's GLES 2 it has to be RGBA 8-bit), which is pretty awful.

GLES 2 also has no unbounded loops in shaders, which makes some postprocessing effects like blurs a big pain to write (you need a separate shader for every blur radius). Granted, most drivers don't enforce this, since it's so obnoxious.


TBH, UBOs are quite useless in GLES3/WebGL2 (the guaranteed size is too small, and dynamically updating without sync issues isn't trivial because the under-the-hood details seem to be different in each GL implementation). If you really need to render lots of skinned characters, you'd use a single texture anyway for uploading the joint data of all characters (e.g. this demo renders hundreds of independently animated characters in a single drawcall in WebGL1):

https://floooh.github.io/sokol-html5/ozz-skin-sapp.html


Why is putting bone matrices in a texture awful? Seems great to me.


Because you're packing unbounded floating point values into an 8-bit RGBA unorm texture. Yes, it can be done. Dealing with the issues is annoying, though.
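
One common workaround, just to show the kind of contortion involved: clamp each value to a known range, quantize it to 16 bits, split the result across two 8-bit channels, and undo that math in the vertex shader. A rough, untested sketch of the CPU side:

    #include <stdint.h>

    /* Quantize a float in [min, max] into two 8-bit channels (hi/lo bytes of a
       16-bit fixed-point value). The shader reconstructs it as
       (hi*256 + lo)/65535 * (max - min) + min. Precision is limited, which is
       a big part of why this is such a pain. */
    static void pack_float_rg8(float v, float min, float max, uint8_t out[2])
    {
        float t = (v - min) / (max - min);
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        uint32_t q = (uint32_t)(t * 65535.0f + 0.5f);
        out[0] = (uint8_t)(q >> 8);    /* high byte, e.g. R channel */
        out[1] = (uint8_t)(q & 0xFF);  /* low byte, e.g. G channel */
    }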


I'm not so sure you disagree that much with the author as you think.

The APIs they're arguing against teaching are OpenGL 1 and 2. OpenGL 2.0 dates to 2004. OpenGL 1.0 dates to 1992.

OpenGL 3.0 was released in 2008. OpenGL ES 2.0 was released in 2007. This rasteriser is modeled on OpenGL 3.x. So your recommendation and what they consider [the start of] "modern OpenGL" are about a year apart.


IMHO the OpenGL 1.x API style still has a place in many areas where rendering performance doesn't matter much.

The matrix stack and begin/end functions are a very convenient way to get some 2D or 3D triangles and lines on screen (vs having to deal with buffers and shaders).
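
For example, a single colored triangle is just this (from memory, so treat it as a sketch):

    /* GL 1.x immediate mode: no buffer objects, no shaders, no vertex layout setup */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();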


I say the same thing about 3.3. See my comments with Jasper.

I actually consider dealing with buffers and shaders to be far nicer and clearer than dealing with the old 1.x style API, but I suppose for someone already intimately familiar with it that might not be the case.

I understand if someone already has an existing codebase that uses the old stuff, but if you were starting a project from scratch it really isn't that hard to use 3.3+. And once you do it once, you never have to think about the setup process again; you already have your helper functions for loading shaders and other things. You don't even have to write your own, there are so many helper libraries out there, and things like GLM to give you every math-related thing you'd ever need.


You're right. I consider 3.3 core and newer to be "Modern OpenGL". And the biggest reason I say 3.x is that I know I will never support the geometry shader (or transform feedback, though that came earlier).


This is pretty cool, I've looked for libraries like this myself in the past. Sometimes I've needed to render a bunch of simple 3D graphics off-screen, without any real-time or high-throughput requirements (mostly for visualizing 3D stuff in robotics/computer vision). Setting up GPU-based pipelines for this is possible, but can be kind of kludgy and annoying. Mesa3D can do opengl software rasterization, but it's also a nontrivial dependency. I've ended up writing crappy rasterizers in the past just to avoid all that hassle. A single-header solution for this that's more up-to-date than TinyGL is welcome.


Aside about that README file: it's laid out better than plenty of research papers I've been reading lately, and the professors who ostensibly wrote those papers should know better. Kudos just for that.


Thanks! I put a lot of time into my initial draft/structure and I thought it turned out well so I appreciate the kudos. Still have to fix the inevitable typos though.


That's pretty nice!

Any plans for GLES 3.0? Something that would also allow targeting WASM.


If you want GLES 3.0, why not use ANGLE? It works on top of OpenGL, Vulkan, Metal, and DirectX 11.

https://chromium.googlesource.com/angle/angle

It's also fairly well tested given that it's used in Safari, Firefox, Edge, and Chrome


ANGLE is different; I don't want any of Vulkan/Metal/DirectX.

I want OpenGL.

I don't need bloat or to carry what I don't need.

It just adds useless complexity.

Also, building ANGLE is weird.

It is bloat-driven culture, and I am against that.


You say you don't want bloat, but OpenGL, by definition, is bloat. Then you apparently want a wrapper on top of it. Ok?!?!?


Isn't OpenGL 3 not recommended anymore?


I'd consider OpenGL 3.3 today's lowest common denominator, if you are after backwards compatibility. It removed the intermediate mode pipeline and introduced the Core profile, making it a good candidate for the lowest thing to target. In OpenGL 2.1 you can pull in essential features like shader4 and UBOs via extensions, but 3.3 made it core. New ways to interact with OpenGL like DSA are generally available via extensions as well. It's "not recommended" mainly because Apple is the odd one out in removing support for it.


OpenGL 3.3 is arguably not the "lowest common denominator" if you add in Android, iOS, iPadOS, AppleTV, Raspberry Pi, and the web.


Q: What is intermediate mode? Is it an accidental "autocorrect" of immediate mode or an actual thing?


Ohh, yeah! Supposed to be immediate mode. I swear I typed it correctly O.o



