I say the same thing about 3.3. See my comments with Jasper.
I actually consider dealing with buffers and shaders to be far nicer and clearer than dealing with the old 1.x style API but I suppose for someone already intimately familiar with it that might not be the case.
I understand if someone already has an existing codebase that uses the old stuff, but if you were starting a project from scratch it really isn't that hard to use 3.3+. And once you do it once, you never have to think about the setup process again, you already have your helper functions for loading shaders and other things. You don't even have to write your own, there are so many helper libraries out there, and things like GLM to give you every math related thing you'd ever need.
You're right. I consider 3.3 core and newer to be "Modern OpenGL". And the biggest reason I say 3.x is I know I will never support the geometry shader (or transform feedback though that came earlier).
Thanks! I put a lot of time into my initial draft/structure and I thought it turned out well so I appreciate the kudos. Still have to fix the inevitable typos though.
When you say that OpenGL is a mess are you including all of OpenGL? Would your rankings change if you picked say OpenGL 3.3 Core profile or 4.x+ core?
With the caveat that I only ever really learned 3.3 (not much reason for most 4.x features for me) and I've never used any of the other APIs, I would say that yes, the old OpenGL APIs were terrible. They were a poor mapping to how GPUs worked, especially as time went on, and having all the versions with very different ways of doing things kludged together made it a mess. However, I think what they did at the 3.3/4.0 transition, cutting away the old cruft, made a huge difference, and the result feels much cleaner and nicer to use.
So with that definition of OpenGL (ie 3.3/4.x core), do you still think it gives an inaccurate impression of how GPUs work?
Even if it does to some extent, I stand by my original position: an intro 3D graphics course should use OpenGL 3.3 (or newer) rather than anything else. That lets students use whatever OS they want, without bogging them down in Vulkan's annoying details for no benefit (or making them deal with MoltenVK on Macs; I'm not sure how seamless that is).
EDIT: I should add that I know you said you were not factoring in portability in your ranking, but I think you should. Windows has not been as dominant with college students (or young people in general) for a long time. I'm sure I don't need to tell you that Macs are hugely popular with college students, and there's a decent minority of Linux users too these days (it's certainly way more user friendly than it was when I was in college, having to deal with ndiswrapper for Broadcom wifi on my XPS m1530).
I think you can find a good enough mapping. But the core confusions are still there, you have to build a manual list of things to ignore:
1. Never use old-school glUniforms (use UBOs)
2. Never use the old glTexImage functions (use glTexStorage)
3. Always use the new DSA stuff (don't bother with the old binding stuff, except when they forgot to make DSA replacements)
4. Don't bother using render buffers, just use textures.
5. Make sure to use the new, new, new vertex buffer system (or just move your vertex fetching to pulling), VAOs are unfortunate
6. Still too easy to stall the main loop with accidental pipeline syncs (e.g. never call glReadPixels unless you have a PBO bound)
You still have an unfortunate thing in the case of the default FBO, which gives you a depth buffer on your scanout you probably don't want. Maybe there's a way to configure out of that; I forget if e.g. eglChooseConfig lets you opt out of the depth buffer, but I don't believe it's possible on wgl/glx.
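If I remember right, EGL_DEPTH_SIZE defaults to 0 and eglChooseConfig sorts it ascending, so requesting 0 should at least prefer depth-less configs. A sketch of the attribute list (whether a given driver actually offers such a config is another matter):

```c
/* EGL config attribute list requesting no depth buffer on the default
 * framebuffer. EGL_DEPTH_SIZE already defaults to 0, but stating it
 * makes the intent explicit. */
static const EGLint config_attribs[] = {
    EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_RED_SIZE,   8,
    EGL_GREEN_SIZE, 8,
    EGL_BLUE_SIZE,  8,
    EGL_DEPTH_SIZE, 0,   /* no depth on the scanout surface */
    EGL_NONE
};
```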
There are others I can think of, but you just have to be extra careful, and the number of concepts still obscures how a GPU works. Maybe I'll write more about this at some point.
I suspected you'd say something like this. I'm going to respond to each and then my overall response.
I think I used UBOs once, just to try them from a textbook example. They seemed like a pain for no gain for my use case.
I've never even heard of glTexStorage. Looking into it, it didn't come out till 4.2 (and glTextureStorage 4.5). Reading about it, I can't see the point if you never load anything other than level 0.
DSA stuff likewise didn't come out till 4.5.
I've rarely rendered to anything other than the default FBO but I can get behind just using textures when I need to rather than RBOs (I think that's what I did anyway). Never understood why they existed.
I don't know what the "new, new, new vertex buffer system" is but I assume it's not available until 4.5+
I'm not sure I've ever called glReadPixels. Maybe once a long time ago?
I'm not sure why you wouldn't want a depth buffer by default for 3D rendering.
So clearly what I consider the cutoff for Modern OpenGL is not new enough to match the GPU or maximize performance. I accept that. I think matching the GPU and/or maximizing performance is irrelevant and even detrimental to learning. In addition, targeting OpenGL 4.5 or newer will rule out people with older hardware. My desktop is from 2011 for example and I know plenty of people have even older hardware that still works fine.
A newbie graphics developer isn't going to be doing anything that remotely strains the GPU (even an iGPU). They're probably not even going to get to rendering to textures, let alone doing anything other than simple forward rendering. They're not going to be using more than 2 or 3 relatively simple shaders in a program with at most a handful of uniforms and inputs. They're not going to be using the geometry shader, let alone the tessellation shaders. That can come later, if they want to keep going. In an intro course, other than the basics of graphics, they'll probably be covering things tangential to actual graphics that help with functionality, like AABBs, various mesh data structures and their pros and cons, maybe something about splines or basic animation, things like that.
A year after my intro 3D graphics course (fall 2011 iirc), I took an "independent study" supervised by the same professor that we called "Advanced Shading", which consisted of 3 self-directed projects over the semester. They were basic shadow mapping, basic deferred rendering, and a poor attempt at simulating global illumination. I have never done any of those 3 things (or anything more advanced) since then. That should give an idea of where I'm coming from.
I think the problem is the confusion between what I consider Modern OpenGL (ie a broadly supported good abstraction level and API for learning), and what people consider Modern Graphics best/common practices. What's good and essential for professional graphics development is not the best place to start for learning the fundamentals. The comments elsewhere in this thread about how modern techniques are practically making triangles irrelevant is a good example.
Even if there are places where it's 95% or even 99% windows, I don't think we should force the 1-5% to change their OS or deal with VMs (if you can even get graphics passthrough working) or have to work in the school computer labs.
That logic doesn't really apply here, partly because it is not the students choosing what is taught, it's the professor/writer/creator.
Choosing to teach graphics using a cross platform API like OpenGL doesn't force anyone to deal with the inconveniences I listed before. Everyone can use what they already have. And they learned an API that will let them develop cross platform applications as an added bonus.
Also, if we're referring to a course, it's not like anyone is forcing them to take the course. If a Windows user doesn't want to learn OpenGL they don't have to sign up for a course that teaches it. 3D graphics is not a required course in any CS or related degree I've ever heard of. It's usually one of several choices for upper division electives.
If we're referring to creating learning material in general, a book or tutorial or something, again, why not use something that will work for the broadest cross section of potential learners. Creating new content teaching OpenGL doesn't remove the availability of educational content for DirectX or Metal etc.