> It's either there's a standard... or there are no standards, and each go its merry way
In my experience, OpenGL walks the middle line. There is certainly a core set of functions that (almost always, discounting buggy drivers) work. But the core set doesn't span all the critical functions you need for what modern game players would consider a performant game engine (such as hardware synchronization to eliminate "tearing"). So game engines have to factor in the GL extensions, which puts us back in the "up to third parties to follow up" world. It's a frustrating environment to work in; you can never really predict whether your code will succeed or fail on any given hardware configuration, and you're stuck playing whack-a-mole with bug reports you lack the hardware to reproduce.
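To make that concrete, here's roughly what the extension dance looks like just to get vsync on Windows. This is a minimal sketch, not production code; strictly speaking, WGL extensions are advertised through wglGetExtensionsStringEXT (itself an extension), though many drivers also list them in GL_EXTENSIONS, which is exactly the kind of ambiguity I mean.

    /* Sketch: the core API gives you no vsync control, so probe for the
       WGL_EXT_swap_control extension and fall back gracefully if the
       driver doesn't expose it. Assumes a current GL context on Windows;
       the GLX path is the same dance with GLX_SGI_swap_control. */
    #include <windows.h>
    #include <GL/gl.h>
    #include <string.h>

    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

    static int enable_vsync(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (!ext || !strstr(ext, "WGL_EXT_swap_control"))
            return 0;   /* the driver never promised it */

        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (!wglSwapIntervalEXT)
            return 0;   /* promised it, but the entry point is missing anyway */

        return wglSwapIntervalEXT(1) ? 1 : 0;   /* sync buffer swaps to vblank */
    }

And that's the good case, where the extension is either there or it isn't; the bad case is the one where it's "there" but doesn't behave.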
> As for support of the latest features on my old Geforce 7600, I guess I should accept the fact that they cannot be implemented efficiently
I wish it were that simple. That, I could deal with.
I've worked with a card that had a bug in the GLSL compiler. A particular sum simply wasn't compiled, and we had to work around the problem by multiplying by 1.000000000000000001 to force the compiler to generate the bytecode for the whole calculation (this trick is something one of our veteran engineers "just knew would work," so we got lucky). There is functionally no chance of that software bug ever getting patched; card vendors don't care about older versions of their technology, and even if a driver version were out there that patched the bug, you can't trust machine owners to keep their drivers up-to-date.
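For the curious, the shape of the workaround was roughly this (reconstructed from memory; the shader below is a made-up stand-in, and only the multiply is the point):

    /* Hypothetical fragment shader source, as it would be fed to
       glShaderSource(); identifiers are invented for illustration. */
    static const char *frag_src =
        "uniform vec3 base_color;\n"
        "uniform vec3 light_term;\n"
        "void main() {\n"
        "    /* without the multiply, this vendor's compiler silently\n"
        "       dropped the addition */\n"
        "    vec3 c = (base_color + light_term) * 1.000000000000000001;\n"
        "    gl_FragColor = vec4(c, 1.0);\n"
        "}\n";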
More frustratingly, as I mentioned elsewhere, I've worked with cards that implement things that should be high-performance (like stages of the shader pipeline) in software, just to claim they have the capability. Since OpenGL gives you no way in the API to query whether a feature is implemented in a reasonably-performant way, you either do some clever tricks to suss this stuff out (F.E.A.R. has a test mode where it runs a camera through a scene in the game and quietly tunes graphics pipeline features based upon actual framerate) or gather bug reports, blacklist certain card configurations, and keep going.
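If you go the measuring route, the core of it is embarrassingly low-tech. A rough sketch (the scene callback, frame count, and 30 fps threshold are placeholders; this is not how F.E.A.R. actually implements it):

    #include <GL/gl.h>
    #include <time.h>

    static double now_seconds(void)
    {
        struct timespec ts;
        timespec_get(&ts, TIME_UTC);     /* C11 wall clock */
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Render a fixed test scene with the candidate feature enabled and
       report the average frame time, since the API won't tell you
       whether the path is hardware-accelerated. */
    static double avg_frame_seconds(void (*draw_test_scene)(void), int frames)
    {
        glFinish();                      /* drain any pending GPU work first */
        double start = now_seconds();
        for (int i = 0; i < frames; ++i) {
            draw_test_scene();
            glFinish();                  /* block until the GPU finishes, so we time real work */
        }
        return (now_seconds() - start) / frames;
    }

    /* e.g. back off if 60 test frames average worse than ~30 fps:
       if (avg_frame_seconds(draw_with_soft_shadows, 60) > 1.0 / 30.0)
           soft_shadows_enabled = 0;                                   */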
Old cards not being as powerful as new cards I can deal with; if we could simply say "Your card must be X or better to play," we'd be fine. New cards with bugs, and under-performant cards that lie about their performance to clear some marketing hurdle, are the maddening corners of the ecosystem.
> There is functionally no chance of that software bug ever getting patched; card vendors don't care about older versions of their technology,
Well, is that an OpenGL issue? Wouldn't you get that very same problem with D3D?
> reasonably-performant way
I don't understand. How can you objectively define such a thing? Doesn't it depend on the workload? If you're pushing 3 tris per frame, any feature can be labeled as reasonably performant, but if you have 300M, can any card these days maintain a reasonable framerate even on the most basic settings? I'm exaggerating on purpose: some apps require so little work at each stage of rendering that they could reasonably afford any extra pass, even a software-implemented one. And in other cases (which might be the majority), it won't cut it. I don't see how there could be an objective way of deciding whether a feature is sufficiently performant. Your example is telling: an application as complex as F.E.A.R. should clearly run its own benchmarks (or keep a database) to decide which features can be enabled without hurting performance. And even then, players have different perceptions of what constitutes playability.
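(By "keep a database" I mean something as crude as matching the renderer string against a table of known-bad configurations; a toy illustration, with every entry and feature bit below invented:)

    #include <GL/gl.h>
    #include <string.h>

    enum { FEATURE_SOFT_SHADOWS = 1 << 0, FEATURE_FP_BLENDING = 1 << 1 };

    /* Made-up blacklist: substrings of glGetString(GL_RENDERER) mapped to
       feature bits that should be forced off on that hardware. */
    static const struct { const char *renderer_substr; unsigned disabled; } blacklist[] = {
        { "HypotheticalGPU 7600", FEATURE_SOFT_SHADOWS },
        { "SomeIGP 9xx",          FEATURE_FP_BLENDING  },
    };

    static unsigned disabled_features(void)
    {
        const char *renderer = (const char *)glGetString(GL_RENDERER);
        unsigned mask = 0;
        if (!renderer)
            return mask;
        for (size_t i = 0; i < sizeof blacklist / sizeof blacklist[0]; ++i)
            if (strstr(renderer, blacklist[i].renderer_substr))
                mask |= blacklist[i].disabled;
        return mask;
    }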
I agree with you: multiple standards, multiple vendors, multiple products; the fallout is compatibility struggles at worst and varying performance at best. But that's a point D3D and OpenGL have in common, not a divergence. Am I missing something?
> There is functionally no chance of that software bug ever getting patched; card vendors don't care about older versions of their technology,
Isn't it interesting that more developers aren't pushing for open source drivers? Try finding a GLSL compiler bug in Mesa and asking in #dri-devel on freenode or filing it on bugs.freedesktop.org. You'll most likely get a quick and helpful reply.