I strongly disagree that this is "the right way". I think that the platform provides low level primitives that are _designed_ to have abstractions built upon them.
Doing it like this has the potential to be the most performant, but only in the same way that writing your programs directly in assembly is potentially performant.
I also don't find the source code particularly readable; it contains lots of magic numbers and very imperative code. I would personally find it a lot more readable if it were written in some sort of declarative way using a library, even if I had to look at a GitHub repo instead of using view-source.
> but it does so in the same way as writing your programs directly in assembly
> contains lots of magic numbers and very imperative code
Well, we really don't know if the code was written in this form by hand, do we?
It could have been compiled into this, to use your words, "assembly with magic numbers and imperative code", from a much more elegant form. We may be seeing this form only because it is what browsers understand.
I am not saying it was compiled, just speculating that seeing pure WebGL does not mean it was pure WebGL to begin with.
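To make the speculation concrete, here is a hedged, generic sketch (not taken from the linked page; the scene shape, names, and call strings are all made up) of how a library could lower a declarative scene description into the flat, imperative call sequence you'd see in view-source:

```javascript
// A hypothetical declarative scene description an author might write.
const scene = {
  clearColor: [0, 0, 0, 1],
  meshes: [
    { name: "cube", vertexCount: 36 },
    { name: "sphere", vertexCount: 960 },
  ],
};

// "Compile" the declarative form into an ordered list of imperative
// WebGL-style calls (represented here as strings for illustration).
function compile(scene) {
  const calls = [`gl.clearColor(${scene.clearColor.join(", ")})`];
  for (const mesh of scene.meshes) {
    calls.push(`gl.drawArrays(gl.TRIANGLES, 0, ${mesh.vertexCount})`);
  }
  return calls;
}

console.log(compile(scene));
```

The output is pure imperative calls with inlined constants, even though the input was declarative; you can't tell from the output alone which form the author actually wrote.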
When there's physics, graphics, and mathematics, there are magic numbers: results of formulas that only need to be computed once, material properties, or nature's constants.
Also, nature and graphics work like an imperative parallel machine, so the code mirrors that.
This is not written deliberately this way. Code comes out like this when you strip away all the libraries, fluff, and other less-related stuff.
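As a generic illustration of that point (the constant and function here are my own example, not from the linked page): many "magic numbers" in graphics code are just formulas evaluated once and baked in.

```javascript
// 0.017453292519943295 looks like a magic number, but it is simply
// Math.PI / 180, the degrees-to-radians factor, precomputed so a hot
// per-vertex loop doesn't redo the division every call.
const DEG_TO_RAD = 0.017453292519943295;

function toRadians(degrees) {
  return degrees * DEG_TO_RAD;
}
```

Once the named constant is inlined by hand (or by a build step), the reader only sees the opaque literal.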
I also write a scientific application, and yes, This is the way.
Abstraction is the only thing that makes any of our advancements possible. Not even the simplest of math theses could be proven without a “framework” of relevant lemmas, nor could you write even a single hello world without the layers upon layers of abstractions built carefully over the decades. Sure, there is also bad abstraction, but the problem is the bad part, not the concept itself.
Without abstractions you wouldn’t be able to read text stored on a remote computer, with accompanying style information, displayed the same on both of our devices, with embedded 3D graphics doing the same thing on vastly different devices, be it a top-of-the-line GPU or a simple low-end phone. Is that not abstraction?
Well, if abstractions were peer reviewed and put through the same rigour as mathematical proofs, that would be a whole different topic.
The equivalent would be a mathematical services company that created "free" abstraction packages which required you to rewrite all your math, away from scientific community standards, to fit their abstractions, and that also made money on consulting and selling books. And the big benefit of it all is really that they only abstracted away writing summaries of your papers, which is actually the easiest part and quite irrelevant to your research.
But it is not math: we only have empirical evidence, and not even much of that.
Who is to tell whether the OSI model is ideal? More than likely it is not, but we can’t measure these things up front, and there is an insane cost associated with changing it. Then again, what is the alternative? We can’t manage complexity any other way, and essential complexity can’t be reduced.
I mostly mean the heap of stuff people often throw at problems. Of course you can't do anything without abstractions; it helps to understand them better, though.