Agreed. The example the author uses in his previous post on the topic talks about cache-unaware code, but it's perfectly possible to write cache-unaware code in machine language as well.
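To make that concrete (a sketch of my own, not the author's example): the loop nest below is plain C, but the exact same access pattern could be hand-written in assembly and it would thrash the cache just as badly, because the problem is the stride, not the language. It assumes a row-major 2D array, as C lays it out.

    #include <stdio.h>

    #define N 4096
    static double m[N][N];          /* row-major, as C lays it out */

    int main(void) {
        double sum = 0.0;

        /* Cache-friendly: walk memory in the order it is stored. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];

        /* Cache-unaware: same arithmetic, but consecutive accesses are
           N * sizeof(double) bytes apart, so nearly every load misses.
           Hand-written machine code with this stride behaves the same way. */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += m[i][j];

        printf("%f\n", sum);
        return 0;
    }

On typical hardware the second loop runs several times slower, even though both compile to near-identical instruction sequences.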
I'd say "C is not how the computer works ... but it's much, much closer than nearly every other language."
I feel your statement actually gets much closer to the danger. C is indeed pretty close to the hardware, and that misleads people into thinking that C is exactly how the hardware works. It's a very easy trap to fall into, and I've seen many colleagues do just that.
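A concrete version of that trap (my own example, and compiler-dependent): on essentially every current CPU a signed ADD simply wraps around, but C's abstract machine makes signed overflow undefined, so an optimizing compiler is allowed to assume it never happens.

    #include <limits.h>
    #include <stdio.h>

    /* Someone thinking "C is the hardware" expects this to detect
       wraparound, because the ADD instruction wraps. But signed overflow
       is undefined behaviour in C, so at -O2 GCC and Clang typically
       reduce the whole function to "return 0". */
    static int overflows_if_incremented(int x) {
        return x + 1 < x;
    }

    int main(void) {
        printf("%d\n", overflows_if_incremented(INT_MAX));
        return 0;
    }

Whether this prints 1 or 0 depends on the compiler and optimization level, which is exactly the gap between the language and the machine.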
I don't see why it would. Given what modern CPUs do to make memory fast and safe, pointer arithmetic is a ridiculous simplification that has little bearing on reality.
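Roughly what I mean (a hypothetical sketch, assuming 4 KiB pages): pointer arithmetic in C is nothing more than index math on virtual addresses; the translation, TLB, caches and DRAM layout behind those addresses are completely invisible at this level.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *a = malloc(4096 * sizeof *a);
        if (!a)
            return 1;

        /* In the language model this is plain index arithmetic:
           the two elements are exactly 2000 * sizeof(int) bytes apart. */
        printf("virtual distance: %td bytes\n",
               (char *)&a[2000] - (char *)&a[0]);

        /* What the machine does with those addresses -- page-table walk,
           TLB, cache hierarchy, DRAM rows -- never shows up here. Once a
           page boundary is crossed, the two elements may live anywhere
           relative to each other in physical memory. */
        free(a);
        return 0;
    }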
That's not an argument in this discussion, IMHO - what does it change? The discussion is about the language (and, in the case of Java, the runtime) and its machine abstraction, which is nearly the same for Java and C, even if the JVM were written in Lisp.
My point was that many people who are new to C fall into the trap of thinking that it [almost] exactly mimics the hardware. That's an easy mistake to make when you're new to it, because C is taught as if it were a panacea, and sadly many people believe those university courses.
As for former colleagues, oh well, we all learn and grow. I chose to bail out because something as non-deterministic as the stability of C code when cross-compiled for more than two systems turned me off. Different strokes for different folks.
Right. But machine code is the interface that the hardware provides to the rest of the world, meaning you can ignore that the hardware is doing something different. And in fact, you don't really have access to what the machine is doing. Or maybe you can get at all those virtual renamed registers? I can't; I can only access the architectural registers (see the sketch below).
Now if you care about performance, you might not want to ignore the real machine, but it really is the hardware's job to work like the machine code tells it to. And when it doesn't, that's considered a bug^H^H^H exploit.
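A small GCC/Clang-on-x86-64 sketch of what I mean by "architectural registers" (the inline asm is my own illustration): rsp is one of the handful of names the ISA exposes; whichever of the many physical rename registers it currently maps to simply has no interface.

    #include <stdio.h>

    int main(void) {
        unsigned long sp;

        /* Read the architectural stack pointer. The CPU is free to back
           "rsp" with any physical register via renaming; that mapping is
           invisible -- all we can ever name is the architectural register. */
        __asm__ volatile ("mov %%rsp, %0" : "=r"(sp));

        printf("architectural rsp: %#lx\n", sp);
        return 0;
    }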
We tend to think of everything as an ideal model in a vacuum, forgetting that in real life the implementation details are often gory, with caches and microcode and fault tolerance and all sorts of hidden details put in there to make things easier to work with or faster.
Hardware works as physics allows, at energies we can’t comprehend.
A computer is designed to a spec we can (sort of) comprehend. Spectre and such being evidence we don’t fully comprehend it.
It’s a recursive back and forth: this is a structure, and this is the favored algorithm for that structure.
The hardware is the structure. Physics provides the algorithms for operating on that structure.
Thinking like one can see inside the machine always struck me as absolutely ridiculous.
It exists in the world defined by physics. Crawling into our imagination, linking abstract pictures from textures together arbitrarily, doesn’t mean we’ve discovered something new.
We know all kinds of stuff can be modeled on a computer because reality already gave us the math model.
We’re just brute-forcing implementations, looking for simple models for things we already have.
Never has been. Deep down, 1s and 0s are still analog voltages, subject to capacitance and inductance, on a tiny wire between transistors. Without proper timing you end up reading the voltage halfway through its climb from 0 to 1, and who knows what value you end up with.
If you violate the timing for a flip-flop circuit, you can put it into a metastable state where the "digital" output varies between 0 and 1 for a very long time.