OK, so in C you can smash the stack, and that's bad. True. But I think you fail to see the larger point I'm trying to make: array bounds are artificial in the first place, and this doesn't just surface in C.
The "heartbleed in Rust" example is a great one, and it arises in real life in many high-level language APIs for file I/O and sockets. You have an allocation, and a read()-style function hands back a count of available bytes that may be lower than the allocation size. So you are creating a "virtual" array bound out of nothing. Fail to respect it (without bounds checks) and you will see bugs.
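A minimal sketch of that pattern in Rust (the function name and buffer size are made up for illustration): the allocation is 4096 bytes, but `read()` returns the count `n` that defines the real, "virtual" bound.

```rust
use std::io::Read;

// Sketch: read() fills only part of the buffer and returns the count.
// Every later access must respect that count, not the allocation size.
fn read_message(stream: &mut impl Read) -> std::io::Result<Vec<u8>> {
    let mut buf = [0u8; 4096];          // the allocation
    let n = stream.read(&mut buf)?;     // the "virtual" bound: n <= 4096
    // Copy only the valid prefix. Returning all of `buf` here would be
    // the Heartbleed shape: whatever stale bytes sit past `n` leak out.
    Ok(buf[..n].to_vec())
}
```

Nothing in the type system forces `buf[..n]` over `buf[..]`; respecting `n` is still the programmer's job.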
If you reject that this is a valid way to write code, then maybe in your API every read()-style function always returns a buffer of exactly the right size, enforced by your JVM or whatever, but you will do too many allocations and over-tax the GC.
If you accept that this makes sense, then you must embrace a more C-style way of thinking, where array bounds are created and destroyed at will and must be enforced through your own actions... And suddenly you see the other side of this coin, which reflects valid and true things about the universe: you may want to chop up a buffer into multiple pieces, and that's OK.
(Now, I wouldn't be surprised if Rust has mechanisms to chop up arrays in the way I describe and enforce the bounds you provide it... which would be handy. But frankly that does not completely destroy the validity of the C approach or substitute for a proper understanding of it. Without that understanding, you will code more heartbleeds.)
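For what it's worth, Rust does have exactly this: slices. A sketch (the packet framing here is hypothetical): each piece carries its own length, and indexing past it panics instead of reading neighbouring memory.

```rust
// Sketch: chopping one buffer into bounded pieces. `split_at` panics
// if `header_len` exceeds the buffer, and each returned slice enforces
// its own bound on every subsequent index.
fn split_packet(buf: &[u8], header_len: usize) -> (&[u8], &[u8]) {
    buf.split_at(header_len)
}
```

So the "create a bound at will" operation exists, but as the reply below notes, you can still hand it the wrong `header_len`.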
His point is that you're not forced to do that. And anyhow, it doesn't solve the issue, since you can still bungle the creation of the slice with the wrong offset or length.
Not bungle in the sense of overflowing the underlying buffer, but of overflowing the logical buffer contained within it, i.e. getting the wrong slice.
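Concretely, a sketch of that logical-bound bug (the function and lengths are invented for illustration): the slice stays in-bounds of the physical buffer, so Rust's checks pass, yet it overruns the logical message and exposes stale bytes.

```rust
// Sketch: trusting an attacker-supplied length instead of the real
// message length. No panic as long as claimed_len <= buf.len(), so
// bounds checks catch nothing, but stale bytes past the message leak.
fn respond(buf: &[u8], claimed_len: usize) -> &[u8] {
    &buf[..claimed_len]
}
```

Here a 2-byte message written over an 8-byte buffer full of old data, echoed back with a claimed length of 5, leaks three stale bytes without any panic.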