> The storage order, the endianness, as given for my machine, is called little-endian. A system that has high-order representation digits first is called big-endian. Both orders are commonly used by modern processor types. Some processors are even able to switch between the two orders on the fly.
Calling big endian "commonly used by modern processor types" when s390x is really the only one left is a bit of a stretch ;D
(Comments about everyone's favorite niche/dead BE architecture in 3… 2… 1…)
The book does say "Both orders are commonly used by modern processor types".
I'd say this sentence is quite misleading, since it would lead you to believe two falsehoods:
1. That both byte orders are equally prevalent in the wild, particularly in systems that are expected to run modern C code.
2. That both byte orders are equally likely to be found in "modern" (new or updated) processor design.
It's not entirely incorrect, but better phrasing could clarify that little-endian is the more modern and common storage order, while big-endian still cannot be ignored.
The problem with being pedantic is that you can choose different directions to be pedantic in. My "direction" is that code isn't written in a vacuum: it mixes with code millions of other people wrote and runs on machines millions of other people built. As such:
My concern isn't that the phrasing in the book is wrong, and I have expressly not argued that. It's that it presents the issue as having no further depth, and these two choices as equivalent. They aren't. The "Some processors are even able to switch between the two orders on the fly." that follows makes it even worse; at least to me it really sounds like you needn't care at all.
And the people reading this book are probably the people who should be aware of more real-world background on endianness, for the good of the next million of people dealing with what they produced.
MipsBE is very common in edge devices on many networks. You may have 5 MipsBE devices in your home or office without realizing. It's almost never an issue so nobody cares, but they are common.
> Calling big endian "commonly used by modern processor types" when s390x is really the only one left is a bit of a stretch ;D
POWER is bi-endian. In recent versions, Linux on POWER is little-endian (big-endian Linux on POWER used to be popular, until all the distros switched some years back), while AIX and IBM i are big-endian.
AIX and IBM i are probably not quite as alive as IBM mainframes are, but AIX is still arguably more alive than Solaris or HP-UX are, to say nothing of the dozens of other commercial Unix systems that once existed. Likewise, IBM i is just hanging on, yet still much more alive than most competing legacy midrange platforms (e.g. HP MPE, which has been officially desupported by the vendor, although you can still get third-party support for it).
True - but at the same time, about half¹ of it is mipsel, i.e. in little-endian mode :). It's also in decline, AFAICS there is very little new silicon development.
Many routers use the MIPS ISA and they can be rooted to get shell access. That's what I did with an old Netgear router, which was like a very low spec SBC. If you have a PS2 lying around, you could try that.
Indeed, if it meant "currently widespread" there'd be a stronger argument for Big Endian with a lot of MIPS and PPC chugging away silently. But interpreting "modern" as recent development, BE is close to gone.
Uh, why so serious? I called it "a bit of a stretch ;D" - there was a reason for that smiley. I'm well aware BE is alive enough to be around.
If you can't live without knowing, sure, my stake in dismissing big endian architectures is that I can't in fact dismiss BE architectures because I have users on it. And it's incredibly painful to test because while my users have such hardware, actually buying a good test platform or CI system is close to impossible. (It ended up being Freescale T4240-QDS devkits off eBay. Not a good sign when the best system you can get is from a company that doesn't exist anymore.)
And at some point it's a question about network protocols/encodings being designed to a "network byte order" determined in the 80s to be big endian. When almost everything is LE, maybe new protocols should just stick with LE as well.
To be fair to IBM, with s390x they do have a "community cloud" programme where open source projects can apply to get a Linux s390x VM to use for things like CI: https://community.ibm.com/zsystems/l1cc/ . But yeah, BE MIPS is super awkward because the target systems are all embedded things that are bad dev/CI machines.
Stupidly enough, "my" software is a routing control plane (FRRouting), so what I need to support are exactly those embedded things. I'm not sure anyone uses FRRouting on a s390x machine. But maybe we should go ask IBM anyway, a BE system is a BE system…
qemu CPU emulation exists too, but that's painfully slow for an actual CI run, and I'm not sure I trust it enough with e.g. AF_NETLINK translation to use the "-user" variant on top of an LE host rather than booting a full Linux (or even BSD).
And in the very best case, proper testing would pit BE and LE systems "against" each other; if I run tests on BE against itself there's a good risk of mis-encodings on send being mis-decoded back on receive and thus not showing as breakage…
… really, it's just a pain to deal with. Even the beauty (in my eyes) of these T4240 ppc64 systems doesn't bridge that :(
Fair—my bad, I can fail at reading tone sometimes.
Would you propose having the C abstract machine abstract away endianness entirely as an alternative? My understanding is that deprecating support for existing architectures is discouraged to every practical extent.
Maybe we failed to communicate because our brains have different endianness? :D
To be honest, I don't think this is a solvable problem. (Changing the C machine concept doesn't do much if you need to process network traffic that uses both, e.g. an IP packet [big-endian] carrying protobuf [little-endian]. It's already mostly a question of data ingress/egress.)
What is solvable though is making sure people are sufficiently aware. And the people who read a book like "Modern C" are probably a very good target audience, building low-level bindings and abstractions. They should know that LE and BE are technically a free-floating design choice, but practically the vast majority of systems are LE now. But at the same time, yes, BE isn't extinct, and won't be any time soon… and it's left to them to make their best possible design given their environments.