Ask HN: Why is least significant bit first a thing?
3 points by raasdnil on Feb 13, 2023 | 6 comments
I've had a long desire to get into basic electronics, and recently got the whole Ben Eater breadboard kit to get going (awesome by the way). But I can't help but wonder, why is there such a thing as least significant bit first? It seems that it just makes it that little bit harder for humans to decode, with no real upside, as surely the computer doesn't care? I took a bit of a look around and couldn't find anything definitive about the origin story, so I thought some of the boffins on HN might actually _know_.



You’re correct in thinking that there is nothing magical about LSB or MSB. As long as everything agrees, we get the results we’re expecting. Though for serial communication, networking, and similar, don’t assume anything :)

Historically, the order is a little more interesting. The earliest computers conveniently had instructions, registers, and memory all exactly the same width (perhaps 27, 36, or 60 bits), so MSB/LSB ordering was not a thing. But shortly after the “dawn of time” computers started to provide double-, triple-, and quad-width data and instructions. Memory was expensive, and if most data/instructions fitted into X bits (maybe 12, 32, or 36 bits), that’s what you used. But some data/instructions need more space, hence “double” floats, “long” ints, or complex instructions. Now designers (and programmers) needed to consider which part of the data/instruction sits at which address, and order becomes a thing.

Each designer found advantages to a specific order … they tended to be small advantages and depended on their previous design decisions. But the tech was being pushed to the limit, so small advantages counted. For example, if you store instructions with the opcode at the “start”, then decoding can happen while the normal increment-PC mechanism is used to fetch the rest of the instruction – saves having extra logic to calculate the address of the rest of the instruction – saves a few gates here and a register there.
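
A rough sketch in C of that fetch/decode overlap (the two-bit length field and layout here are invented purely for illustration): because the opcode comes first, reading it immediately tells the machine how many more words follow, and the ordinary increment-PC path does all the fetching.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical variable-length instruction: the top two bits of the
       opcode say how many operand bytes follow (0-3).                    */
    typedef struct { uint8_t opcode; uint8_t operand[3]; int noperands; } insn;

    static insn fetch(const uint8_t *mem, uint16_t *pc) {
        insn in;
        in.opcode    = mem[(*pc)++];        /* first fetch: the opcode      */
        in.noperands = in.opcode >> 6;      /* decode starts immediately    */
        for (int i = 0; i < in.noperands; i++)
            in.operand[i] = mem[(*pc)++];   /* reuse the same PC increment  */
        return in;
    }

    int main(void) {
        uint8_t mem[] = { 0x81, 0x2A, 0x00 };   /* opcode with 2 operand bytes */
        uint16_t pc = 0;
        insn in = fetch(mem, &pc);
        printf("opcode=0x%02x operands=%d next pc=%d\n",
               in.opcode, in.noperands, pc);
        return 0;
    }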

Computer history is marvellous – I’d recommend Tanenbaum’s “Structured Computer Organization” and Siewiorek’s “Computer Structures”.


I can't tell if you're talking about the order of bits in a wire protocol, or the endianness of multibyte values in memory.

For memory, endianness is essentially an architectural choice determined by how the CPU reads multi-byte values, so it's arbitrary. Little-endian has the advantage that a value and its narrowed conversions share the same address. But which one is better is a holy war, IIRC.
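
To illustrate the "narrowed conversions" point, here is a minimal C sketch (my own, assuming a little-endian machine such as x86): the least significant byte of a wider integer sits at the lowest address, so a narrower view of the value starts at the same address and the pointer never has to be adjusted.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        uint32_t wide = 0x11223344;
        uint8_t  low;
        /* Copy the first byte in memory.  On a little-endian machine this is
           the least significant byte, i.e. the same value you would get from
           the narrowing conversion (uint8_t)wide.                            */
        memcpy(&low, &wide, 1);
        printf("0x%02x\n", low);   /* 0x44 on little-endian, 0x11 on big-endian */
        return 0;
    }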

For wire protocols, it seems like the natural ordering to me.


Thanks for the reply. Why do you say it seems like a natural ordering for a wire protocol?


Because then you can append to a message just by sending more bits, instead of having to queue the whole thing up and prefix information about it.
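
A small sketch of that idea in C (the receive routine is hypothetical, just for illustration): with LSB-first ordering, each bit that arrives lands one position higher, so the value can keep growing for as long as the sender keeps transmitting; nothing already received has to be shifted, and no length prefix is needed.

    #include <stdio.h>
    #include <stdint.h>

    /* Accumulate bits as they arrive on the wire, least significant first. */
    static uint64_t receive_lsb_first(const int *bits, int nbits) {
        uint64_t value = 0;
        for (int i = 0; i < nbits; i++)
            value |= (uint64_t)(bits[i] & 1) << i;   /* bit i -> weight 2^i */
        return value;
    }

    int main(void) {
        int bits[] = {1, 0, 1};   /* 5, sent LSB first: 1, then 0, then 1 */
        printf("%llu\n", (unsigned long long)receive_lsb_first(bits, 3));
        /* Sending a fourth bit would simply add weight 8 on top; nothing
           already accumulated needs to move.                             */
        return 0;
    }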


It basically depends on how the chip designers laid out the device! Here is an example of a digital design of an ALU http://www.csc.villanova.edu/~mdamian/Past/csc2400fa13/assig...


A serial adder requires that it be given the bits least significant first.
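
To make that concrete, here is a minimal C sketch (my own illustration, not from the thread) of a bit-serial adder: it consumes one bit of each operand per step, least significant bit first, keeping only a single carry bit between steps. If the bits arrived MSB first, the carry from the low bits wouldn't exist yet and the whole word would have to be buffered.

    #include <stdio.h>
    #include <stdint.h>

    /* Bit-serial addition: process bit 0, then bit 1, ... exactly as a
       1-bit full adder with a carry flip-flop would see them.          */
    static uint32_t serial_add(uint32_t a, uint32_t b) {
        uint32_t sum = 0;
        unsigned carry = 0;
        for (int i = 0; i < 32; i++) {
            unsigned ai = (a >> i) & 1;
            unsigned bi = (b >> i) & 1;
            unsigned s  = ai ^ bi ^ carry;                         /* sum bit   */
            carry       = (ai & bi) | (ai & carry) | (bi & carry); /* carry out */
            sum |= (uint32_t)s << i;
        }
        return sum;
    }

    int main(void) {
        printf("%u\n", serial_add(100, 23));  /* prints 123 */
        return 0;
    }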



