"Many years later we asked our customers whether they wished
us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions
would have long been against the law."
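The lesson Hoare is talking about is runtime subscript (array bounds) checking. For anyone who hasn't run into it, here is a minimal sketch of the failure mode in standard C (the array name and index are purely illustrative):

    #include <stdio.h>

    int main(void) {
        int a[4] = {0, 0, 0, 0};

        /* Valid indices are 0..3, so the subscript 4 is out of bounds.
         * C inserts no runtime check here: the write is undefined
         * behaviour and will typically scribble over adjacent memory
         * and carry on silently, which is exactly the failure mode on
         * production runs that Hoare's customers feared. */
        a[4] = 42;

        printf("still running, a[0] = %d\n", a[0]);
        return 0;
    }

Compile that with any conforming compiler and it will normally run to completion; nothing in the language requires the out-of-bounds write to be detected.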
The difficulty is that in the days when C was being designed, computers were much more irregular than they are today. There were one's complement machines, machines without power-of-two word sizes, no standardization of character sets, and IEEE floating point hadn't been invented yet. The irregular machines weren't fringe stuff; they were the dominant architectures (IBM 360/370, DEC PDP-10, Pr1me, just a big collection of weird stuff). And compiler technology was much less advanced. So C was a messy compromise.
High-level systems programming languages are about 10 years older than C, which in its early days only cared about targeting the PDP-11 model used for the first UNIX rewrite.
The authors just chose to ignore what was already out there and do their own thing instead.
Well, I guess it is a lesson for both: those who think a technology is better because it is so successful, and those who think a technology will be successful because it is so much better than everything else.
IIRC, C was also designed by committee, and a lot of industry players got their grubby hands on the spec... (I could be wrong) but I believe the utter mess that is short/long/char sizes arises from hardware manufacturers wanting their code to be "trivially portable" across platforms with different machine words.
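Whatever the exact history, the part of the spec being pointed at is real: the standard only pins down minimum ranges for char/short/int/long (int at least 16 bits, long at least 32), so the sizes genuinely differ by platform. A small sketch, assuming nothing beyond standard C (C99 for <stdint.h>), that makes this visible:

    #include <stdio.h>
    #include <limits.h>
    #include <stdint.h>

    int main(void) {
        /* The standard guarantees only minimum ranges (int at least
         * 16 bits, long at least 32), so these sizes legitimately
         * differ across platforms with different machine words. */
        printf("char:  %d bits (CHAR_BIT)\n", CHAR_BIT);
        printf("short: %zu bytes\n", sizeof(short));
        printf("int:   %zu bytes\n", sizeof(int));
        printf("long:  %zu bytes\n", sizeof(long));

        /* C99's <stdint.h> added exact-width types for code that
         * needs a fixed layout regardless of the native word size. */
        printf("int32_t: %zu bytes\n", sizeof(int32_t));
        return 0;
    }

On a typical LP64 Unix system this reports long as 8 bytes, while 64-bit Windows (LLP64) reports 4, which is exactly why the fixed-width types were added.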
(note: "we" excludes me personally but I mean it as "us who work and study comptuing at large")