No, that should not happen. Integer types that adapt to the platform word size enhance portability. Nobody wants a 32-bit default int on an 8-bit platform, and using uint8_t or uint16_t can introduce performance regressions on wider platforms. The traditional integer types are perfectly suited for scenarios where the exact width doesn't matter and you know the guaranteed minimum is good enough.
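A quick sketch of what "guaranteed minimum is good enough" means in practice; the ranges come from the C standard's minimum requirements, and the clamp_percent helper is just made up for illustration:

```c
#include <limits.h>
#include <stdio.h>

/* The traditional types only promise minimum ranges: int covers at least
 * -32767..32767, long at least 32 bits, long long at least 64 bits.
 * For small values that minimum is plenty, so plain int can adapt to
 * whatever width the platform finds natural. */
_Static_assert(INT_MAX >= 32767, "guaranteed by the C standard");

int clamp_percent(int p) {          /* 0..100 always fits the minimum int range */
    return p < 0 ? 0 : (p > 100 ? 100 : p);
}

int main(void) {
    printf("%d\n", clamp_percent(120));   /* prints 100 on any conforming platform */
    return 0;
}
```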
To address the problem you mention, uint_fast8_t and co. exist. It is guaranteed to be at least 8 bits wide, but is otherwise whatever type is fastest that meets that requirement. So on an 8-bit system it is a uint8_t; on a 32-bit ARM it is a uint32_t. There is also uint_fast16_t and so on...
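A minimal sketch of how to check what an implementation actually picked; the printed widths vary by platform, and uint_fast8_t being 32 bits on ARM is one possible outcome rather than something the standard promises:

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* uint_fast8_t is "the fastest unsigned type that is at least 8 bits";
     * the implementation decides how wide that actually is. */
    printf("uint8_t       : %zu bits\n", sizeof(uint8_t) * CHAR_BIT);
    printf("uint_fast8_t  : %zu bits\n", sizeof(uint_fast8_t) * CHAR_BIT);
    printf("uint_fast16_t : %zu bits\n", sizeof(uint_fast16_t) * CHAR_BIT);
    return 0;
}
```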
As the programmer, I don't really care what the h/w does; if I specify u8 and every operation produces results 'as if' the type were 8 actual unsigned bits, e.g. when an underlying u32 type is used -- great. From 'my model' it's still a u8.
But there must be no conceptual 'leaking' (behaviors I experience when using e.g. uint_fast8_t that are in any way different from the behaviors I experience when using u8).
I don't especially care how the HW works, because my 'virtual machine' is C. (Yes, yes, I know Reality intrudes, and sometimes you need to get closer to the machine. But isn't that because of the 'leaking' between 'virtual machine' and 'actual machine' that I mention above?)
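To make that 'leaking' concrete, here's a minimal sketch of the one place the difference can show up: the stored result. With uint8_t the value wraps modulo 256; with uint_fast8_t the result depends on whichever width the implementation happened to choose:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t      a = 250;
    uint_fast8_t b = 250;

    a += 10;  /* stored result wraps modulo 256 -> 4 */
    b += 10;  /* 4 if uint_fast8_t is 8 bits wide, 260 if it is wider */

    printf("uint8_t      : %" PRIu8     "\n", a);
    printf("uint_fast8_t : %" PRIuFAST8 "\n", b);
    return 0;
}
```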
I would argue that most code nowadays implicitly assumes that int is 32 bits long, and won't work correctly on an 8-bit platform anyway. If 'platform-size-conforming' ints are used, they should probably be opt-in instead of opt-out.
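One hedged illustration of that kind of implicit assumption (the names and values here are made up for the example): this is fine where int is 32 bits, but breaks where int is only 16 bits, as on many 8-bit toolchains:

```c
#include <stdio.h>

int main(void) {
    /* Fits easily in a 32-bit int, but doesn't fit a 16-bit int
     * (INT_MAX is only guaranteed to be >= 32767). */
    int timeout_ms = 60000;

    /* Same issue with intermediate arithmetic: 500 * 500 = 250000
     * overflows a 16-bit int, which is undefined behaviour. */
    int area = 500 * 500;

    printf("%d %d\n", timeout_ms, area);
    return 0;
}
```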
That's great for people weaned on Java's false promise of a uniform type system. Then you find out you want the wrapping behavior of unsigned integer overflow and have to go through contortions to get it. You can't pick a single default that works universally.