That's implementation-defined behavior, not undefined behavior. Undefined behavior explicitly refers to something the compiler does not provide a definition for, not even "safe defaults."
>Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message) ...
So a compiler is absolutely welcome to make undefined behavior safe. In fact, every compiler I know of (GCC, Clang, MSVC) has flags that give well-defined behavior to things the standard leaves undefined, such as signed integer overflow, type punning, and casting function pointers to void pointers.
The Linux kernel is notorious for leveraging undefined behavior in C for which GCC guarantees specific, well-defined behavior.
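As a minimal sketch of what one such flag does (the function name is made up for illustration, and exact optimizer behavior varies by compiler and version): signed integer overflow is UB per the standard, but GCC and Clang's -fwrapv defines it as two's-complement wraparound.

```c
#include <limits.h>
#include <stdio.h>

/* Sketch: signed overflow is UB per the standard, so an optimizer may
 * assume it cannot happen; -fwrapv (GCC/Clang) instead defines it as
 * two's-complement wraparound. */
int wraps_past_max(int x)
{
    /* Without -fwrapv, a compiler is allowed to fold this to "return 0",
     * since x + 1 < x is impossible for a program with no UB.
     * With -fwrapv, x == INT_MAX makes x + 1 wrap to INT_MIN. */
    return x + 1 < x;
}

int main(void)
{
    printf("%d\n", wraps_past_max(INT_MAX));
    return 0;
}
```

Built with something like `gcc -O2 -fwrapv`, this reliably prints 1; without the flag the optimizer is free to assume the overflow never happens and fold the comparison to 0.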
It looks like there is also the notion of unspecified behavior, which gives compilers a choice about the behavior and does not require them to document that choice or even make it consistently.
And finally there is what you bring up, implementation-defined behavior, which is defined as a subset of unspecified behavior where the compiler must document its choice.
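A small sketch of the contrast (helper names are made up for this example): argument evaluation order is unspecified, while the signedness of plain char and the result of right-shifting a negative value are implementation-defined.

```c
#include <stdio.h>

/* Sketch only: helper names are invented for this example. */
static int side_effect(const char *tag)
{
    printf("%s ", tag);  /* observable side effect */
    return 0;
}

static void take_two(int a, int b)
{
    (void)a;
    (void)b;
}

int main(void)
{
    /* Unspecified behavior: the order in which these two arguments are
     * evaluated may differ between compilers or even between builds, and
     * the implementation need not document or stick to one choice. */
    take_two(side_effect("left"), side_effect("right"));
    printf("\n");

    /* Implementation-defined behavior: the implementation must document
     * whether plain char is signed and what right-shifting a negative
     * value yields, and it must behave consistently. */
    printf("plain char is %s\n", (char)-1 < 0 ? "signed" : "unsigned");
    printf("-8 >> 1 == %d\n", -8 >> 1);
    return 0;
}
```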
Something can be UB according to the standard, but defined (and safe) according to a particular implementation. Lots of stuff is UB according to the C or C++ standard but does something sensible in gcc and/or clang.
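For instance (a hedged sketch, assuming a 32-bit float and GCC/Clang): type punning through a pointer cast violates the standard's aliasing rules and is UB, but gcc and clang define it when compiled with -fno-strict-aliasing; memcpy is the spelling that is defined everywhere.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch: reading a float's bit pattern. The pointer cast violates the
 * standard's aliasing rules (UB), but GCC/Clang define it under
 * -fno-strict-aliasing; memcpy is well defined in standard C. */
static uint32_t bits_via_cast(float f)
{
    return *(uint32_t *)&f;   /* UB per the standard */
}

static uint32_t bits_via_memcpy(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u); /* always defined */
    return u;
}

int main(void)
{
    printf("%08" PRIx32 " %08" PRIx32 "\n",
           bits_via_cast(1.0f), bits_via_memcpy(1.0f));
    return 0;
}
```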
This distinction does not exist in K&R 2/e, which documents ANSI C a.k.a. C89, but maybe it was added in a later version of the language (or didn't make it into the book)? According to K&R, all overflow is undefined.
I don't have my copy of K&R handy, but this distinction has existed since the initial codification. From C89:
>3.1.2.5 Types

>[...] A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type.
Yes, as unsigned overflow is well defined by default. AFAIK the issue was originally that there were still machines that used ones' complement to represent negative integers instead of the now-customary two's complement.
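A minimal sketch of that guarantee:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Unsigned arithmetic is reduced modulo UINT_MAX + 1 by definition,
     * so this prints 0 on every conforming C implementation. */
    unsigned int u = UINT_MAX;
    printf("%u\n", u + 1u);
    return 0;
}
```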
In ReleaseFast mode, unsigned overflow/underflow is undefined behavior in Zig, whereas in C it wraps.
:-)
Of course C has many kinds of UB that Zig doesn't have, so C is far less safe than Zig, especially since you can use ReleaseSafe in Zig.