
> Should FLAG_BIT1 ^ FLAG_BIT3 really be a warning?

Nope, the warning would be for literal numbers in the source. IMO anyone who doesn't define constants for the flag bits deserves the warning.
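A minimal sketch of the distinction (names and values are made up for illustration):

    #include <stdio.h>

    /* Illustrative flag constants; real code would have its own names. */
    #define FLAG_BIT1 (1u << 1)   /* 0x02 */
    #define FLAG_BIT3 (1u << 3)   /* 0x08 */

    int main(void) {
        unsigned flags = 0;
        flags ^= FLAG_BIT1 ^ FLAG_BIT3;  /* named constants: clearly deliberate bit-twiddling */

        unsigned oops = 2 ^ 8;           /* bare literals: reads like "2 to the 8th" (256)
                                            but is XOR and evaluates to 10 */
        printf("%u %u\n", flags, oops);  /* prints "10 10" */
        return 0;
    }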



So 2^8 is a warning, but 2^BITS_IN_BYTE is not? I don't think whether the preprocessor helped build the expression is a good heuristic for whether it's a mistake.


A warning heuristic needs to have a low false positive rate; a low false negative rate is nice but is not necessary. The purpose of a warning is to detect some common errors without inconveniencing too many correct programs. If some other errors go undetected then that is a shame but at least it is no worse than the current situation.


How do you do that?

The preprocessor does macro expansion; the compiler compiles the result.

The compiler does not see FLAG_BIT1 ^ FLAG_BIT3; it only sees the result, e.g. 2^32.

Therefore, to catch only an explicitly written 2^32, the warning would have to come from the preprocessor... Nice mess created right there.
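Roughly, with hypothetical definitions (the concrete values don't matter to the argument):

    #define FLAG_BIT1 2u
    #define FLAG_BIT3 8u

    unsigned mask = FLAG_BIT1 ^ FLAG_BIT3;   /* what you wrote */
    /* after preprocessing, the compiler proper is handed:
       unsigned mask = 2u ^ 8u;              */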


Modern compilers do have insight into the source as it looked before preprocessing. You'll notice that clang and (modern) gcc will show you where the error is on the unexpanded source line, and then show the expansion of the macros. So when the compiler sees "2^32", it can look back to check whether it was the product of macro expansion or a literal written directly in the source, and warn accordingly.
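A sketch of the distinction that this tracking makes possible (BITS_IN_BYTE is the hypothetical macro from upthread; if I remember right, clang's -Wxor-used-as-pow diagnostic does something along these lines):

    #define BITS_IN_BYTE 8   /* hypothetical macro, as in the example upthread */

    int direct   = 2 ^ 8;              /* literals spelled out in the source: likely a
                                          botched exponentiation, so warn */
    int viaMacro = 2 ^ BITS_IN_BYTE;   /* identical tokens after expansion, but the 8's
                                          source location points into a macro, so stay silent */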

Interestingly, msvc can also do this, by virtue of not even having a distinct preprocessor phase at all.



