I worked on a project that compiled a declarative DSL to both vectorized CPU code and GPU code.
For the vectorized CPU code, I found ISPC generally pleasant to use, but don't be fooled by its similarities to C. It's not a C dialect, and I got burned by assuming that implicit type conversion rules that are identical across C, C++ and Java would also hold for ISPC. The code in question was a pretty simple conversion of a pair of uniformly distributed uint64_ts to a pair of normally distributed doubles (Box-Muller transform). As I remember, operations between a double and an int64_t result in the double being truncated rather than the int64_t being cast. I wrote some C++ code and ported it to Java more or less without modification, but was scratching my head as to why the ISPC version was buggy. I remember the feeling of the hair standing up on the back of my neck as it dawned on me that the implicit cast rules might be different from those of C/C++/Java.
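To make this concrete, here is a rough C++ sketch of the kind of code involved. This is my own reconstruction, not the original project code; the shift-by-11 and 2^53 constant are just the standard trick for making a uniform double from 64 random bits.

    #include <cstdint>
    #include <cmath>
    #include <utility>

    // Turn a pair of uniformly distributed uint64_t values into a pair of
    // normally distributed doubles via the Box-Muller transform.
    std::pair<double, double> box_muller(uint64_t a, uint64_t b) {
        const double pi = 3.14159265358979323846;
        // Keep the top 53 bits and scale by 2^-53 to get a uniform double in [0, 1).
        // In C/C++ the uint64_t operand is implicitly converted to double here;
        // a language that instead truncated the double would silently misbehave
        // in exactly this kind of expression.
        double u1 = (a >> 11) * (1.0 / 9007199254740992.0);
        double u2 = (b >> 11) * (1.0 / 9007199254740992.0);
        double r = std::sqrt(-2.0 * std::log(1.0 - u1));   // 1 - u1 > 0, so log is finite
        double theta = 2.0 * pi * u2;
        return {r * std::cos(theta), r * std::sin(theta)};
    }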
Seemingly arbitrarily deviating from C/C++ behavior in a language that's so syntactically close is a big footgun. Honestly, I think that C made a mistake. An operation between an integer and a floating point number should result in the smallest floating point type whose mantissa and exponent range are both at least as large as those of the floating point operand, and which can losslessly represent every value in the range of the integer operand's type. If no such type is supported, then C should have forced the programmer to choose what sort of loss is appropriate via explicit casts. Disallowing implicit type conversion is also reasonable. However, if your language looks and feels so close to C, you really need good reasons to change these sorts of details about implicit behavior.
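For a concrete illustration of what C actually does (my example, not from the original code): a 64-bit integer mixed with a double is converted to double, which has only a 53-bit mantissa, so large values are silently rounded.

    #include <cstdint>
    #include <cstdio>

    int main() {
        int64_t big = 9007199254740993LL;  // 2^53 + 1, not exactly representable as a double
        double result = big + 0.0;         // usual arithmetic conversions: big -> double
        std::printf("%lld\n", (long long)big);  // prints 9007199254740993
        std::printf("%.0f\n", result);          // prints 9007199254740992 -- low bit lost
        return 0;
    }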
Similarly, I think C should have made operations between signed and unsigned integers of the same size result in the next largest sized signed integer (uint32_t + int32_t = int64_t), or just not allowed operations between signed and unsigned types without explicit casts.
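A small sketch of the pitfall that rule would avoid (again my illustration): under C/C++'s actual rules the signed operand is converted to unsigned, so a negative value wraps around instead of widening.

    #include <cstdint>
    #include <cstdio>

    int main() {
        uint32_t u = 1;
        int32_t  s = -2;
        // s is converted to uint32_t (4294967294), so u + s wraps to 4294967295
        // rather than widening to the mathematically expected value of -1.
        std::printf("%u\n", u + s);
        return 0;
    }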
I very strongly think that Rust went the right route here:
error[E0277]: cannot add a float to an integer
--> src/main.rs:2:6
|
2 | 1+1.0;
| ^ no implementation for `{integer} + {float}`
Casts are historically such a massive source of bugs and instability that the correct casting rule for every numerical computation is always: "Make the programmer choose explicitly."
I disagree: while C's signed/unsigned type conversion is a source of bugs and should be explicit, that doesn't mean that all implicit conversions are bad.
I think that implicit conversion from int to float is mostly harmless: since the result is a float, and the float-to-int conversion stays explicit, it is unlikely to create bugs.
intX to intY, or uintX to uintY where Y >= X, are two other conversions that are unlikely to create bugs but save a lot of boilerplate.
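As a minimal sketch of why that widening case is safe (my example):

    #include <cstdint>

    // When the destination type can represent every value of the source type,
    // the implicit conversion cannot lose information.
    int64_t  widen_signed(int32_t x)    { return x; }  // int32  -> int64:  always exact
    uint64_t widen_unsigned(uint16_t x) { return x; }  // uint16 -> uint64: always exact

    // The reverse direction (e.g. int64 -> int32) is where values can be
    // silently truncated, which is the case that should stay explicit.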
Interestingly enough, most of the boilerplate and straitjacket rules from the Algol family of languages, which cowboy programmers used to complain about, are being adopted again thanks to the age of rising CVEs in always-connected devices.
I agree that implicit conversions in C/C++ are something that ideally wouldn't exist.
IMO, though, the only way to avoid having those rules is by requiring explicitness on the part of the programmer. All other options just substitute a different set of rules, which will be expected in some cases and unexpected in others.