Should you represent the `i` as an int or should you represent it as a double?
Depends on what you do with it!
Say that the body of the loop is just `foo(i)` and you don't know what `foo` is at compile time. Whether it's better for `i` to be an int or a double depends on what `foo` does:
- If `foo` uses `i` as an array index, then you absolutely want `i` to be an int.
- If `foo` uses `i` for float math (say, multiplies it by 1.5), then you absolutely want `i` to be a double.
- If `foo` does a mix of those two things, then who knows; it depends on which happens more (see the sketch below).
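Here is a minimal sketch of that situation. All of the names (`indexUser`, `mathUser`, `runLoop`, `data`) are hypothetical, just to make the two cases concrete: the loop itself looks identical either way, and only the eventual target of `foo` decides which representation of `i` is cheaper.

```js
// Hypothetical example: the same loop body `foo(i)`, with two possible
// callees that pull the representation of `i` in opposite directions.

const data = new Array(1000).fill(0);

// Uses `i` as an array index: an int representation avoids a
// double-to-int conversion on every element access.
function indexUser(i) {
  return data[i];
}

// Uses `i` for float math: a double representation avoids an
// int-to-double conversion before the multiply.
function mathUser(i) {
  return i * 1.5;
}

// At compile time this loop gives no hint which representation pays off.
function runLoop(foo) {
  for (let i = 0; i < 1000; i++) {
    foo(i);
  }
}

runLoop(indexUser); // wants `i` as an int
runLoop(mathUser);  // wants `i` as a double
```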
So, even in your simple example, it's not obvious what types to use when compiling JavaScript.