This isn't C. The expectations have changed thanks to scripting languages. If I'm supposed to use this promising language to do calculations and manipulate data, I'd expect to be able to natively handle large numbers without overflowing.
Because it would make writing fast code inconvenient. It's a tradeoff, and clearly the right choice imo. How often do you need BigInts?
I try to use Julia instead of Python now if I don't need specific libs; it definitely feels like a nicer language to me (i.e. I can get stuff done faster).
Julia supports arbitrary precision arithmetic just fine, but you need to explicitly use it. Overflow checks don't matter in Python or Ruby where everything is relatively slow.
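For example, a quick REPL sketch of what "explicit" means here:

  julia> typemax(Int64) + 1 == typemin(Int64)   # default Int arithmetic wraps silently
  true

  julia> big(typemax(Int64)) + 1                # explicit BigInt: exact result
  9223372036854775808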
I don't think many here argue that Julia doesn't support it. In fact it would be rather disastrous if it didn't, because it wouldn't make for a very nice scientific language if the biggest integer value were 2^63-1.
The question is why arbitrary-precision integers aren't the default, or why it doesn't do automatic up- and down-conversion, and so on.
> because it wouldn't make for a very nice scientific language
Scientific languages generally don't use arbitrary-precision arithmetic. Examples: MATLAB, scipy/numpy/pandas. The reason is that they're optimized for the key use case of large linear algebra calculations. Making arbitrary precision the default would require an overflow or type/tag test in the inner loop, dramatically reducing performance. The goal of these languages is to operate at near peak CPU throughput.
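To make that inner-loop cost concrete, here's a rough Julia sketch (sizes and names are mine; exact timings depend on your machine), summing the same data as machine Ints and as BigInts:

  xs_int = rand(1:100, 10^6)   # Vector{Int64}: machine integers
  xs_big = big.(xs_int)        # Vector{BigInt}: arbitrary precision

  function sum_loop(v)
      s = zero(eltype(v))
      for x in v
          s += x               # Int64: a native add; BigInt: a GMP call plus a heap allocation
      end
      return s
  end

  @time sum_loop(xs_int)       # tight, allocation-free loop
  @time sum_loop(xs_big)       # much slower: every += allocates a new BigInt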
When a wide dynamic range is needed, floats are used despite their flaws, because, again, the performance matters so much. FEM, CFD, and similar engineering calculations can effectively use as much computation as you have hardware and patience for. Machine learning will too, if your dataset is larger than trivial.
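(The flaw being, for instance, that Float64 covers an enormous range but silently drops integer precision past 2^53:)

  julia> 2.0^53 + 1 == 2.0^53   # 2^53 + 1 isn't representable as a Float64
  true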
The Julia folks know what they're doing and it's what the community they're targeting expects.
My point wasn't that it can't be done. My point was about usability and what programmers have come to expect from a modern language. I understand that things get trickier when trying to implement arbitrary-precision arithmetic while keeping performance high.
My guess is that it would lead to at least a 10x slowdown in very maths-heavy code. So, considering that the main goal of Julia is performance, it doesn't make any sense. And you can easily convert to arbitrary precision with big(1234).
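A minimal sketch of that conversion (the numbers are purely illustrative): one big() call is enough to promote a whole expression.

  n = big(1234)            # BigInt
  println(n^100)           # exact; plain Int64 exponentiation would wrap silently
  println(typeof(n * 2))   # BigInt: mixing in a BigInt operand promotes the result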
> Do you have an example of a language with arbitrary-precision overflow by default and comparable performance?
No, but then that also means Julia isn't all that special either, right?
Maybe it's just me, but Julia positions itself as a better Python, not just a better C. Given that positioning, can you really blame people if they misunderstand and expect the same "high level" behavior from it?
>No, but then that also means Julia isn't all that special either, right?
That's a really dumb conclusion. It's special in the other things it offers (from a better Matlab-like language with crazy-ass speed, to homoiconicity and great FFI). Who said it's only special if it fulfils some specific rainbow-unicorn pipe dream?
It also doesn't read the programmer's thoughts -- so it's not special in that regard either.
>Maybe it's just me, but Julia positions itself as a better Python, not just a better C. Given that positioning, can you really blame people if they misunderstand and expect the same "high level" behavior from it?
Lots of people also find Go a "better Python", and Go is like assembly compared to Julia...
Better Python in the sense of Python for scientific computing: e.g. Pandas et al. They aren't targeting all the use cases of Python (for example, I doubt anyone would suggest writing a monit-style tool in Julia).
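For comparison, Common Lisp handles the same tradeoff with optional declarations: generic, bignum-capable arithmetic by default, and type declarations plus (safety 0) when you want machine-word speed: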
  (defun double (x)
    (+ x x))                ; generic arithmetic: bignums (or floats, for that matter) just work

  (defun double (x)
    (declare (type fixnum x))
    (+ x x))                ; will signal a type error if X is bigger than a fixnum (roughly a long)

  (defun double (x)
    (declare (type fixnum x)
             (optimize speed (safety 0)))
    (the fixnum (+ x x)))   ; trusts that everything is a fixnum - basically like C