
I'm not sure I understand the problem. I meant that in PureScript there would be distinct number types, and the PureScript compiler would forbid me from assigning a double value to an int field. The generated JS would be all regular JS "numbers" (i.e. floating point throughout).
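
Roughly what I have in mind, as a sketch (the Account record is just made up for illustration):

    module Main where

    type Account = { balance :: Int, rate :: Number }

    ok :: Account
    ok = { balance: 100, rate: 0.05 }     -- fine: 100 is an Int literal, 0.05 a Number literal

    -- bad :: Account
    -- bad = { balance: 0.5, rate: 0.05 } -- rejected: a Number can't go in the Int field

    -- In the compiled JS both fields are ordinary numbers;
    -- the Int/Number distinction only exists at compile time.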

I mean, there are several compilers to JS from languages with strong type systems, such as F# -> JS with FunScript, and in those the source language usually has integers...




What happens when you take two "integer" fields and add or multiply them? The compiler would need to check that the result still fits in an integer.


No compiler in existence makes these kinds of checks on anything except maybe literal values in special cases. What you propose would entail evaluating the program's runtime behavior at compile time. Ensuring that anything put in an integer field is an integer is easy, assuming you have type annotations on everything.

For the cases you mention, the checks would simply be: integer * integer = integer, integer + integer = integer. Problems only arise when real/floating-point numbers are introduced, and in deciding how to handle integer division.
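
Sticking with the PureScript example from upthread, the rules amount to something like this (just a sketch, function names made up):

    module Sums where

    import Prelude

    addCounts :: Int -> Int -> Int
    addCounts a b = a + b       -- Int + Int is simply typed as Int; no values are inspected

    scale :: Int -> Int -> Int
    scale a b = a * b           -- likewise Int * Int = Int

    half :: Int -> Int
    half n = n `div` 2          -- integer division with div keeps you in Int

    -- mixed :: Int -> Number -> Int
    -- mixed n x = n + x        -- rejected: + needs both operands to be the same type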


>No compiler in existence makes these kinds of checks

If your claim is that no compiler in existence can make these kinds of checks at compile time, that's not true. Any dependently-typed language can do this (Idris, Coq, Agda, ATS). In fact, even a language with refinement types, like Liquid Haskell, can do this.

And a number of languages do these checks at runtime: Ada and Nimrod are two that come to mind.


Assuming you mean two 32-bit integers, the result is a new 32-bit integer that may have overflowed. The compiler doesn't care in any language I know of.
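
Concretely, in the PureScript case that started the thread: as far as I know, Int arithmetic in the generated JS is truncated to 32 bits ("(x + y) | 0"), so it just wraps silently and the compiler never objects. A sketch:

    module Wrap where

    import Prelude

    biggest :: Int
    biggest = top               -- 2147483647, the largest Int

    wrapped :: Int
    wrapped = biggest + 1       -- type checks fine; at runtime the generated JS
                                -- computes (2147483647 + 1) | 0, i.e. -2147483648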



