Haskell doesn’t have side effects, but it does have effects. This is a subtle distinction that is a bit hard to explain but I’ll give it my best shot:
One way to think of Haskell is like a pure functional C preprocessor that also happens to be lazy. When Haskell code is running, pure functional expressions are being evaluated and reduced to their simplest form. This simplest form is not a plain value but actually a program that when run has effects (reading input, printing to the screen) just like a C program.
The last piece of the puzzle is that, unlike the C preprocessor, Haskell doesn’t do all of the evaluation ahead of time. The evaluation of these pure functional expressions is interwoven with the execution of the program producing the desired effects. Haskell’s type system is used to express the dependencies between inputs and outputs so that the effects are performed in the correct sequence.
There was a somewhat tongue-in-cheek article a while back saying "C has no side effects", with the premise being that "when people say C, they really mean CPP, because nobody writes C with the preprocessor disabled", and since CPP is side-effect free, colloquially we can say that C is side-effect free...
The context is that she is using Haskell type declarations as if they were a regular programming language. Arguably the type declarations are interpreted, and dynamically typed in the sense that there is no static type checking before the interpretation.
Hum... You meant to say that the C compiler is pure. And well, yes, it is. Even when you include the compiler itself, instead of only the preprocessor.
You can see from the GP's description that Haskell will generate those programs at runtime. The Haskell code is running at the same level as the C compiler, but at a very different time.
This is a very convoluted way of saying that Haskell can have side effects, but some parts of the code are pure. The same is true of any other language: any code is side-effect free until it is executed.
The actually interesting part of Haskell is that the type system is strong enough to indicate which sections of the code are pure and which are impure.
That's allowed in the IO monad, and, really, it's allowed. The idea in Haskell is to have many contexts where side effects are not allowed so that in those contexts your code must be pure, and pure code is easier to write tests for and reason about.
Haskell is two languages. There's the expression language which is pure. Then there's the runtime system which takes descriptions of side effects generated by the expression language (the IO type) and executes them in the real world.
I remember my surprise on a DEC Alpha in late 2001 when I did something like acos(3) and the floating-point exception triggered a core dump. As I recall, on the other OSes I used, SIGFPE was by default ignored.
Well, this is not true. However, to be fair, the most important use case of errno in my work is debugging in horrible code bases, trying to understand what has happened.
The FFI wrappers can restore the previous value of errno after reading the newer value after calling whatever foreign function, thus undoing that side-effect, and nothing other than a signal handler could observe this side-effect in the middle of the FFI wrapper's execution.
Oh, note to self: signal handlers need to restore errno values if they call async-signal-safe functions that can set errno.
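Something like this sketch of the pattern, assuming POSIX (the SIGINT handler and the write() call are just placeholders for illustration):

#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* A handler that calls an async-signal-safe function which may
   clobber errno, so it saves and restores errno around the call. */
static void on_sigint(int sig)
{
    int saved_errno = errno;              /* the interrupted code's errno */
    (void)sig;
    write(STDERR_FILENO, "caught\n", 7);  /* write() may set errno on failure */
    errno = saved_errno;                  /* undo the side effect before returning */
}

int main(void)
{
    signal(SIGINT, on_sigint);
    pause();                              /* wait for a signal */
    return 0;
}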
Not all C API calls produce (observable) side effects. As a very simple example, the sin() function from math.h is a pure function of type `Double -> Double`. No need to wrap that in IO.
Haskell is not about removing side effects in the first place; it's about making them obvious and contained, and about declaring them. It's about the type system imposing certain constraints on what's allowed and where. None of the above really applies to C.
sin(±infinity) returns a NaN and raises the "invalid" floating-point exception.
However, I can't figure out how to make that happen, and my clumsy attempts to figure out <fenv.h> and feraiseexcept() failed to make sin(INFINITY) do something other than return a NaN.
Raising a floating-point exception generally means that the exception bit in the floating-point status register is set, and you can test this bit with fetestexcept.
Some architectures (but not all) do include some bits in the floating-point control register that will cause a hardware trap to be generated on an exception. There's no standard C function to do this, and even compiler builtin support is kind of sketchy, so your best bet is usually resorting to inline assembly to do this.
I'd be concerned about whether the behavior of such a pattern is actually well defined. Like for instance what if the optimizer is super clever and evaluates your sin(infinity) at compile time? Or what if it undoes your bit-set? Or what if it moves the computation before the bit set?
There are essentially three different flavors of floating-point models: strict, precise, and fast. Which one you get is dependent primarily on compiler flags, although most compilers will default to "precise" (a few default to "fast", but they really shouldn't).
The strict model requires that everything be evaluated at runtime and that the optimizer be aware of the potential for floating-point operations to have interactions with floating-point control and status words (i.e., rounding modes and fp exceptions); essentially all optimizations on floating-point code are forbidden.
The precise model allows compilers to ignore the dependency on rounding mode and fp exceptions, and treat floating-point operations as side-effect-free pure expressions, but still requires them to generally follow IEEE 754 arithmetic. As such, standard optimizations like code motion or constant propagation are legal, but arithmetic optimizations (like replacing x + 0.0 with x) are not generally legal.
The fast model allows compilers to also apply the arithmetic optimizations.
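To make the x + 0.0 example concrete, here's the signed-zero half of why that rewrite isn't value-preserving (the sNaN case is a separate issue):

#include <math.h>
#include <stdio.h>

/* IEEE 754 addition of -0.0 and +0.0 under round-to-nearest gives +0.0,
   so rewriting x + 0.0 as x would change the sign bit when x is -0.0. */
int main(void)
{
    double x = -0.0;
    double y = x + 0.0;
    printf("signbit(x) = %d, signbit(x + 0.0) = %d\n",
           signbit(x) != 0, signbit(y) != 0);   /* prints 1 and 0 */
    return 0;
}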
Most people who actually care about this level of detail are aware of the panoply of compiler options that are used to control floating-point optimization in the compiler.
The other case where it doesn't work is that x + 0.0 is defined to quiet an sNaN (because almost every fp operation turns sNaN into qNaN)... except that even strict floating-point models generally say "fuck you" to sNaN and treat sNaN and qNaN as interchangeable.
(Speaking of fast math, my planned blog post title if I ever write all this up into a blog post is "Floating Point, or How I Learned to Start Worrying and Hate Fast Math.")
> and I'm pretty sure that NaN addition, while resulting in a NaN, does not need to preserve the original bit pattern.
"it be complicated"
Except for certain operations (fneg, fcopysign, and fabs are all defined to only affect the sign bit, to the point that they're an exception to the sNaN-always-signals rule; and there are a few operations defined to operate on the NaN payload information), IEEE 754 generally says no more than a qNaN is generated. There's a recommendation that NaN payloads propagate, and most hardware does do that, but compilers actually typically don't even guarantee that sNaNs are not produced by operations.
Yes, I think I was using outdated/20-year-old experience about how on the DEC Alpha a domain error like this triggered a SIGFPE, and got confused between different ideas of what "exception" means.
You should be able to see the floating point exception by testing fetestexcept(FE_INVALID) (if it returns nonzero/true then you have a domain error). errno is also set as a side effect.
(edit) For instance, this gives me output of "1":
#include <math.h>
#include <fenv.h>
#include <stdio.h>

int main(void)
{
    double a = sin(INFINITY);
    printf("%d\n", fetestexcept(FE_INVALID));
    return 0;
}
What compilers are you using, and what flags are you using with those compilers?
If you care about it, you need to use the appropriate flags to get a strict floating-point model. If you merely have a precise or fast floating-point model, calling fetestexcept is going to result in unspecified behavior.
[If you really want to be strict, the values of the exception flags are unspecified if you don't use #pragma STDC FENV_ACCESS ON, but gcc refuses to support the FENV_ACCESS pragma, and you instead need to use command-line flags to get the same effect.]
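For what it's worth, here is roughly what a pedantically correct version of your program would look like (recent clang accepts the pragma, gcc warns and ignores it; the volatile is just to keep the sin() call from being folded at compile time):

#include <math.h>
#include <fenv.h>
#include <stdio.h>

/* Tell the compiler we access the floating-point environment. */
#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile double x = INFINITY;   /* volatile so sin() isn't folded away */
    double a = sin(x);
    printf("%d %f\n", fetestexcept(FE_INVALID) != 0, a);   /* expect "1 nan" */
    return 0;
}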
> I think I have an outdated idea that there should be a way to have the exception trigger a SIGFPE, and that modern processors don't do that.
On Intel, you can clear the exception mask for the INVALID FP exception in mxcsr and get a SIGFPE whenever an operation raises INVALID [assuming, again, a strict model to ensure the optimizer respects fp exceptions]. I don't know how, or if, AArch64 can enable hardware traps on floating-point exceptions.
You need to enable floating point trapping. It's trivial, but it's not on by default.
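For example, on Linux with glibc on x86-64 (feenableexcept is a glibc extension, not standard C), something like this should make the sin(INFINITY) call die with SIGFPE instead of quietly returning a NaN:

#define _GNU_SOURCE             /* feenableexcept is a glibc extension */
#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Unmask the "invalid" exception: the next invalid operation traps
       and the process gets SIGFPE instead of quietly producing a NaN. */
    feenableexcept(FE_INVALID);

    volatile double x = INFINITY;   /* volatile so the call isn't folded at compile time */
    double a = sin(x);              /* domain error, SIGFPE raised here */
    printf("%f\n", a);              /* not reached */
    return 0;
}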
Although, in that case, this function would still be pure as far as Haskell is concerned, because its output depends only on its input. Interrupts/exceptions are outside the scope of the type system as far as an individual function goes.
errno can be set upon failure, and also the behaviour is dependent on the global floating point environment (although passing this around with you would be rather cumbersome)
Calling Rust functions from Haskell FFI:
https://engineering.iog.io/2023-01-26-hs-bindgen-introductio...