
C sequence points are arbitrarily imposed by imperial decree in the standard; they are not linked to any evaluation causality.

Dependency-driven de facto evaluation orders have always existed in C though: there are orderings you can deduce even where there is no sequence point.

The standard spells out that expressions like i = i + i are okay, but actually it's not necessary to do so. The assignment cannot take place until the assigned value is known. The assigned value is not known until the + is evaluated and the + cannot be evaluated until the operand values are retrieved. Therefore, the retrieval of the prior value of i necessarily precedes the movement of the new value into i.

This dependency ordering, rather than sequence points, is a bit like monadic sequencing.




> The assignment cannot take place until the assigned value is known.

Of course it can. It's retrieving the value out of the lvalue that cannot be done, and even then you can stretch that delay further with the "as-if" rule. Haskell does something similar with its lazy evaluation, IIRC.


> C sequence points are arbitrarily imposed [and] not linked to any evaluation causality.

That’s exactly my point. Writing g >>= \x -> h >>= \y -> f x y is like writing (x = g(), y = h(), f(x,y)): an arbitrary ordering between the computations g and h that the programmer is forced to choose. Whereas f <$> g <*> h is like f(g(), h()): no ordering between g and h is imposed by the applicative model itself, much like the absence of a sequence point between sibling arguments in C.



