> I'm interested in general purpose programming, so as useful and interesting as your explanation of synchronous languages and TLA+ are, they are not relevant to me.
There's nothing non-general-purpose in that approach. See, e.g., the front-end language Céu[1], by the group behind Lua (I think). The short video tutorial on Céu's homepage can give you a good sense of the ideas involved (esp. with regard to effects) and their very general applicability. I find that just as the functional approach is natural for data transformation, the synchronous approach is natural for composing control structures and interacting with external events. I think it's interesting to contrast that language with Elm, which targets the same domain but uses the PFP approach. The synchronous approach in Céu is imperative (there are declarative synchronous languages, like Lustre, that feel more functional) and allows mutation, but in a very controlled, well-understood way. The synchronous model is very amenable to formal reasoning, and has had great success in industry.
It's just that hardware and embedded software have always been decades ahead of general-purpose software when it comes to correctness and verification, simply because the cost difference between discovering bugs in production and discovering them in development has always been very clear to those fields (and very big to boot). There have been several attempts at general-purpose GALS languages (see SystemJ[2], a GALS JVM language, which seems to be a research project that has since gone defunct). OTOH, I believe most large enterprises would also consider Haskell not to be production-quality just yet.
Also, I believe that spending a day or two (that's all it takes; it's much simpler than Haskell) to learn TLA+ would at least get you out of the typed-functional frame of mind. Not that there's anything wrong with the approach (aside from a steep learning curve and a general distaste for it in the industry), but I am surprised to see people who are into typed pure FP come to believe that it is not only the best but the only approach to writing correct software when, in fact, it is not even close to being the most common one. In any event, TLA+ is very much a general-purpose language (it's just not a programming language), and it will improve your programs regardless of the language you use to code them: it is specifically designed to be used alongside a proper programming language (it is used at Amazon, Oracle, Microsoft and more for large, real-world projects). What's great is that it helps you find deep bugs regardless of the programming language you're using, it's very easy to learn, and I find it to be a lot of fun.
> I am interested, though, in your thoughts on effects, monads and continuations.
Hmm, I’m not too sure what more I can add. Any specific questions? Basically, anything that a language chooses to define as a side effect (and obviously IO, which is “objectively” a side effect) can be woven into a computation as a continuation. The computation pauses; the side effect occurs in the “world”; the computation resumes, optionally with some data made available by the effect. Continuations naturally arise from the description of computation as a process in all exact computational models, but in PFP computation is approximated as a function, not as a continuation. To mimic continuations, and thus interact with effects, a PFP language may employ monads, basically splitting the program/subroutine into functions that compute between consecutive “yield” points, with the monad’s bind serving as the effect. Because such languages insist on the function abstraction, with a subroutine returning just a single value, composing multiple monads can be challenging, cumbersome and far from straightforward. Languages that aren’t so stubborn may have a subroutine declare (usually when the language is typed) a normal return value, plus multiple special return values whose role is to interact with the continuation’s scope. An example of such a typed event system is Java’s checked exceptions. A subroutine’s return value interacts with its caller in the normal fashion, while the declared exceptions interact directly with the continuation’s scope (which can be anywhere up the stack). This normally results in a much more composable pattern, and one that is simpler for most programmers to understand.
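To make the monad part concrete, here’s a minimal Java sketch (my own illustration, not something from the thread) using CompletableFuture, whose thenCompose plays roughly the role of a monad’s bind: the subroutine is chopped into lambdas that compute between the “yield” points, and the library weaves the effect in between.

    import java.util.concurrent.CompletableFuture;

    public class BindSketch {
        public static void main(String[] args) {
            CompletableFuture
                .supplyAsync(() -> "user-42")          // segment before the first "yield" point
                .thenCompose(id ->                     // bind: the (asynchronous) effect occurs here
                    CompletableFuture.supplyAsync(() -> "profile of " + id))
                .thenAccept(System.out::println)       // segment after the effect
                .join();                               // wait so the demo prints before exiting
        }
    }

And a minimal sketch of the checked-exceptions pattern (method names are hypothetical): the normal return value flows to the direct caller, while the declared exception interacts directly with a handler installed further up the stack.

    import java.io.IOException;

    public class CheckedEffectSketch {
        // The normal return value goes to the direct caller; the declared
        // IOException is a typed, special return path aimed at whatever
        // handler is installed up the stack.
        static String readConfig() throws IOException {
            throw new IOException("config file missing"); // the "effect" fires
        }

        static String buildGreeting() throws IOException {
            // This intermediate caller consumes the normal return value but
            // merely re-declares the exception, letting it pass through.
            return "Hello, " + readConfig();
        }

        public static void main(String[] args) {
            try {
                System.out.println(buildGreeting());
            } catch (IOException e) {
                // The handler, two frames up, is the continuation's scope
                // that the effect interacts with directly.
                System.out.println("fallback: " + e.getMessage());
            }
        }
    }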
> Does your notion of "continuation" require threads? If so, Python fails to have "continuations", right?
"My" notion of continuation requires nothing more than the ability of a subroutine to block and wait for some external trigger, and then resume. Languages then differ in the level of reification. Just as you can have function pointers in C, but that reification is on a much lower level than in, say, Haskell or Clojure, so too languages differ in how their continuations are reified. So, a language like Ruby, is single-threaded and does not reify a continuation at all (I think). You can't have a first-class object which is a function blocked, waiting for something. Python, I think, has yield, which does let you pass around a subroutine that's in the middle of operation, and can be resumed. In Java/C/C++ you can reify a continuation as a thread (inefficient due to implementation). In Go you can do that only indirectly, via a channel (read on the other end by a blocked lightweight thread). In Scheme, you can have proper reified continuations with shift/reset (and hopefully in Java, too, soon, thanks to our efforts).
[1]: http://ceu-lang.org/
[2]: http://dl.acm.org/citation.cfm?id=1823324