> Not guaranteed STM. If you do IO in Clojure's STM, you just get a runtime error
Two things. 1/ "Guaranteeing" effects has little to do with PFP. 2/ There's no data to support that this level of guarantees has any effect on program quality. I'm all in favor of effect systems (though not PFP) because they're worth a try; but there's a long way to go from "interesting" to "it actually works!", especially if you have no data to support this.
> Data of this sort is extremely expensive to collect reliably.
Fine, but the alternative is to use an unproven, negligibly adopted (Haskell industry adoption rates are between 0.01-0.1%), badly tooled language that requires a complete paradigm shift, on faith and enthusiasm alone. I don't need, or want, to prove that Haskell isn't effective (TBH, I really wish Haskell, or any other novel approach, did have some big gains); it is Haskell's proponents who need to support their claims with at least some convincing evidence.
> Is that the one?
I don't remember, but what does it matter? Again, I don't want to prove Haskell's ineffectiveness; it's people who want to convince others to use Haskell that should collect some evidence in its favor.
I started off by explaining why Haskell needs monads and why they don't add power, and later why PFP and restricting what code can do are orthogonal. Peaker then spoke of tangible gains, and I said that something is not tangible if you can't show it, and then you nudged the thread in the direction of your pet project which, apparently, is patronizing people.
More to the point, if you claim Haskell (or, in particular, monads) has theoretical benefits, you need to be able to explain them (and restricting effects is not a theoretical explanation, as it doesn't require monads); if your explanation is "this has benefits in practice" then you're really claiming empirical, rather than theoretical, benefits, but then you need to be able to support those. If you go around saying monads have theoretical benefits but, when debated, claim empirical benefits and then don't support those, expect to be called out for selling snake oil (and just to be clear, my point can be summarized as follows: Haskell takes a very clear, very opinionated theoretical approach[1], which is beautifully elegant but is not theoretically better or worse, just very different, with its particular pros and cons. Empirically, I claim, Haskell has not yet shown significant benefits).
[1]: Subroutines as functions; mutation as effect; HM types (+ typeclasses). Type system aside, there are obviously many alternatives (other than "classical" imperative languages). For example, languages that require full verification for safety-critical realtime code often employ the synchronous model. In that model, each subroutine isn't a function (but a continuation), but the program itself can be viewed as a function from one program state to the next, and mutation isn't a side effect, but is very much controlled (see https://who.rocq.inria.fr/Dumitru.Potop_Butucaru/potopEmbedd...). This is a model that lends itself very nicely to formal reasoning, and there are others.
> if your explanation is "this has benefits in practice" then you're really claiming empirical, rather than theoretical, benefits, but then you need to be able to support those
I disagree strongly with your position on this.
Peaker has clearly found, as I have, that Haskell is more effective for him. We have pointed out repeatedly what the benefits are (as well as pointed out the drawbacks). It would be simply impossible for the stated benefits not to be beneficial in practice. The only question is whether the benefits are outweighed by the drawbacks. If you have already paid the one-off cost of learning the gnarly corners of Haskell then those drawbacks are significantly diminished.
You must realise that your insistence on empirical research is an idiosyncrasy and that people make decisions on programming languages all the time without such research. They are not, in its absence, merely "guessing" which language to use. They are making a decision based on their understanding of their own needs and on the strengths of the languages under consideration.
Furthermore, I wouldn't trust any empirical research on languages any more than I would trust empirical research on cholesterol[1].
In the absence of convincing empirical research pointing in either direction I think people should be free to make informal claims that "Haskell is more reliable than Python and more productive than Ada" based on their own experience and the experience of their colleagues. It's all we've got to go on. It's not ideal, but it's also not wrong.
> It would be simply impossible for the stated benefits not to be beneficial in practice. The only question is whether the benefits are outweighed by the drawbacks. If you have already paid the one-off cost of learning the gnarly corners of Haskell then those drawbacks are significantly diminished.
Provided that you think that the only significant drawback is the learning curve. I think that the PFP abstraction itself is a drawback, and that you can get most of Haskell's benefits (leaving aside their real-world value for a moment) without it. In particular, I think that you cannot point at any real-world benefits of the Haskell approach over, say, the OCaml approach, and that's even before applying things like effect systems to OCaml.
> people make decisions on programming languages all the time without such research
Of course, but if they do, they cannot claim real benefits that aren't real. The reason some people use Haskell is that it fits well with how they like to think about and write code; the reason many people don't use Haskell is because it doesn't fit with their preferred style, and because there is no compelling evidence for why they should even try to change their methodology.
My only "insistence" is that you either claim actual benefits and present empirical data to support it, or don't present empirical data but claim only personal preference. What you don't get to do is say, "Haskell leads to code with significantly fewer serious bugs" while at the same time not show any evidence that it does. The reason you don't get to do that is that such a claim is one with serious (theoretical and financial) implications, and strong claims require strong evidence (or at least some more convincing evidence than what we have).
> Furthermore, I wouldn't trust any empirical research on languages any more than I would trust empirical research on cholesterol
OK, but you're saying that I should go full vegan based on even less than that.
> It's not ideal, but it's also not wrong.
How do you know it's not wrong?
But let me refine this. I agree that it's very likely that "Haskell is more reliable than Python and more productive than Ada", but that's not the real argument. The real argument is that Haskell is a lot more reliable than Python and a lot more productive than Ada. I don't see how you can possibly claim that based just on personal experience.
But let me refine this further: there's personal experience and personal experience. There's personal experience based on measuring actual project costs and comparing them -- even though projects are not exactly comparable -- now that's not ideal but not wrong, and there's personal experience based on gut feeling. I don't know how you can say that that's not wrong. Also, there's collected personal experience from hundreds of projects in many domains and various sizes -- that's not ideal but not wrong -- and there's personal experience from a handful of projects, nearly all quite small, in one or two domains. I don't know how you can say that's not wrong (unless you qualify the domain, which you don't).
At this point in time, all we can say about Haskell is this: some people greatly enjoy Haskell's programming paradigm; some people report possibly significant but not big gains in the handful of medium-to-large production projects where the language has been used. So far the approach is showing some (though not great) promise and requires further consideration.
> Provided that you think that the only significant drawback is the learning curve.
No, you misread me. I acknowledge other significant drawbacks, such as immaturity of tooling and infrastructure.
> I think that the PFP abstraction itself is a drawback, and that you can get most of Haskell's benefits (leaving aside their real-world value for a moment) without it.
OK, that would be great! I am genuinely interested in understanding how to do that. I would love to see PFP as a drawback and obtain its benefits without requiring its rigors. So far I have failed to understand your ideas about how to do that and I can only continue to see PFP as a massive boon.
> In particular, I think that you cannot point at any real-world benefits of the Haskell approach over, say, the OCaml approach, and that's even before applying things like effect systems to OCaml.
(One of) the real-world benefit(s) is that I can write a substantial part of a program and know from inspecting only a single line (its type signature) what effects it performs. How can OCaml give me that benefit?
> What you don't get to do is say, "Haskell leads to code with significantly fewer serious bugs" while at the same time not show any evidence that it does. The reason you don't get to do that is that such a claim is one with serious (theoretical and financial) implications, and strong claims require strong evidence (or at least some more convincing evidence than what we have).
I think that says more about how you interpret informal comments on the internet than it does about those making the comments.
> > Furthermore, I wouldn't trust any empirical research on languages any more than I would trust empirical research on cholesterol
> OK, but you're saying that I should go full vegan based on even less than that.
Interesting. Where did I say you should go (the equivalent of) "full vegan"?
> > It's not ideal, but it's also not wrong.
> How do you know it's not wrong?
At least, it is not known to be wrong.
> But let me refine this.
[... snipped useful elucidation ...]
> At this point in time, all we can say about Haskell is this: some people greatly enjoy Haskell's programming paradigm; some people report possibly significant but not big gains in the handful of medium-to-large production projects where the language has been used. So far the approach is showing some (though not great) promise and requires further consideration.
Entirely agreed. I think my gripe with you at this point is that you read too much into people's informal claims and cause unnecessary aggravation by derailing threads. It's quite clear that you could actually contribute constructively, so I wish you would. Please can you explain in detail how I can get the benefits of PFP without its drawbacks?
> I would love to see PFP as a drawback and obtain its benefits without requiring its rigors.
> (One of) the real-world benefit(s) is that I can write a substantial part of a program and know from inspecting only a single line (its type signature) what effects it performs. How can OCaml give me that benefit?
That's a property, not a real-world benefit. It's like saying that one of the real-world benefits of Haskell is that its logo is green. In any case, I don't know about OCaml, but in Java I just click a button and get the call tree for the method (or, inversely, the reverse tree of all methods eventually calling printf). More to the point, see some of Oleg Kiselyov's work on type systems for continuations here: http://okmij.org/ftp/continuations/
> I think that says more about how you interpret informal comments on the internet than it does about those making the comments.
I think that you should decide whether this is a serious discussion or grandstanding.
> Where did I say you should go (the equivalent of) "full vegan"?
Maybe you didn't say that I should, but you did say that it's better to be vegan (i.e. make a very significant "lifestyle" change without any evidence of its effectiveness).
> At least, it is not known to be wrong.
That's true. But I wouldn't go around telling people that being vegan has great health benefits if all we know is that it's not been found to kill you.
> I would love to see PFP as a drawback and obtain its benefits without requiring its rigors. So far I have failed to understand your ideas about how to do that and I can only continue to see PFP as a massive boon.
OK, but first, a few things: 1/ I don't know if by "rigors" you meant difficulties or rigorousness, but if the latter, then I don't see why you conflate rigor with the pure-functional abstraction. Most formally verified, safety-critical software is written in languages that are far more rigorous than Haskell, yet are not pure-functional. Which leads us to 2/ these are not "my ideas"; if correctness is your goal (as it seems to be), most languages guaranteeing correctness do not espouse the PFP abstractions. Haskell, Coq, Idris and Agda are used far less than other approaches to ensuring software correctness. Finally, 3/ I'd like to be careful when I say "benefits", because we don't know whether they are true benefits, neutral or even detrimental to software at large. All I can say is that, in this context, when I say "benefits" I mean things that I (and you) believe to be positive and see as potentially advantageous in the "real world".
Now, I will give you two examples (of languages used more than Haskell/Coq etc.) of "correct" languages, both of them very rigorous in the sense of being completely formal, yet they do not suffer from PFP's downsides, mainly by being measurably much easier to learn/teach/adopt. They are not generally applicable, but neither is Haskell. The first is the set of synchronous languages, now used by the industry to design safety-critical realtime software, as well as a lot of hardware. Instead of PFP, it relies on what's known as the "synchronous hypothesis". It has been proven, over three decades, to be an effective, practical method of writing verifiably correct software by "plain" engineers in hundreds of critical real-world systems. You can read more about it here[1]. A generalization of the approach is called Globally Asynchronous, Locally Synchronous, or GALS, and I believe it has the potential to be a great, more widely applicable way of writing software that lends itself to careful reasoning.
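To make the "synchronous hypothesis" a bit more concrete, here's a toy sketch in Python (all names and structure are mine, not taken from any real synchronous language): time advances in discrete ticks, and at each tick the outputs and next state are computed atomically from the inputs and the previous state, so there is state, but it changes only at well-defined instants.

```python
# A toy "synchronous" program (illustrative only): each tick, the outputs
# and next state are computed atomically from inputs + previous state.
def edge_detector(state, signal):
    """Emit True exactly at the tick where `signal` rises False -> True."""
    output = signal and not state
    return signal, output          # (next state, output for this tick)

def run(program, inputs, init_state):
    """Drive the program one tick per input sample."""
    state, outputs = init_state, []
    for sample in inputs:
        state, out = program(state, sample)
        outputs.append(out)
    return outputs

print(run(edge_detector, [False, True, True, False, True], False))
# [False, True, False, False, True]
```

Because every tick is a pure state-to-state step, programs of essentially this shape are what make the synchronous languages so amenable to model checking.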
The second (you probably saw it coming) is TLA+. It's not a programming language, so I won't compare it to Haskell but to Coq. Unlike Coq, TLA+ does not rely on PFP or Curry-Howard (neither do other verification tools, like Isabelle), and goes a step further in not being typed at all. It is not functional yet fully mathematical ("referentially transparent"), and its main advantage is that it has the same "proving strength" as Coq when it comes to verifying algorithms[2], while taking days to learn instead of months and not requiring any more mathematical background than any engineer already has. I guess that the answer to "how?" would be "in a manner bearing a lot of resemblance to the synchronous hypothesis".
There are, of course, other formal approaches (like CSP), but synchronous programming in particular has had a lot of success in the industry.
If you want to know about (typed) monads vs. continuations, and their relationship to typed effects, I'll refer you again to my blog post on the subject: http://blog.paralleluniverse.co/2015/08/07/scoped-continuati... and to Oleg Kiselyov's work, which I've linked to above.
[2]: Coq may be more powerful when proving general mathematical theorems, but Coq was designed as a general theorem prover, while TLA+ is a language for verifying software in particular.
Firstly, and briefly, I don't agree with your approach to epistemology. I think we're never going to agree there. Let's just agree to be mutually antagonistic on that front so we can get to the important issue, which is improving software development.
Secondly, I'm interested in general purpose programming, so as useful and interesting as your explanation of synchronous languages and TLA+ are, they are not relevant to me.
I am interested, though, in your thoughts on effects, monads and continuations. I've read everything you've written on the topic including your code on Github (and much of what Wadler and Oleg have written) but I'm afraid I'm no closer to understanding what you're getting at.
Does your notion of "continuation" require threads? If so, Python fails to have "continuations", right?
> I'm interested in general purpose programming, so as useful and interesting as your explanation of synchronous languages and TLA+ are, they are not relevant to me.
There's nothing non-general-purpose in that approach. See, e.g., the front-end language Céu[1], by the group behind Lua (I think). The short video tutorial on Céu's homepage can give you a good sense of the ideas involved (esp. with regard to effects), and their very general applicability. I find that just as the functional approach is natural for data transformation, the synchronous approach is natural for composing control structures and interaction with external events. I think it's interesting to contrast that language with Elm, which targets the same domain but uses the PFP approach. The synchronous approach in Céu is imperative (there are declarative synchronous languages, like Lustre, that feel more functional) and allows mutation, but in a very controlled, well-understood way. The synchronous model is very amenable to formal reasoning, and has had great success in the industry.
It's just that hardware and embedded software has always been decades ahead of general-purpose software when it comes to correctness and verification, simply because the cost difference between discovering bugs in production and bugs in development has always been very clear to them (and very big to boot). There have been several attempts at general-purpose GALS languages (see SystemJ[2], a GALS JVM language, which seems like a recent research project gone defunct). OTOH, I believe Haskell would also be considered by most large enterprises to not be production-quality just yet.
Also, I believe that spending a day or two (that's all it takes -- it's much simpler than Haskell) to learn TLA+ would at least get you out of the typed-functional mindset. Not that there's anything wrong with the approach (aside from a steep learning curve and general distaste in the industry), but I am surprised to see people who are into typed pure FP come to believe that it is not only the best, but the only approach to writing correct software, when, in fact, it is not even close to being the most common one. In any event, TLA+ is very much a general purpose language -- it's just not a programming language -- and it will improve your programs regardless of the language you use to code them: it is specifically designed to be used alongside a proper programming language (it is used at Amazon, Oracle, Microsoft and more for large, real-world projects). What's great is that it helps you find deep bugs, it's very easy to learn, and I find it to be a lot of fun.
> I am interested, though, in your thoughts on effects, monads and continuations.
Hmm, I'm not too sure what more I can add. Any specific questions? Basically, anything that a language chooses to define as a side effect (and obviously IO, which is "objectively" a side effect) can be woven into a computation as a continuation. The computation pauses; the side effect occurs in the "world"; the computation resumes, optionally with some data available from the effect. Continuations naturally arise from the description of computation as a process in all exact computational models, but in PFP computation is approximated as a function, not as a continuation. To mimic continuations, and thus interact with effects, a PFP language may employ monads, basically splitting the program/subroutine into functions that compute between consecutive "yield" points, with the monad's bind serving as the effect. Due to such languages' insistence on the function abstraction, where the subroutine returns just a single value, composing multiple monads can be challenging, cumbersome and far from straightforward. Languages that aren't so stubborn may choose to have a subroutine declare (usually if the language is typed, that is) a normal return value plus multiple special return values whose role is to interact with the continuation's scope. An example of such a typed effect system is Java's checked exceptions. A subroutine's return value interacts with its caller in the normal fashion, while the declared exceptions interact with the continuation's scope (which can be anywhere up the stack) directly. This normally results in a much more composable pattern, and one that is simpler for most programmers to understand.
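To illustrate the "functions between yield points, with bind as the effect" idea, here's a toy Python sketch (all names are mine; this is a cartoon of the mechanism, not how Haskell's IO is actually implemented): an action is just a thunk that performs its effect only when run, and bind chains the next function onto the previous action's result.

```python
# A cartoon of monadic effect-weaving (illustrative only).
def pure(x):
    return lambda: x                      # an action with no effect

def bind(action, f):
    # Run `action`, feed its result to `f`, then run the action `f` returns.
    return lambda: f(action())()

log = []

def put(msg):
    def action():
        log.append(msg)                   # the "effect"
    return action

prog = bind(put("read config"),
            lambda _: bind(put("open socket"),
                           lambda _: pure("ready")))

# Building `prog` performed no effects; running it performs them in order:
result = prog()
print(log, result)   # ['read config', 'open socket'] ready
```

The nested lambdas are exactly the "functions between consecutive yield points"; each one only exists to receive the previous effect's result and decide what happens next.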
> Does your notion of "continuation" require threads? If so, Python fails to have "continuations", right?
"My" notion of continuation requires nothing more than the ability of a subroutine to block and wait for some external trigger, and then resume. Languages then differ in the level of reification. Just as you can have function pointers in C, but that reification is at a much lower level than in, say, Haskell or Clojure, so too languages differ in how their continuations are reified. So a language like Ruby is single-threaded and does not reify a continuation at all (I think). You can't have a first-class object which is a function blocked, waiting for something. Python, I think, has yield, which does let you pass around a subroutine that's in the middle of operation and can be resumed. In Java/C/C++ you can reify a continuation as a thread (inefficient due to implementation). In Go you can do that only indirectly, via a channel (read on the other end by a blocked lightweight thread). In Scheme, you can have proper reified continuations with shift/reset (and hopefully in Java, too, soon, thanks to our efforts).
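For the Python case, a generator is exactly that partial reification (toy example, all names mine): a first-class subroutine frozen mid-operation that the caller can pass around and resume, supplying the result of each "external" effect as it does.

```python
# A generator as a partially reified continuation (illustrative only).
def subroutine():
    x = yield "need input"      # suspend; resumed later with a value for x
    y = yield f"got {x}"        # suspend again
    return x + y                # final result, delivered via StopIteration

def drive(gen, a, b):
    trace = [next(gen)]              # run up to the first yield
    trace.append(gen.send(a))        # resume with `a`, run to the next yield
    try:
        gen.send(b)                  # resume with `b`; generator returns
    except StopIteration as stop:
        trace.append(stop.value)     # the subroutine's return value
    return trace

print(drive(subroutine(), 1, 2))     # ['need input', 'got 1', 3]
```

Note that the suspended `gen` object can be stored, passed between functions, and resumed by whoever holds it, which is the "first-class object which is a function blocked, waiting for something" that the comment above describes.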