That page is just sad when compared to C/C++/Java.
When Haskell is pushing trillions (downloads/cash/views/ads) every year, call me.
Furthermore:
> Bad science. The Haskell community generate an unprecedented amount of bad science, setting out to draw the conclusion that Haskell is great and using every trick in the book to make that conclusion seem feasible. In many cases, it requires quite some effort to uncover the truth.

> For example, Saynte published results for "naively" parallelized ray tracers and concluded that Haskell was the easiest language to parallelize efficiently, requiring only a single line change. This deception was perpetrated by starting with a "serial" version that had already been extensively reworked in order to make it amenable to parallelization and then disabling optimizations only for the competitors (and even adding their compile times in one case!) in order to make Haskell appear competitive. Moreover, that reworking could only have been done on the basis of extensive benchmarking and development.

> Even the academics publishing Haskell research are up to it. For example, in the recent paper Regular Shape-polymorphic parallel arrays Manuel Chakravarty et al. artificially close the gap between Haskell and C by benchmarking only cache ignorant algorithms (that spend all of their time stalled on unnecessary cache misses), incorrectly states that such algorithms are "widely used", describes the one-line change required to parallelize the C code as "considerable additional effort" despite the fact that both the serial and parallel C solutions are substantially more concise than their Haskell counterparts, refused to provide me with their C code so that I could try to verify their findings (until I published this blog post, their code is now here) and cherry picked Haskell's highest performing results as a function of the number of threads (which peaks at an unpredictable number before seeing performance degradation).

> When I voiced my concerns, Don Stewart abused his moderator status on the forum by deleting my criticisms. To date, these flaws in the paper have not been addressed.

Source: http://flyingfrogblog.blogspot.ch/2010/05/why-is-haskell-use...
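For the curious: whatever you make of those benchmark disputes, the "single line change" mechanism itself is real. Here is a minimal sketch, assuming GHC with the `parallel` package; `traceRay` is a hypothetical stand-in for the actual per-pixel work, not code from either benchmark:

    import Control.Parallel.Strategies (parListChunk, rdeepseq, withStrategy)

    -- Hypothetical stand-in for a real ray tracer's per-pixel work.
    traceRay :: (Int, Int) -> Double
    traceRay (x, y) = fromIntegral (x * y) / 1e6

    -- Serial version: one result per pixel coordinate.
    render :: [(Int, Int)] -> [Double]
    render = map traceRay

    -- The "one line" parallel version: the same pipeline, but the
    -- result list is evaluated in 256-element chunks across cores.
    renderPar :: [(Int, Int)] -> [Double]
    renderPar = withStrategy (parListChunk 256 rdeepseq) . render

Compiled with `ghc -threaded` and run with `+RTS -N`, the runtime distributes those chunks over available cores. Whether the serial baseline was rigged to make that one-liner look good is exactly what the quote above is arguing about.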
Are you seriously referencing Harrop to support your argument? You do realize he is a troll, right? Not in the "people disagree with him" sense; an actual, original-definition troll. He jumps from language to language as people learn to ignore him, and has gone through trolling every major functional programming language except F#, which he currently says is perfect while OCaml is shit. Before that, OCaml was perfect and Haskell was shit. Before that, Haskell was perfect and Lisp was shit. Google his name; there are literally thousands of messages from him in mailing-list archives demonstrating this.
Guess Haskell knows a thing or two about misrepresentation.