Haskell functions perform side effects by returning values of the IO type, with the boilerplate plumbing hidden by monads and do-notation. "main" in Haskell has a return type of "IO ()" by default, and the "IO" value it returns is executed by the runtime.
The end result in this case is something that just looks and feels completely imperative.
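A minimal example (a sketch, nothing more):

main :: IO ()
main = do
  putStrLn "What's your name?"   -- each line is just an IO value,
  name <- getLine                -- sequenced by do-notation
  putStrLn ("Hello, " ++ name)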
But if you were to try to call, say, the "rmdir" function inside another function that didn't have an IO return type, you'd get a compile error. (More specifically, you could technically call the function; you just couldn't return the "IO" value as a result, so it could never perform any actions.)
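For example (using System.Directory.removeDirectory as a stand-in for "rmdir"):

import System.Directory (removeDirectory)

-- Rejected at compile time: couldn't match expected type 'Int'
-- with actual type 'IO ()'
broken :: String -> Int
broken path = removeDirectory path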
Really, the only reason Javascript's future looks very bright is that it has a monopoly - it's the only language that runs in the browser.
Javascript is a great language for what it is, and ES6 fixes a lot of the warts - but it's still a rather mediocre language compared to many of the other options out there.
It's a great quick-n-dirty language for hacking things together, but it's less ideal for more significant projects. (You certainly can maintain large projects in Javascript, but there are much better languages for this task).
> ASM.js will soon make V8 _much_ faster
ASM.js is not Javascript, and it won't make Javascript run any faster. It's a compile target that happens to resemble a syntactically valid subset of Javascript for backwards compatibility purposes.
ASM.js can only really be used as a compile target for unmanaged languages, and acts as a bytecode that compiles down to pure assembly. It's not something that any Javascript developer would write by hand.
> The idea that compile-time checks is much better than runtime checks is also not obvious (Java's and other commercial languages nonsense about type safety aside - Java is no more safe than CL).
Runtime checks will only trip if you happen to hit a code path that introduces an incorrect type. This may only occur in some extremely rare scenario that you never pick up in testing.
Compile time type checking allows you to prove that your program is definitely type safe, with 100% certainty.
>I am not sure that this assertion is true for user-defined ADTs
Depends what you mean.
It's safe in the sense that you will never get a type error.
In Haskell terms, it's possible to write a function that's not total (e.g. not implemented for all possible data constructors of a type), which can then crash or fail to terminate. For example, "head" will crash on an empty list (duh).
However, this is easy to avoid, and type safety in Haskell always holds true, as do all the other guarantees the compiler makes (like referential transparency).
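For example, the usual way around the partial "head" is to push the empty case into the type:

safeHead :: [a] -> Maybe a
safeHead []    = Nothing   -- the empty case is now explicit...
safeHead (x:_) = Just x    -- ...so callers are forced to handle it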
Well yes and no. GHC does a pretty good job of warning you when you're missing a potential option in a pattern match, and it's generally pretty easy to write code that avoids partial functions as a result. But still, you're right - it's not 100% safety, and will only give you a warning.
There's no reason to throw the baby out with the bathwater though, since 90% safety is a hell of a lot better than 0% safety.
Personally I'm keen to see mainstream languages adopt better totality checking for that exact reason - my fantasy language would enforce that `main` is always a total function*
*(For this fantasy language, I'd probably still allow infinite recursion, since the halting problem means termination can't be checked in general without introducing a lot of pain to prove that your code actually terminates, and that level of totality checking is often counter-productive for general-purpose code)
OK, cool. Of course, almost no languages check for totality. Agda is one of the few that does it by default. I think Rust makes you complete all pattern matches too.
However, if you do get a pattern match failure, one of two things is true:
1. You can easily fix it by accounting for all patterns, or adding a default match (see the sketch after this list)
2. Your program model is conceptually broken and you should probably find a new model that accounts for all possible patterns.
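A minimal sketch of case 1, using a toy Color type (compile with -fwarn-incomplete-patterns to get the warning):

data Color = Red | Green | Blue

-- Without the last equation, GHC warns that Blue is unhandled.
describe :: Color -> String
describe Red   = "warm"
describe Green = "cool"
describe _     = "other"   -- default match covers Blue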
Strictly speaking, "totality" is asking for more than just exhaustiveness in pattern matching (all manner of languages let you check exhaustiveness). For totality, you also have to prove termination for all inputs, which means your language (or sub-language) is not Turing complete.
Though these days I've been saying "Turing complete" is a bug, not a feature, provided you can accomplish your aims without it.
> I think Rust makes you complete all pattern matches too.
It does, although for Option and Result, there's .unwrap(), which simply exits the program (through fail!()) on None/error. The fact that you can do this is practical, although it could potentially train bad habits.
ghc has an RTS option to locate errors of that nature:
-xc
(Only available when the program is compiled for profiling.) When an exception is raised in the program, this option causes a stack trace to be dumped to stderr.
This can be particularly useful for debugging: if your program is complaining about a head [] error and you haven't got a clue which bit of code is causing it, compiling with -prof -fprof-auto and running with +RTS -xc -RTS will tell you exactly the call stack at the point the error was raised.
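For example (a sketch - "Crash.hs" is a made-up file name):

-- Crash.hs: dies with "Prelude.head: empty list"
main :: IO ()
main = print (head ([] :: [Int]))

-- Compile with profiling, then run with the -xc RTS flag:
--   ghc -prof -fprof-auto Crash.hs
--   ./Crash +RTS -xc -RTS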
Not really. The point of Haskell is not to avoid having side effects. The point of Haskell is to allow code to be referentially transparent - this makes it both easier to reason about as a developer, and easier for the runtime to optimise.
Yes, that is the altar upon which you sacrifice the ability to write print statements to do debugging, and lose the ability to reason about order of execution. But does it really result in more performant code? In every benchmark I've ever seen, more practical languages like Ocaml have come out on top.
Haskell has a slight, but consistent, edge over Ocaml in most of the "benchmarks game" tests, so it's certainly not true to say that Ocaml beats Haskell in all benchmarks (although there could certainly be other benchmarks where it does). In any case, Haskell is very performant and is competitive with any other mainstream language.
You can write print statements to do debugging (with Debug.Trace), and in practice it's not very hard to work IO into your code when you need it (even if only for temporary debugging or development). Crucially, however, it's much harder to accidentally work IO into your code. The few cases where I really miss print statements "for free" are vastly outweighed by the many cases in impure languages where I'm accidentally mismanaging my mutable state.
Whether it results in more performant code? In some cases yes (the restrictions make it much easier to prove certain compiler optimizations), but that's not really the point. Referential transparency is about making your code more expressive, and easier to reason about, to design, and to safely tweak.
Haskell performance is very good when written by people who know how the compiler works, and know the bytecode they want generated. I.e., if you rewrite a recursive function in a slightly unintuitive way and apply the right strictness annotations, it will compile down to the same bytecode as a for-loop in C.
Idiomatic Haskell is not generally as fast as mutable C/Java/etc. Creating/evaluating thunks is not fast and immutable data structures often result in excess object creation. When you need them, there is no real substitute for unboxed mutable arrays, something Haskell does NOT make easy.
Haskell is one of my favorite languages, the performance story just isn't quite what I want it to be. I do, however, think that there is plenty of room for improvement, i.e. there is no principled reason Haskell can't compete.
This is exactly where I'm at. My biggest problem is that wrapping non-persistent data structures written in C/C++ never seems to come out right in Haskell. You often have to write them in the IO monad, which is the absolute last thing you want for an otherwise general purpose data structure. I think there may be some solution here using linear types, which enforce at compile time that a value is referenced only once. This would let you avoid being forced to guarantee persistence when all you care about is speed.
This argument may seem more abstract than what you mention, but in fact it gets to the very heart of why there aren't good unboxed mutable arrays in haskell. In truth, there are. You can convert Immutable Vectors (which are lists with O(1) indexing but no mutation) into Mutable Vectors in constant time using unsafeThaw. The problem is that your code is no longer persistent, and you've risked introducing subtle errors. My biggest problem is that the haskell community seems to look at non-persistent data structures as sacrilegious. As a scientific programmer, that makes me feel like maybe learning haskell wasn't such a good investment after all. But on the bright side, functional programming is on the rise, and I'm confident that all my experience with Haskell will transfer well in the future.
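(To make the unsafeThaw point concrete, a sketch - the function below silently mutates its argument, which is exactly the kind of subtle error in question:)

import qualified Data.Vector as V
import qualified Data.Vector.Mutable as MV
import Control.Monad.ST (runST)

bumpHead :: V.Vector Int -> V.Vector Int
bumpHead vec = runST $ do
  mv <- V.unsafeThaw vec   -- O(1), but 'vec' must never be used again
  x  <- MV.read mv 0
  MV.write mv 0 (x + 1)
  V.unsafeFreeze mv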
Depends a lot on the libraries, too. I had to scrape a bunch of HTML recently, which I prefer to use XPath for; the library I used -- HXT, if I remember correctly, it was the horrible one that uses arrows -- made my program perform on par with Ruby, and when I benchmarked it, I found it was allocating about 2GB of data throughout the program, while parsing a document that was probably around 100KB.
I believe HXT uses the default representation of strings as lists of chars, instead of more efficient packed representations. This likely contributes to the excessive memory usage.
Sure. As a decidedly unseasoned Haskell user, however, I find it hard to sympathize with inefficient libraries for something as established as XML.
There may be other, faster libs that I don't know about, but I couldn't find them. I tried HaXml first (from which HXT is apparently derived), but the parser choked on my document and the author didn't come forward with a fix when I reported the problem (by email, the project isn't on Github). There is one called HXML, but I think it's dead. The TagSoup library might have worked, but I don't think so. It's not easy jumping into a new language and then coming up against library issues that prevent you from finishing your first project.
The "String problem" is definitely one of the most unfortunate parts of Haskell. Using a linked list of chars for a string is just laughable from a performance and resources standpoint. The good news is that the problem should be solved now: we have Data.Text for unicode strings, and Data.ByteString for binary/ASCII/UTF-8 strings. Both are very efficient and implement a robust API for common string operations. The bad news is that there are still far too many libraries that use the old crummy data type for strings, including much of the Prelude. And, I guess in the interest of simplicity, many beginner tutorials tend to use String as well. This is quite unfortunate, but it does seem to be changing: Aeson uses Data.Text, the ClassyPrelude ditches String almost entirely (keeping it for Show only), and in general most modern libraries avoid String.
The project wasn't that recent, so I don't quite remember, but I would have wanted something like dom-selector, and that one didn't come up in my searches for solutions.
It's interesting that XML libs have to invent operators and obnoxious syntax (like HXT's arrow usage, or coincidentally the fact that HXT's parser uses the IO type, which is just crazy talk). dom-selector seems to have the same problem. I prefer readable functions, not DSLs where my code suddenly descends into this magic bizarro-world of operator soup for a moment.
Lenses would make tree-based extraction easier, I think, although lenses aren't easy to understand or that easy to read. Tree traversal with lenses and zippers seems unnecessarily complicated to me.
In a scraper you just want to collect items recursively, and return empty/Nothing values for anything that fails a match: Collect every item that contains a <div class="h-sku productinfo">, map its h2 to a title and its <div class="price"> to a price, and then combine those two fields into a record. It's something that should result in eminently readable code, not just because it's a conceptually trivial task, but also because someday you need to go back to the code and remember how it works.
> I prefer readable functions, not DSLs where my code suddenly descends into this magic bizarro-world of operator soup for a moment.
Bizarro world of operator soup? I don't really follow you. That dom selector code just compiles down into functions itself. I don't see how anything could be any clearer than a css selector for selecting an html element.
The old situation with list processing, in which the decision to fold from the left or from the right can make a big performance difference, might be the fundamental example of this kind of problem. It is enough to make me think twice about the wisdom of defining lists recursively. It definitely doesn't feel "declarative", and that attribute is surely more important than elegant simplicity of implementation.
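The classic instance of the problem, for reference:

import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks across the whole list;
-- the strict foldl' evaluates as it goes and runs in constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- can exhaust memory on a large list
strictSum = foldl' (+) 0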
>Haskell performance is very good when written by people who know how the compiler works
I know nothing about how the compiler works, and my haskell code still easily outperforms my clojure code. The only optimizations I do are the same as anywhere else: profile and look at functions taking up too much time.
>and know the bytecode they want generated.
Bytecode is not involved. Machine code is, but I don't even know ASM to know what I want generated or if it is being generated that way.
>When you need them, there is no real substitute for unboxed mutable arrays, something Haskell does NOT make easy.
This is simply nonsense. Unboxed mutable vectors are trivial in haskell: https://hackage.haskell.org/package/vector-0.10.11.0/docs/Da... No, there is no substitute for using the right data types. Why do you think haskell or haskellers suggest using the wrong data types?
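For example, the whole of it (a minimal sketch):

import qualified Data.Vector.Unboxed.Mutable as UM

main :: IO ()
main = do
  v <- UM.replicate 10 (0 :: Int)  -- ten unboxed Ints, zero-initialised
  UM.write v 3 42
  x <- UM.read v 3
  print x                          -- 42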
My goal is "as fast as C (TM)". Clojure is not known for being a speed demon.
I didn't say you couldn't do arrays with Haskell, I said Haskell doesn't make it easy. Here are the actual array docs, BTW: http://www.haskell.org/haskellwiki/Arrays
Enjoy using C then. You suggested that haskell was bad because it was not fast enough. If "not as fast as C" is not fast enough, then virtually every language is not just bad, but much worse than haskell.
>I said Haskell doesn't make it easy
And I showed you that it is in fact trivially easy.
>Here are the actual array docs, BTW
That is a random, user-edited wiki page. I linked to the actual docs.
If "not as fast as C" is not fast enough, then virtually every language is not just bad, but much worse than haskell.
I agree. The only languages I've used that are remotely competitive for my purposes are static JVM languages (Java and Scala), Ocaml, and Julia for array ops. Haskell comes closer than many others, but just isn't there yet.
The docs you linked to are a 3rd party package marked "experimental". I'll also suggest that you are glossing over most of the difficulties in using them. It's trivially easy to call `unsafeRead`. It's not so easy to wrap your operations in the appropriate monad, apply all the necessary strictness annotations to avoid thunks, and properly weave this monad with all the others you've got floating around.
(That last bit is fairly important if you plan to write methods like `objectiveGradient dataPoint workArray`.)
Except scala and ocaml are both slower than haskell.
>The docs you linked to are a 3rd party package marked "experimental".
No it is not. What is the point of just outright lying?
>I'll also suggest that you are glossing over most of the difficulties in using them
I'll suggest that if you want people to believe your claim, then you should back it up. Show me the difficulty. Because my week 1 students have no trouble with it at all.
>It's not so easy to wrap your operations in the appropriate monad
You are literally saying "it is not easy to write code". That is like saying "printf" is hard in C because you have to write code. It makes absolutely no sense. Have you actually ever tried learning haskell much less using it?
>apply all the necessary strictness annotations to avoid thunks
All one of them? Which goes in the exact same place it always does? And which is not necessary at all?
>and properly weave this monad with all the others you've got floating around.
I don't know why you are responding so angrily. The page you linked to explicitly says "Stability experimental" in the top right corner.
I also don't know why you are behaving as if I dislike Haskell. I enjoy Haskell a lot, I just find getting very good performance to be difficult. You can browse my comment history to see a generally favorable opinion towards Haskell if you don't believe me.
I also gave you a concrete example of a reasonable and necessary task I found difficult: specifically, numerical functions which need to mutate existing arrays rather than allocating new ones, e.g. gradient descent. Every time I've attempted to implement such things in Haskell, it takes me quite a bit of work to get the same performance that Scala/Java/Julia/C gives me out of the box (or Python after using Numba).
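To be concrete about the kind of task I mean (a sketch - names and types are mine, and making it actually match C's speed is where the work starts):

import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Unboxed.Mutable as UM
import Control.Monad (forM_)

-- In-place update: params := params - rate * grads
gradientStep :: Double -> UM.IOVector Double -> U.Vector Double -> IO ()
gradientStep rate params grads =
  forM_ [0 .. UM.length params - 1] $ \i -> do
    p <- UM.read params i
    UM.write params i (p - rate * (grads U.! i))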
> "Stability experimental" in the top right corner.
This is a bit of a strange convention in the Haskell world. Libraries tend to be marked "experimental" even when they are completely stable and the correct choice for production use. Note that Data.Text[1] is also marked "experimental", and it is perfectly stable and the correct choice for Unicode in Haskell.
> 3rd party package
Data.Vector is 3rd party in the sense that it is not part of the GHC base package, but so what? It is now considered the correct library for using arrays in Haskell.
I'm not. Given that you can't tell someone's emotional state via text, it doesn't make much sense to assume an emotional state for someone else simply because it will make you feel better.
>The page you linked to explicitly says "Stability experimental"
So does every library. It is the default state of a seldom used feature that still hasn't been removed.
>I also don't know why you are behaving as if I dislike Haskell
I am responding to what you say. You said using a mutable unboxed array is hard. That is not a simple misunderstanding; that is either a complete lack of having ever tried to learn haskell, or a deliberate lie. There are literally no other options. I teach people haskell. They do not use lists for anything other than control. They have absolutely no problem using arrays.
>I also gave you a concrete example of a reasonable and necessary task I found difficult
But you didn't say what made it difficult. So a reader is left to assume you are trolling since that task is trivial.
Actually, Haskell does let you write print statements for debugging.
If we have the following function:
foo :: Int -> Int
foo x = x `div` 0
and we want to add debugging, we can do:
import Debug.Trace
foo :: Int -> Int
foo x
  | trace (show x) False = undefined
  | otherwise            = x `div` 0
The above will print the value of x before throwing an error due to division by zero. You don't have to make foo return an IO Int or change any other aspect of your program.
> that is the altar upon which you sacrifice the ability
I see statements like this all the time from people who fundamentally misunderstand Haskell, and I used to have the same misunderstandings myself. You really don't sacrifice anything by using it.
> the ability to write print statements to do debugging
I can slap a `trace` statement wherever the fuck I want inside my Haskell code for debugging. Even inside a pure function, no IO monad required. If I want to add a logger to my code, a 'Writer' monad is almost completely transparent, or I can cheat and use unsafePerformIO.
> and lose the ability to reason about order of execution.
If I'm writing pure code, then order of execution is irrelevant. It simply does not matter. If I'm writing impure code, then I encode order of execution by writing imperative (looking) code using do-notation, and it looks and works just like it would in any imperative language.
> But does it really result in more performant code
Haskell has really surprised me with its performance. I've only really been using it for a short time, having been on the Java bandwagon for a long time.
One example I had recently involved loading some data from disk, doing some transforms, and spitting out a summary. For shits and giggles, we wrote a few different implementations to compare.
Haskell won, even beating the reference 'C' implementation that we thought would have been the benchmark with which to measure everything else, and the Java version we thought we'd be using in production.
Turns out that laziness, immutability, and referential transparency really helped this particular case.
- Laziness meant that a naively written algorithm was able to stream the data from disk and process it concurrently without blocking. Other implementations had separate buffer and process steps (Even if hidden behind BufferedInputStream) that blocked the CPU while loading the next batch of data
- Immutability meant that the Haskell version could extract sections of the buffer for processing just by returning a new ByteString pointer. Other versions needed to copy the entire section into a new buffer, wasting CPU cycles, memory bandwidth, and cache locality.
- Referential transparency meant that we could trivially run this over multiple cores without additional work.
Naturally, a hand-crafted C version would almost certainly be faster than this - but it would have required a lot more effort and a more complex algorithm to do the same thing. (Explicit multi-threading, a non-standard string library, and a lot of juggling to keep the CPU fed with just the right amount of buffer).
On a per-effort basis, Haskell (From my minimal experience) seems to be one of the more performant languages I've ever used. (That is to say, for a given amount of time and effort, Haskell seems to punch well above its weight. At least for the few things I've used it for so far).
I'm still of the impression that well written C (or Java) will thoroughly trounce Haskell overall, but GHC will really surprise you sometimes.
I haven't used OCaml much - but my understanding is that the GIL makes it quite difficult to write performant multi-threaded code, something that Haskell makes almost effortless.
No specific reason really. I didn't think about it at the time, that's just how I typed it.
Probably because C is a single letter, and thus potentially needs some differentiation from the surrounding sentence, whereas Haskell is an actual word. But no idea really.
What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.
> If I'm writing pure code, then order of execution is irrelevant. It simply does not matter. If I'm writing impure code, then I encode order of execution by writing imperative (looking) code using do-notation, and it looks and works just like it would in any imperative language.
Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider:
import Data.Time.Clock

main = do
  start <- getCurrentTime
  fact <- return $ product [1..50000]
  end <- getCurrentTime
  putStrLn $ "Computed product " ++ show fact ++
    " in " ++ show (diffUTCTime end start) ++ " seconds"
This program appears to time a computation of 50000 factorial, but in fact it will always output some absurdly short time. This is because the true order of execution diverges greatly from what the program specifies in the do-notation. This has nothing to do with purity; it's a consequence of laziness.
> Turns out that laziness, immutability, and referential transparency really helped this particular case
I don't buy it. In particular, laziness is almost always a performance loss, which is why a big part of optimizing Haskell programs is defeating laziness by inserting strictness annotations.
> Laziness meant that a naively written algorithm was able to stream the data from disk and process it concurrently without blocking
This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that.
> Immutability meant that the Haskell version could extract sections of the buffer for processing just by returning a new ByteString pointer. Other versions needed to copy the entire section into a new buffer
Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.
Like laziness, immutability is almost always a performance loss. This is why ghc attempts to extract mutable values from immutable expressions, e.g. transform a recursive algorithm into an iterative algorithm that modifies an accumulator. This is also why tail recursive functions are faster than non-tail-recursive functions!
> Referential transparency meant that we could trivially run this over multiple cores without additional work
It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.
Standard C knows nothing of threads, while Haskell has some nice tools to take advantage of multiple threads. So this is definitely a point for Haskell, compared to standard C. But introduce any modern threading support (like GCD, Intel's TBB, etc.), and then the comparison would have been more even.
When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.
"This is because the true order of execution diverges greatly from what the program specifies in the do-notation. This has nothing to do with purity; it's a consequence of laziness."
Of course, that's not what the do notation specifies, but I agree that's somewhat subtle. As you say, it's a consequence of laziness. Replacing "return" with "evaluate" fixes this particular example.
In general, if you care about when some particular thing is evaluated - and for non-IO you usually don't - an IO action that you're sequencing needs to depend upon it. That can either be because looking at the thing determines which IO action is used, or it can be added artificially by means of seq (or conceivably deepSeq, if you don't just need WHNF).
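Applied to the example above, that looks like this (evaluate comes from Control.Exception):

import Control.Exception (evaluate)
import Data.Time.Clock

main = do
  start <- getCurrentTime
  fact <- evaluate (product [1..50000])  -- forced before the next timestamp
  end <- getCurrentTime
  putStrLn $ "Computed product " ++ show fact ++
    " in " ++ show (diffUTCTime end start)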
>That Haskell provides it is an admission that strict R.T. is unworkable.
Perhaps it is, but that doesn't mean it's not immensely valuable as a default. And it's worth noting that in the case of Debug.Trace, the actual program is still referentially transparent, it's just the debugging tools that break the rules, as they often do.
>Haskell's laziness makes the order of execution highly counter-intuitive.
Yes, there are some use cases where do-notation doesn't capture all the side effects (i.e. time/memory) and so a completely naive imperative perspective breaks down. But these cases are rare, and it's not that hard to learn to deal with them.
First up - I'll preface my reply below with a big disclaimer that I'm a relative novice with Haskell, so these are purely my opinions at this point in my learning curve.
> What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.
I'd disagree that this is any real attack on the merits of referential transparency, since Debug.Trace is not part of application code. It violates referential transparency in the same way an external debugger would. It's an out of band debugging tool that doesn't make it into production.
> Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider
I wouldn't say it makes order of execution highly counter-intuitive, and your above example is pretty intuitive to me. But expanding your point, time and space complexity can be very difficult to reason about - so I'll concede that's really a broader version of your point.
> Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.
C uses null-terminated strings, so in order to extract a substring it must be copied. It also has mutable strings, so standard library functions would need to copy even if the string were properly bounded.
Java uses bounded strings, but still doesn't share characters. If you extract a substring, you're getting another copy in memory.
Haskell, using the default ByteString implementation, can do a 'substring' in O(1) time. This alone was probably a large part of the reason Haskell came out ahead - it wasn't computing faster, it was doing less.
Obviously in Java and C you could write logic around byte arrays directly, but this point was for a naive implementation, not a tuned version.
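For illustration, slicing a strict ByteString just adjusts an offset and length into the shared buffer (safe because the bytes can never change):

import qualified Data.ByteString as B

-- O(1), no copying
slice :: Int -> Int -> B.ByteString -> B.ByteString
slice off len = B.take len . B.drop off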
> This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that
It would seem counter-intuitive that the standard library would read one byte at a time. I would put money on the standard file operations buffering more data than needed - and if they didn't, the OS absolutely would.
> Like laziness, immutability is almost always a performance loss.
On immutability -
In a write-heavy algorithm, absolutely. Even Haskell provides mutable data structures for this very reason.
But in a read-heavy algorithm (such as my example above) immutability allows us to make assumptions about the data - such as the fact that it'll never change. This means that the standard platform library can, for example, implement substring in O(1) time complexity instead of having to make a defensive copy of the relevant data (lest something else modify it).
On Laziness -
I'm still relatively fresh to getting my head around laziness, so take this with a grain of salt. But my understanding, from what I've been told and from some personal experience:
In completely CPU bound code, laziness is likely going to be a slowdown. But laziness can also make it easier to write code in ways that would be difficult in strict languages, which can lead to faster algorithms with the same effort. In this particular example, it was much easier to write this code using streaming non-blocking IO than it would be in C.
> It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.
Except that GHC can do some clever optimizations with referential transparency that a C compiler (probably) wouldn't - such as running naively written code over multiple cores.
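(To be precise, GHC still wants an annotation, but purity makes it a one-liner - a sketch with par/pseq from the parallel package:)

import Control.Parallel (par, pseq)

-- Spark 'a' on another core while evaluating 'b'; since both are
-- pure, the result cannot depend on which finishes first.
parSum :: [Int] -> [Int] -> Int
parSum xs ys = a `par` (b `pseq` (a + b))
  where
    a = sum xs
    b = sum ys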
> When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.
I completely agree. If you need bare-metal performance, then carefully crafted C is likely to still be the king of the hill for a very long time. Haskell won't even come close.
But in day to day code, we tend to not micro-optimize everything. We tend to just write the most straight forward code and leave it at that. Haskell, from my experience so far, for the kinds of workloads I'm giving it (IO Bound crud apps, mostly) tends to provide surprisingly performant code under these conditions. I'm under no illusion that it would even come close to C if it came down to finely tuning something however.
A really great rebuttal of his points. I like Haskell, I really do - but I can never get any useful work done out of it. (Note: I am a hobbyist and not a professional programmer)
It's not a great rebuttal, it's just showing why people with imperative mindsets don't really understand Haskell still. The rebuttal rebuttal is good.
Do notation is not specifically a line-by-line imperative thing, and complaining that it isn't that doesn't make it bad. Obviously, the goal in Haskell isn't precisely to do imperative coding. It remains true that you can hack imperative code into Haskell in various ways effectively.
>sacrifice the ability to write print statements to do debugging
No.
>lose the ability to reason about order of execution
No.
>But does it really result in more performant code?
That is not the goal. The goal is being able to reason about the code, and write code that is correct. The fact that it performs very well is due to a high quality compiler, not purity.
> In every benchmark I've ever seen, more practical languages like Ocaml have come out on top.
Doesn't look that way from here: http://benchmarksgame.alioth.debian.org/u32/ocaml.php
How exactly is a language that is unable to handle parallelism "more practical" than one that handles it better than virtually any other language?
OCaml is more practical than ML and Haskell because it has objects, for loops, more edge cases in the language, a built-in mutable keyword, and extensible records.
No it is not. Ocaml's objects make it less practical, not more. That is why they are virtually completely unused. At best, for loops are irrelevant. I'd say they are closer to a negative than irrelevant though. What do you mean by "more edge cases"? That the language is less safe? How is that practical? Haskell has mutable references too, with the added benefit of them being type safe. And haskell has extensible records; they are just a library like anything else: http://hackage.haskell.org/package/vinyl
Monads are arguably a library in Haskell, too... though one the standard guarantees is present, exposed by the Prelude, and relied on by a lot of code.
Then what? You made the vague statement, make it not vague.
>And OCaml has monads, they are just a library like anything else.
And? I did not claim ocaml lacks monads. You claimed haskell lacks extensible records. You do understand that my post was a direct reply to what you said right? Not just some random things I felt like saying for no particular reason.
It's a substantially nicer language than Javascript, and there's a lot of problem domains that are far easier solved in it.
People wouldn't use it because they don't want to learn Javascript. They'd use it because they don't want to use Javascript, on account of knowing better.
> JavaScript is the only language that lets you target every platform
Ironically, the three languages you listed - Java, C#, and Objective-C - are all available on all major platforms. I could write code in any of those languages and have it run on:
- Windows
- MacOS
- Linux
- Android
- iOS
Furthermore, the apps would all be native. The closest Javascript can come is to use the uncanny-valley of HTML5, without native widgets or performance.
Frankly, given that the burden of learning a new language is tiny, I see no reason why we should try and cram Javascript into every nook and cranny. It's a fairly mediocre language at best.
tl;dr The three languages you listed are all capable of better cross platform support than Javascript.
Really? Seems like HTML/JS is the best choice for cross platform application development. These days, with Chrome, Firefox and even Safari, you can create really full featured applications. The new HTML5 APIs are incredibly full featured. Just the other day I created an app for my daughter to help her log in to her school computers - so I decided to create a login screen, and then I realized that with the HTML5 audio API I could also read aloud each key press, as well as an intro text, so even though she can't read yet she can use the app. It's 100% JS and HTML/CSS for UI.
At my job we spend thousands of hours building our native app for Android and iOS, and it's painful - when I can open the browser, nearly all the capabilities are in the browser, and the browser already cleanly abstracts the process of editing UI via HTML/CSS. I've yet to find another UI framework as powerful or quick to develop in as HTML/CSS - and the performance is looking really good in the browser. Native app performance is a myth unless maybe you're writing a video game?
> I've yet to find another UI framework as powerful or quick to develop in as HTML/CSS - and the performance is looking really good in the browser.
Really? Until a couple of months ago with the advent of flexbox, you couldn't reliably do columns that fit dynamically to the browser window (frameworks like Bootstrap simply define three different window sizes and switch between them, rather than allowing for fluid resizing) as well as you can in every other UI framework I've used, never mind vertical layout. (See all the hacks for "sticky footers" over the past decade.)
"Quick" - sure. HTML5 lets you spin up UIs fairly quickly.
"Powerful" - I'm not so sure. Generally I find native toolkits, while requiring a bit more effort up front, generally yield much nicer user experiences.
For simple apps, HTML5 works fairly well. But you quickly run into problems.
My company has been working on a cross mobile application in HTML5 using Cordova. It was quick to prototype, but as we scaled up we started hitting the limitations of the platform very quickly.
The main issue is that every single mobile device has a slightly different rendering engine. Even two different Android phones from the same manufacturer will behave slightly differently. If you're making a simple CRUD app, this probably won't affect you - but once you reach a moderate level of complexity you spend more time playing whackamole fixing bugs and testing on every possible combination of devices.
We're currently rewriting as pure native across all platforms, since it works out to be less effort in the long run.
This is on top of the obvious limitations of HTML5 - Performance is never quite there for anything other than trivial tech demos, and the lack of native widgets mean your users will have that "uncanny valley" experience where it doesn't quite feel right.
tl;dr HTML5 is ok for prototyping and simple apps, but doesn't scale well. Ditto for Javascript.
Actually, JavaScript can create native controls and calls on all of those platforms through an API layer, using a framework such as Titanium. JavaScript has not been limited to webviews for at least 5 years now.
Many Gnome (Linux) apps' original language is JavaScript (GJS). iOS 7 came with a JS API. It is such a powerful language for async and UI design that it adapts well to all these platforms.
> Am I the only one who agrees with the author that generics probably do more harm than good
It's really the same debate as 'dynamic' vs 'static' languages, since without generics your code relies on runtime type checking in anything remotely complex, and is thus effectively a dynamic language.
Personally, I'm of the opinion that people that don't like static typing only feel that way because they've only used the shitty implementations in Java or C#.
Regardless, this is a religious war that has raged for decades, and isn't likely to be settled anytime soon. The answer really comes down to "it depends". I fall firmly in the 'static typing is good' camp, mainly because I have hard evidence to back up my opinion that it results in significantly fewer defects.
(Specifically, very clear reports from issue tracking systems showing our defect rate in production dropping by 90% (!!!) when we switched from Groovy to Scala, with a notable increase in productivity).
Some of the developers complained, since they had to learn new tools. But being professionals, they learned them and were better off for it.
Now while I'm an extremist religious zealot about proper static typing being the one true way, I'm quite mindful that, for many developers, the tasks they are working on just aren't complex enough for it to make much of a difference in practice - they are able to test all edge cases and deploy stable software to production, just with a little more runtime testing than they'd otherwise need.
Some languages - such as Go, or pre-enlightenment Java - do not implement generics, and thus require runtime casting in many cases. In these languages, there's still a degree of compile time checking, just not as thorough as it should be. As with dynamic languages, they can work with no perceived issues for projects up to a certain size and provide a reasonable halfway point. Beyond this, you're going to hit a wall.
As to your argument that generics do "more harm"? I'd strongly disagree. If you're unfamiliar with the gotchas generics introduce (i.e. variance can be a mindfuck), then they can seem difficult and problematic. But like any other professional tool, once you've gotten over the learning curve you're more productive with it than without.
tl;dr If I wanted to bang a few pieces of wood together, I'd feel comfortable using a hammer. The learning curve is small, and I can connect those two pieces of wood in no time.
My Uncle is a carpenter. As a professional carpenter, he bangs pieces of wood together all day long, every day, for his entire career. As such, a nailgun is a more appropriate tool. While being more complex to use and having a steeper learning curve, he's a professional, and uses a professional tool to do a professional job. Occasionally he might want to bang some quick project together in his shed, and getting out the nailgun is overkill, so he uses a hammer for the odd thing here and there.
I'm a professional programmer. I use professional tools, even if they have a steeper learning curve and might be more complex. Occasionally I want to whip up a quick script, so will just hack it together in Python.
> our defect rate in production dropping by 90% (!!!) when we switched from Groovy to Scala, with a notable increase in productivity)
> Occasionally I want to whip up a quick script, so will just hack it together in Python
Languages like Python and Groovy were originally created to be scripting languages for quickies. Of course what starts off as a short script can easily evolve into a larger production system. Groovy's creator James Strachan based Groovy closely on Java syntax specifically to provide a seamless upgrade path from Groovy to Java when such scripts grow into something larger. He even put in runtime type tags which would become compile-time types without any syntactic changes when code was converted from Groovy to Java. Groovy was innovative beyond its peers Python and Ruby in that way, intended to be a dual dynamic language to statically-compiled Java, enabling easy conversion to the main language when required. Other languages like C# and Scala solved that issue with type inference and by adding a "dynamic" type into the main language instead.
Unfortunately, after Strachan was replaced, the management policy regarding Groovy's purpose changed. All work on a spec to encourage alternative implementations was dropped, and a user-contributed plugin enabling static compilation was duplicated into the main Groovy distribution for version 2. Groovy was then pitched as an alternative to Java, competing head on. They don't mention in their marketing, however, that a mere one person wrote Groovy's static code compared to the hundreds who contributed to Java's, or even to Scala's. Therefore adopting Groovy for static compilation is very risky - a possible cause for your huge defect rates in production.
There is a lot of interest in research on the benefits of static vs dynamic typing. But unfortunately there is not a lot of hard data. There were some experiments, e.g.
http://dl.acm.org/citation.cfm?id=2047861
which seem to support the claim that dynamic typing is better for rapid prototyping. So if you have data that correlate typing disciplines with bug rates, it would be hugely valuable to share it.
On the other hand, there's the 1994 Navy sponsored study which had as an (informal) conclusion that Haskell was better at rapid prototyping when compared to other languages of the time.
The experiment mostly compared Haskell with imperative languages such as C++ and Ada, but there was also at least one Lisp variant. There were several informal aspects to the study, not the least of which being that there were no clearly defined requirements for the system to be implemented (so it was up to each participant to define the scope), but the conclusion is very interesting nonetheless:
The Haskell version took less time to develop and also resulted in fewer lines of code than the alternatives, and it produced a runnable prototype that some of the reviewers had a hard time believing wasn't a mockup. Many of the alternatives didn't even end up with a working system. It should also be noted that the Haskell participants decided to expand the scope of the experiment; i.e. they didn't "win" because they implemented a heavily simplified solution, but in fact added extra requirements to their system and still finished earlier!
Even though Obj-C wasn't included in the study, there were similar enough C-like languages in it, so my bet is that Haskell would have won against it as a rapid prototyping language as well.
They used Java as the static language, which is relatively cumbersome as far as statically typed languages go, and is also known for its verbosity. "Rapid prototyping" and "Java" do not really mesh to begin with, or at least that's what the common wisdom tends to say.
It seems that they need another survey/study to check the OP's claim (namely, to do some research on productivity in languages with stronger static type systems than Java's).
I do agree with your points, but I think you've overgeneralised them to a degree where discussion is not only unnecessary but also childish.
Let's restore the context back to Swift in iOS programming to match its target market, shall we? Could you come up with one use case in which:
.. generics are really useful.
.. the problem hasn't been solved by a well recognised 3rd party lib/framework (by "well recognised" I mean github starred or forked more than 500 times).
.. it should be used in 10% of the top 100 apps on the App Store.
> Could you come up with one use case in which: .. generics are really useful.
Errm, arrays/dictionaries that return typed objects, so that you don't have to cast everything from id, either with an ugly explicit isKindOf test or by just hoping for the best?
There is nothing that strong typing fixes that can't be fixed by just coding it right - but when has everything been coded absolutely right, without bugs? And even if it is coded right to start with, when you make a change and forget one rare case where it is used during your refactor, you can end up with a crash in the field - or, with Swift, a compile-time error that you fix in a second.
Container types are the most obvious. Lists, Arrays, Dictionaries, Vectors, Stacks. You'd be hard pressed to find code that doesn't use a container type of some kind.
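In Haskell terms (the thread's running example language), the payoff is that the element type is checked at compile time, with no casts and no isKindOf-style runtime tests - a minimal sketch:

data Stack a = Stack [a]

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)

-- push "oops" (Stack [1, 2, 3]) is a compile-time type error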
Incidentally, in a dynamically typed language the problem is much smaller than in a statically typed language without generics.
Compare the difference between a Java list prior to generics and afterwards. Using it in Java was an orgy of object casts. This is not the case for dynamically typed languages.
In fact, the common approaches to containers that are valid in a statically typed language are largely wrong in a dynamic language.
"Shitty" may not be the right word, since C#'s type system works fairly well for its problem domain. The point is that the type systems found in C# and Java are conservative and in many cases overly burdensome for the type-safety benefits you get.
As a result, when compared against purely dynamic languages, the advantages of static typing in Java and C# are not so clear cut (Hence why so many developers just use dynamic languages).
If your only exposure to static typing is in Java or C#, then you really haven't seen a good type system at work.