> I think people tend to underestimate the amount of time they spend on problems...
Again, the implication is that people who use languages with looser type-systems than Haskell spend lots of time dealing with the problems that you mention. In my experience, that is not the case. You can claim that I'm underestimating the impact of such bugs if you'd like.
> When you implement a red-black-tree, do you not spend any time testing your invariants?
Testing is, as far as I can tell, the reason that these problems don't come up. In the process of solving all of those problems I listed above, you tend to write a bunch of tests that exercise the same code that is run in production.
> It's weird that you bring Twitter as an example, as they canned a dynamic language solution for a static language that is in many ways very similar to Haskell.
I assure you that this transition is far less complete than you might think, and even when complete, it will have more Java than Scala. Note that both of these languages allow shared mutable data, and both allow null pointers.
> They all fail to scale well, both performance-wise and maintenance-wise. Statically-typed systems scale far better along both of these axes.
Oh, I agree. Go is statically-typed. However, you're advocating going even further along the spectrum, and I'm saying that going further brings diminishing returns, and starts costing you in terms of available engineers, and your productivity in writing code. I think Go occupies a good point along this spectrum, where I can write robust code without arguing with a compiler.
The types guarantee the correctness of the invariants of the red-black tree, and they easily replace hundreds of lines of test code that give no guarantee.
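To make that concrete, here is a sketch of one possible encoding (insertion and balancing omitted; this only shows the shape) of how GHC can carry both invariants - red nodes have black children, and every path has the same black height - in the types:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Colours and black heights, promoted to the type level.
data Color = Red | Black
data Nat = Zero | Succ Nat

-- 'c' is the root's colour, 'n' its black height.
-- A red node may only take black children; both children of any node
-- must have the same black height 'n'.
data RBT (c :: Color) (n :: Nat) a where
  Leaf  :: RBT 'Black 'Zero a
  RNode :: RBT 'Black n a -> a -> RBT 'Black n a -> RBT 'Red n a
  BNode :: RBT cl n a -> a -> RBT cr n a -> RBT 'Black ('Succ n) a

-- Any attempt to build a tree violating the invariants is a type error.
example :: RBT 'Black ('Succ 'Zero) Int
example = BNode (RNode Leaf 1 Leaf) 2 Leaf

-- Ordinary functions still work over the indexed type.
size :: RBT c n a -> Int
size Leaf          = 0
size (RNode l _ r) = 1 + size l + size r
size (BNode l _ r) = 1 + size l + size r
```

Try swapping a child of `RNode` for another red node: the program simply stops compiling, which is the whole point.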
We might still need to write tests, but a lot fewer of them. Also, those we write will give us far more "bang for buck" because we can use QuickCheck property testing.
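For instance, here is a property test in the QuickCheck style - hand-rolled over enumerated small inputs so that the snippet needs only the base library (the real QuickCheck generates random cases and shrinks counterexamples for you):

```haskell
import Data.List (insert, sort)

-- The property under test: inserting into a sorted list yields a
-- sorted list. One line of property replaces many hand-picked cases.
prop_insertSorted :: Int -> [Int] -> Bool
prop_insertSorted x xs = isSorted (insert x (sort xs))

isSorted :: Ord a => [a] -> Bool
isSorted ys = and (zipWith (<=) ys (drop 1 ys))

-- Enumerate every list of length <= 3 over a small set of values.
smallLists :: [[Int]]
smallLists = concatMap go [0 .. 3 :: Int]
  where
    vals = [-1, 0, 2]
    go 0 = [[]]
    go n = [v : rest | v <- vals, rest <- go (n - 1)]

-- Check the property against all enumerated inputs.
checkAll :: Bool
checkAll = and [prop_insertSorted x xs | x <- [-1, 0, 2], xs <- smallLists]
```

With QuickCheck proper this collapses to `quickCheck prop_insertSorted`; the point is that one property exercises far more of the input space than any hand-written example list.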
> Note that both of these languages allow shared mutable data, both allow null pointers.
Scala shuns null - and only has it for Java interop. All Scala developers I've discussed this with program as if null did not exist, and never use it to signify lack of a value.
> I think Go occupies a good point along this spectrum, where I can write robust code without arguing with a compiler.
When "arguing with a compiler", you're really being faced with bugs now rather than later, when the code is no longer in your head - or worse, in production. If the type checker rejects your program, it is almost certainly broken, and it is better to "argue with a compiler" than to just compile and get a runtime error later.
Availability of engineers is a good point, though as a Haskeller, I know both companies seeking Haskell employees and Haskellers seeking employment (preferably in Haskell).
Consider that the flip side of engineer availability is the "Python paradox".
> Testing is a cost. It is more code to write, more code to maintain.
I'm not saying you write tests to protect against NPEs. I'm saying that you write tests to ensure correctness of your code, and as a side-effect NPEs are flushed out of your code. This is my theory explaining why NPEs are not a timesink for me.
> They guarantee the correctness of the invariants of the Red Black Tree, and they easily replace hundreds of lines of test code which give no guarantee.
Code with mathematical invariants seems like such a niche area, though. The average test that I write is "ensure your service checks for values in a cache before calling service X; ensure that it can handle cache unavailability, service X unavailability, cache timeout, and service X timeout". Maybe you could figure out a way to encode that in a type system, but I'd wager that it wouldn't be as readable as the equivalent written as a test.
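Concretely, the behaviour under test looks something like this (hypothetical names; sketched with the clients passed in as plain functions so each failure mode can be faked in a test):

```haskell
-- Failure modes named in the test list above.
data Failure = Unavailable | Timeout deriving (Eq, Show)

-- A cache lookup may fail, or succeed with a miss (Nothing).
type Cache v   = String -> Either Failure (Maybe v)
-- The backing service either fails or produces a value.
type Service v = String -> Either Failure v

-- Check the cache first; on a miss OR any cache failure, fall
-- through to service X. Service failures propagate to the caller.
fetch :: Cache v -> Service v -> String -> Either Failure v
fetch cache service key = case cache key of
  Right (Just v) -> Right v     -- cache hit
  _              -> service key -- miss, unavailable, or timed out

-- Fakes standing in for real clients in a test.
goodCache, downCache :: Cache Int
goodCache "k" = Right (Just 1)
goodCache _   = Right Nothing
downCache _   = Left Unavailable
```

Each scenario in the list becomes one call to `fetch` with a different pair of fakes, whether the assertions live in a test file or elsewhere.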
> Scala shuns null - and only has it for Java interop. All Scala developers I've discussed this with program as if null did not exist, and never use it to signify lack of a value.
That's exactly right. Languages with nulls and mutable shared state are perfectly reasonable to use if programmers do the right thing by convention.
> When "arguing with a compiler"...
I misspoke. I was thinking of the learning process, not the process of writing code.
Although my current focus is backend systems, I have worked across the spectrum in the past. I've worked (not including languages I dabble with at home) in JS (for browser UIs), Java (for Android UIs and servers), Obj-C (for iOS UIs), Python (for servers and scripts), Scala (for servers), Ruby (for servers), and C++ (for servers). These languages span a wide range of the dynamic-to-static spectrum. I don't find myself writing much safer code when I go from JS to C++. The same for a shift from Ruby to Scala. The same for Python to Java. These shifts are relatively large, as they go from dynamically typed languages to statically typed ones. You claim that a significantly smaller shift, removing nullability from a language, would be a big deal for reliability. That doesn't seem likely to me.
> I'm not saying you write tests to protect against NPEs. I'm saying that you write tests to ensure correctness of your code, and as a side-effect NPEs are flushed out of your code. This is my theory explaining why NPEs are not a timesink for me.
You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about lack of NPEs you want to have coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence. With Haskell, I can be reasonably confident about my code with very few tests.
> but I'd wager that it wouldn't be as readable as the equivalent written as a test.
Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.
> That's exactly right. Languages with nulls, and mutable shared state are perfectly reasonable to use if programmers do the right thing by convention
I think Scala users will generally disagree with you. They'd prefer it if null was ruled out in the language itself. That said, Go convention is to use nulls, not shun them.
> I don't find myself writing much safer code when I go from JS to C++. ... Ruby to Scala ...
Your code is much safer simply by construction, so I am not sure what you mean here.
> You claim that a significantly smaller shift, removing nullability from a language would be a big deal for reliability. That doesn't seem likely to me
Hitting type errors at runtime is pretty common IME: null dereference crashes in Java, and "NoneType has no attribute `...`" in Python.
I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.
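For example, with a sum type like Maybe, absence becomes a distinct case that pattern matching makes you confront - there is no way to "dereference" a Nothing (toy sketch):

```haskell
import Data.Char (toUpper)

-- Absence is in the type: the caller cannot pretend a user exists.
lookupUser :: Int -> Maybe String
lookupUser 1 = Just "alice"
lookupUser _ = Nothing

greet :: Int -> String
greet uid = case lookupUser uid of
  Just name -> "hello, " ++ map toUpper name
  Nothing   -> "no such user"
  -- Dropping the Nothing branch is flagged by GHC's exhaustiveness
  -- checker (-Wincomplete-patterns), instead of crashing at runtime.
```

Go's `*T` conflates "a T" with "maybe a T", and nothing in the language forces the nil check at the use site.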
> You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about lack of NPEs you want to have coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence.
This is not true. In the example I spoke about above, if I take a CacheClient and a ServiceXClient when my type is being constructed, assign them to local fields, and then never modify those fields again, then I don't need to exercise every dereference of those fields, just one. And again, I don't test that my code handles NPEs; I test that my code does what it is supposed to, and in the process of doing that, NPEs get flushed out.
> Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.
I think you are viewing this through red-black-tree colored glasses. Specifically, you believe that a lot of code has mathematical constraints the way that example did. To me, this is an extremely remote possibility. I think if you tried to encode even the smallest real-world example of this, say a service implementing a URL shortener, you would run into a wall.
> Your code is much safer simply by construction, so I am not sure what you mean here.
I should have said more reliable.
> I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.
Truly, it baffles me that people still harp on the reliability aspect. It is quite likely that every piece of software you use day-to-day is written in a language with nullability, without pattern matching, and without sum types. Most of that software probably doesn't even have memory safety (gasp!). Probably every website you visit is in the same sorry state. I'm sorry, but your arguments would be far more convincing if the world written in these languages were a buggy, constantly crashing hell. It's not.
I guess to progress from here we'd need to laboriously compare actual example pieces of code. For example, a URL shortener is going to be easier to write safely in Haskell, where the type system can guarantee that I don't serve 404 errors on routes I advertise, and can protect against XSS attacks.
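Sketching what "routes the type system knows about" means (a toy, not a full implementation): if every advertised link is rendered from the same sum type the dispatcher matches on, then a route you can render is by construction a route you can serve:

```haskell
-- One sum type is the single source of truth for the URL space.
data Route = Home | Redirect String | Stats String

-- Rendering: the only way to produce an advertised link.
render :: Route -> String
render Home         = "/"
render (Redirect s) = "/r/" ++ s
render (Stats s)    = "/stats/" ++ s

-- Dispatch: handlers are total over the same type. Adding a Route
-- constructor without a matching case here is caught by GHC's
-- exhaustiveness checker, not discovered as a 404 in production.
handle :: Route -> String
handle Home         = "landing page"
handle (Redirect s) = "302 -> lookup " ++ s
handle (Stats s)    = "hit count for " ++ s
```

Real routing libraries (e.g. Yesod's type-safe routes) automate exactly this render/dispatch pairing; the sketch just shows why the guarantee falls out of the sum type.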
Also, in my experience, computer software is buggy, unreliable, crashing, and generally terrible. I think people who view software differently have simply grown so accustomed to the terribleness that they can't see it anymore.
Also, reliability is interchangeable with development speed. That is, you can trade one for the other, so if you start from a higher point, you can trade more reliability for speed and still end up reliable. In an unreliable language, reliability is typically achieved by spending more time maintaining test code, doing QA, etc. In a reliable language, more resources can be spent on quicker development and fewer on testing and QA.
When you see a reliable project implemented using unreliable technology, you know it's going to scale poorly and require a lot of testing.