The point is that NPEs are a solved problem, just not in Java and Clojure. I am better able to stay focused on the real problems, as I never have to root out and fix bugs like this.
I can do all kinds of things to prevent bugs showing up in production. I once worked on a mission-critical piece of software in which the entire team was brought in to do a page-by-page code review for all newly written code.
Of course, for most industry software the cost outweighs the benefit in this case: such software can afford a few bugs, and (this kind of) code review is extremely expensive.
Now the question is: are the static verification methods you're using (null-checking, type-checking, range-checking, what-have-you) worth the effort it takes to apply them over time?
And if you say yes, do you believe they are so worth it that they should be applied across your codebase always, without discretion? Most static programming languages force you (or make it hard for you not) to do this verification across the board.
> And if you say yes, do you believe they are so worth it that they should be applied across your codebase always, without discretion? Most static programming languages force you (or make it hard for you not) to do this verification across the board.
Yes, actually, I don't find these things to be a particularly high burden, and the benefit, in my experience, easily outweighs it.
When you're writing code, you have to think about what kinds of things will be passed to a given function, whether the compiler checks it or not. So I find having the compiler check mundane things like this lowers my cognitive load, because I don't have to worry so much about what to do if I get a null (fail, handle it in some way, etc.).
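To make that concrete, here's a minimal Haskell sketch (the `greeting` function and the names in it are invented for illustration): once a lookup returns `Maybe`, the compiler forces the caller to decide what "absent" means, so the decision is written down once rather than carried around in your head.

```haskell
import qualified Data.Map as Map

-- Map.lookup returns a Maybe, so the "what if the key is missing?"
-- question must be answered at the call site, and the compiler
-- checks that both cases are covered.
greeting :: Map.Map String String -> String -> String
greeting names key =
  case Map.lookup key names of
    Just name -> "Hello, " ++ name
    Nothing   -> "Hello, stranger"  -- absence handled explicitly

main :: IO ()
main = do
  let names = Map.fromList [("a", "Ada"), ("b", "Brian")]
  putStrLn (greeting names "a")  -- Hello, Ada
  putStrLn (greeting names "z")  -- Hello, stranger
```

The point is that there is no code path where a missing key silently becomes a null that blows up three calls later.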
I will agree, though, that the style of programming with maps so prevalent in Clojure makes a lot of sense in some programs. However, even then, there is a typing discipline that fits (row types).
> Now the question is: are the static verification methods you're using (null-checking, type-checking, range-checking, what-have-you) worth the effort it takes to apply them over time?
Yes!
> And if you say yes, do you believe they are so worth it that they should be applied across your codebase always, without discretion?
Oh god yes! Simple Hindley-Milner typing is so cheap to use it's almost free!
> Most static programming languages force you (or make it hard for you not) to do this verification across the board.
Well, not really. In Haskell, for better or for worse, you can still use `fromJust` even if it is considered rather naughty.
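A short sketch of what makes `fromJust` "naughty" (the helper names here are made up for illustration): it punches through the `Maybe`, trading a compile-time obligation for a possible runtime crash.

```haskell
import Data.Maybe (fromJust, fromMaybe)

-- The total version: absence is explicit in the type.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- fromJust works fine on a Just, but on Nothing it crashes at
-- runtime ("Maybe.fromJust: Nothing") -- exactly the NPE-style
-- failure the Maybe type was meant to rule out.
unsafeHead :: [a] -> a
unsafeHead = fromJust . safeHead

main :: IO ()
main = do
  print (unsafeHead [1, 2, 3 :: Int])           -- 1
  print (fromMaybe 0 (safeHead ([] :: [Int])))  -- 0
  -- unsafeHead ([] :: [Int]) would crash at runtime.
```

So the escape hatch exists, but you have to reach for it by name, which makes it easy to grep for in review.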
No, I haven't. And, depending on how long life turns out to be, maybe I'll get to.
But.
I have built plenty of large software systems. And the source of cost in those systems has always been -- by at least a factor of 100 -- coupling and imprecise semantics.
These two problems are a function of the experience and training (and value system) of the programmer building them, and programming languages can do very little to save a system from these plights. (It might be that a static programming language can help here inasmuch as it slows down the programmer from producing too much code, but I realize that's arguable and also that I'm making a wicked joke.)
Still.
All computers ask for is semantic precision, and you don't need static type verification to get precision. So clearly static type verification is unnecessary for producing programs that work. And clearly statically-typed-everywhere PLs are asking the programmer to do extra work. That's prima facie true. So the burden is really on the MLer/Haskeller to prove that that extra work improves overall delivery throughput for the programming team. Maybe it does, maybe it doesn't. But I'm still waiting for a clearly thought-out justification. Haven't heard it yet.
> And clearly statically-typed-everywhere PLs are asking the programmer to do extra work.
I'm not convinced this is entirely true. The kind of information you encode in types is the kind of information that's useful to anyone reading the code. And if that information isn't written down somewhere, then the person reading the code (maybe you, some time later) has to reconstruct it themselves. And if it's useful to write that kind of information down, why not have the computer check that it's consistent?
> All computers ask is for semantic precision and you don't need a static type verification to get precision. So clearly static type verification is unnecessary for producing programs that work
That's a bit contrived. No matter which language you write in, a type check happens at some step: for dynamic types it's at runtime, and for static types it's at compile time.
> And clearly statically-typed-everywhere PLs are asking the programmer to do extra work. That's prima facie true. So the burden is really on the MLer/Haskeller to prove that that extra work improves overall delivery throughput for the programming team. Maybe it does, maybe it doesn't. But I'm still waiting for a clearly thought-out justification. Haven't heard it yet.
Here are just a few of the real-world accounts of using Haskell in production that you can check out:
> I have built plenty of large software systems. And the source of cost in those systems has always been -- by at least a factor of 100 -- coupling and imprecise semantics.
This is a wonderfully, astonishingly, gobsmackingly interesting comment for me to read! In my experience of large systems (Haskell-style) types are exactly the thing you need to tame coupling and imprecise semantics.
To take one arbitrary but notable example, in a previous job we found several bugs in a large industry XML schema^ by converting the schema to Haskell types and noticing, via type errors, that some things didn't match up.
^I forget if the bugs were in the schema itself or in the Java implementation that another team was using. I think a little bit of the former and a lot of the latter.
> I have built plenty of large software systems. And the source of cost in those systems has always been -- by at least a factor of 100 -- coupling and imprecise semantics.
One thing I find interesting about this is that a good type system, combined with something like OCaml's module system, lets you force decoupling. In OCaml, you can create an abstract type, which cannot be used in any way that isn't specified through the interface. You don't have to wrap the runtime value in anything; it's purely a compile-time abstraction.
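The same idea can be sketched in Haskell, which also supports it via module export lists; the `Meters` type and its helpers below are invented for illustration. A `newtype` wrapper is erased at compile time, so, as with OCaml's abstract types, the abstraction costs nothing at runtime.

```haskell
-- In a real program this would live in its own module with the header
--   module Meters (Meters, mkMeters, toDouble) where
-- exporting the type but NOT its constructor, so no client code can
-- inspect or forge the representation.
newtype Meters = Meters Double  -- zero-cost wrapper, erased at compile time

-- The only way in: invalid values are rejected at the boundary.
mkMeters :: Double -> Maybe Meters
mkMeters d
  | d >= 0    = Just (Meters d)
  | otherwise = Nothing  -- negative lengths never enter the system

-- The only way out.
toDouble :: Meters -> Double
toDouble (Meters d) = d

main :: IO ()
main =
  case mkMeters 12.5 of
    Just m  -> print (toDouble m)  -- 12.5
    Nothing -> putStrLn "invalid length"
```

Because clients can only go through `mkMeters` and `toDouble`, the representation can change later without touching any code outside the module, which is precisely the decoupling being described.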
In my experience NPEs are the easiest bugs to root out and fix.