I'm not super familiar with OCaml but I have written a fair amount of code in F#, which is supposedly based on / very similar to OCaml. In the end I had to convert it all to C# because none of the other developers at my company could grok it. I agree with you that the type inference is really neat, but writing F# (and probably OCaml too) consistently in an imperative style is a bit of a pain in the back; it feels like the language is simply not meant to be used like that. And, as I said, functional programming is difficult and most programmers have never even heard of it. (I wish I could use FP at work but jobs like that are really rare here in London.)
When writing code, sure. When reading and/or maintaining code, not so much, especially when the inference stretches across function or method boundaries. Languages like C++ and C# strike the right happy medium in my opinion: `auto`/`var` keywords for type-inferred local variables, but explicit parameters and return types for all methods.
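If I remember my OCaml correctly, you can opt into that same medium there too, since annotations are optional: spell out the function boundary and let the compiler infer the rest. A minimal sketch of what I mean (`mean` is just an illustrative function):

    (* Explicit parameter and return types at the function boundary,
       much like a C# method signature. *)
    let mean (xs : float list) : float =
      (* The locals below are inferred, like `var`/`auto`. *)
      let total = List.fold_left ( +. ) 0.0 xs in
      let count = float_of_int (List.length xs) in
      total /. count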
I think of type systems much more as a static-analysis mechanism the compiler can use to identify programming mistakes than as a concept that helps with reading or maintaining code. In my opinion, in well-written code the types of various symbols should be obvious from context, names, etc. without having to look at type names.
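To give a tiny example of what I mean by static analysis: even with zero annotations, the compiler rejects the call below purely from how the symbols are used (a made-up OCaml snippet, obviously):

    (* No type annotations anywhere, yet the mistake is caught:
       (^) concatenates strings, so `name` must be a string. *)
    let greet name = "hello " ^ name
    let _ = greet 42
    (* Error: This expression has type int but an expression was
       expected of type string *)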
In theory, perhaps. But there is a lot more information in syntax than in names in my experience, such as the number of arguments to a function.
For example, suppose `f` is a function of type `int -> int -> int -> int`. In Python, the semantic difference is obvious between:
    import functools

    a = f(x, y, z)
    b = functools.partial(f, x, y)
Whereas in OCaml this difference is glossed over:
    let a = f x y z;;
    let b = f x y;;
Even though the names `f` and `x`, `y`, `z` are relatively evocative of their types, the two expressions are very different without any particular indication as to why.
That's why I think there is a happy medium, where syntax is helpful enough in determining types that there is no need to give explicit types to `a` or `b` in these examples, but where enough information is explicitly available to reject `f(x, y)` as an invalid function invocation rather than accept it as a curried function of type `int -> int`.
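Concretely, sticking with a made-up `f` of that type: OCaml silently accepts the under-applied call as a function value, but a single explicit annotation is enough to turn it into a compile error at the call site, which is the kind of explicitness I'm arguing for:

    let f (x : int) (y : int) (z : int) : int = x + y + z

    let a = f 1 2 3   (* a : int, full application *)
    let b = f 1 2     (* b : int -> int, silently accepted as a curried function *)

    (* Add an annotation and the accidental partial application is rejected:
         let b : int = f 1 2
       Error: This expression has type int -> int
              but an expression was expected of type int *)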
Different, not difficult. On the whole functional programming is no more difficult than imperative/OOP. It's just a big switch for programmers used to imperative programming (i.e., most programmers).
At some point you have to also consider why most programmers are used to imperative programming.
It could be you're rationalizing from a first principle ("it must be equally easy, so if it has not caught on it's because programmers are not used to it").
> why most programmers are used to imperative programming.
Probably because that's the paradigm of the most popular languages, the first paradigm everybody learns in college and university, and likely the first language a person will encounter? Because most programmers are, despite what they would say about themselves in public, actually stubborn language bigots? I would think about that, because it puts functional programming (or logic programming (or proof-based programming (or stack-based programming))) at a disadvantage, and a fair conclusion cannot be drawn from uneven exposure.
Consider the actual research done in this area: classes have demonstrated that, whatever the language, there is always the same distribution. Those that excel (a small group), those that succeed through hard work and do well (most of the group), and those that just don't get it and fail (a small group). This has been shown to be true for C, Visual Basic, Pascal, ML, LOGO, Scheme (SICP!) and even Coq, which is a proof assistant. Probably Prolog too, but I haven't looked at that. You can easily google this; just search for things like "lecture/semester findings" and "student reactions/performance" and insert your language of interest.
>Probably because that's the paradigm of the most popular languages, the first paradigm everybody learns in college and university, and likely the first language a person will encounter?
That is just a rephrasing of "they are used to imperative programming". The question is why.
>Consider the actual research done in this area: classes have demonstrated that, whatever the language, there is always the same distribution. Those that excel (a small group), those that succeed through hard work and do well (most of the group), and those that just don't get it and fail (a small group). This has been shown to be true for C, Visual Basic, Pascal, ML, LOGO, Scheme (SICP!) and even Coq, which is a proof assistant. Probably Prolog too, but I haven't looked at that. You can easily google this; just search for things like "lecture/semester findings" and "student reactions/performance" and insert your language of interest.
How does that work in real life, though, when pragmatic issues come into play that are absent from "implement this algorithm in whatever language"?
Oh, surely that's because it has too much static typing to be practical. It's not like there's much thought put into the argument, or any desire to pursue it to its logical conclusion, in these things.
I feel kinda strange when people tell me Python is bad because it doesn't have static typing, but when I tell them about OCaml or whatever else I'm fine with, they try to find another excuse for why Java is good.