The point is that you use a single function for any type. So in Python you'd have to do int() or float() or customType()... In Haskell, all of these would be just `read`. The type system can figure out which instance to use for you, without having to specify it. Moreover, it's also trivial to write a function that works on any readable type, something hard (although not entirely impossible) to do in Python.
This makes it much easier to use: whenever you want to get any type from a string, you just read it. This is just like being able to print values of any type, except for parsing.
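For instance (a small sketch; readAll is just a hypothetical helper to show a function generic over any readable type):

read "42" :: Int
read "3.14" :: Double
read "(1, True)" :: (Int, Bool)

readAll :: Read a => [String] -> [a]
readAll = map read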
This can also be used with constants rather than functions. So maxBound is the maximum value for any bounded type. In Python, the closest you can get to that is something like float.maxBound. (Except, apparently, it's actually sys.float_info.max.)
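For example, maxBound comes from the standard Bounded class, and the same name gives you a different bound depending on which type the context asks for:

import Data.Word (Word8)

biggestInt :: Int
biggestInt = maxBound

biggestByte :: Word8
biggestByte = maxBound   -- 255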
As I mentioned, this also lets you define new numeric types that can still use the same literals. For my most recent project, I needed 18-bit words. I could do this and still write expressions like `x + 1` using the Word18 type. Moreover, it would be very easy to make my code generic over the exact type of number used--this would make it possible to use numbers of different sizes or even something more exotic like random variables. (It happens to be tricky because some of the semantics I was working with rely on having exactly 18 bits, but that's an issue with the domain and not with Haskell.)
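A rough sketch of what such a type could look like (not the actual code from that project, just an illustration; the wrap-on-overflow behaviour here is an assumption):

newtype Word18 = Word18 Integer deriving (Eq, Ord, Show)

-- keep every result within 18 bits
wrap :: Integer -> Word18
wrap n = Word18 (n `mod` 2^18)

instance Num Word18 where
  Word18 a + Word18 b = wrap (a + b)
  Word18 a * Word18 b = wrap (a * b)
  Word18 a - Word18 b = wrap (a - b)
  abs    = id
  signum (Word18 0) = Word18 0
  signum _          = Word18 1
  fromInteger = wrap           -- this is what makes the literal in `x + 1` work

-- 2^18 - 1, plus one, wraps around to 0
example :: Word18
example = Word18 262143 + 1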
In another language, I would either have to use the normal int type and make sure to always keep track of the overflow myself, or I would have to wrap every literal in a function that turned it into an 18-bit word.
So the special quality is being able to dispatch on the return type of an expression rather than on the types of the arguments. I think this is very special indeed and extremely useful. I hope this clarifies everything.
That does make more sense, and I appreciate the more thorough explanation. It seems a bit ironic though, that Python makes you more specific and certain about the output type than Haskell!
How so? Inferring which implementation of "read" to use based on the types is in exactly the same spirit as inferring which implementation of "length" to use. Haskell can maintain consistency in both, but dynamically typed languages must resort to explicitly choosing the implementation in the former.
Yes, it goes nicely with that spirit. But it doesn't go nicely with "have a well-specified return type for all functions that you know in advance", as "read" can return anything. It's great for languages to pick the right implementation of "length" on the fly, but the point is, they all return integers. "read"? Who knows what type you'll get back in Haskell, the "you must specify [or at least plan out the] output type" language? There, it seems to break the trend.
By that broad a definition of "a well-defined polymorphic type", every function in every language has one, at least in languages that can say "everything's an object/function/data/etc.!"
show :: Show a => a -> String
read :: Read a => String -> a
And there's also a law that connects their behavior:
read . show = id
(The other composition, show . read, holds as well, but only up to formatting details such as whitespace.)
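Concretely, at one specific type the first law says the round trip is the identity for any well-behaved instance:

roundTrip :: Int -> Bool
roundTrip x = read (show x) == x   -- True for every Int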
There's nothing special about the position of the polymorphic value, it can be the argument or result. In either case, it is fully statically typed, and you never need to "downcast" it to actually use it. There is complete type safety.
Note that this is based on correcting a mistake OO languages make: it separates passing the function-table parameter from passing the values. This allows passing a function table for any position in the type, not just the type of the first argument.
Having something like:
Object read(String x) { .. }
is entirely different, for two reasons:
* The implementation isn't chosen by the type
* You will need to eventually convert the "Object" type to the specific type you need.
In Haskell, when you write:
read :: Read a => String -> a
It is actually a shorthand form for:
read :: forall a. Read a => String -> a
The "forall" is called "universal quantification" and here it is like an implicit parameter "a" (think, template <a> parameter). Caller gets to pass in/choose any "a" they want, and as long as there is a valid Read instance for that type, it will work.
However, you could also write (in pseudo-syntax):
read :: exists a. Read a => String -> a
This is called an "existential quantification", and it is not like an extra parameter "a", but like a tuple: (a, Read a => String -> a). It means: "There exists a type 'a' such that this will be the result type of read". i.e: The caller does not get to choose which type "a" it is, but instead the caller gets a value of an arbitrary, unknown type that happens to have a Read instance.
The only way to make any use of a value whose type is existentially quantified like that is to unsafely "cast" it to the type you want, and hope this is the correct type actually returned by "read" here.
This is of course nonsense in Haskell, and nobody would ever do that. However, it is very typical OO/Java code.
Whenever a Java function returns an Object, it is basically returning an existentially quantified value, and the caller needs to correctly "cast it down" to the correct type.
Instead of repeating the same thing over again with more words, perhaps you can explain why it's "in the spirit" of Haskell not to know the return type (except with the vagueness of "Object") of a function in this one case.
There's a difference between a type like Object and a polymorphic type. This is easy to see with generics in a Java-like pseudocode:
public foo(Object o) { ... }
is very different from
public foo<A>(A o) { ... }
In Haskell, you always use the second style of function--there is no sub-typing, so there is no real equivalent to the Object type in Java.
We can imagine something similar for read. If we didn't know anything about the type, it would look something like this:
public Object read(String s) { ... }
Instead, it's actually something like this:
public A read<A>(String s) { ... }
So whenever you use read, you would be specifying the type it returns. I imagine it would look something like this:
read<Integer>("10") + read<Integer>("11")
This is exactly how read works in Haskell. The important difference, however, is that the type system can infer what type read is supposed to be. So the above code snippet would look like this:
read "10" + read "11"
If you made the types explicit, it would look like this:
(read "10" :: Integer) + (read "11" :: Integer)
So you always know what type read has when you use it. But what is the type of read itself? The Java-ish version looked something like A read<A>(String str). The important part is the generic A: it's a polymorphic type variable. In Haskell, the type is similarly polymorphic: String -> a.
Of course, the type isn't quite this general: you can only read things that you have a parser for. In the Java-like language, it would probably look roughly like:
public A read<A extends Read>(String str) { ... }
In Haskell, we do not have any concept of "extending" a type: there is no sub-typing of any sort. Instead, we have typeclasses which serve the same role, so the type ultimately looks like this: read :: Read a => String -> a.
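To make that parallel concrete, here's roughly what a typeclass filling that role looks like (a simplified stand-in; the real Read class in the standard library is defined in terms of readsPrec rather than a plain String -> a function):

-- a simplified stand-in for the real Read class
class MyRead a where
  myRead :: String -> a

instance MyRead Bool where
  myRead "True"  = True
  myRead "False" = False
  myRead _       = error "no parse"

-- generic over anything with an instance, much like <A extends Read> above
parseTwice :: MyRead a => String -> String -> (a, a)
parseTwice s t = (myRead s, myRead t)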
Hopefully this clarifies how you know what type read has. Really, it's no different from any other class like Show. There is a very clear parallel between show and read:
read :: Read a => String -> a
show :: Show a => a -> String
Being able to take advantage of this sort of symmetry in the language is extremely useful. My favorite example is with numeric literals, which are polymorphic:
1 :: Num n => n
In my previous Java pseudocode, this would look something like:
1<A>
and would be used like:
1<Integer> + 1<Integer>
2<Double> * 3<Double>
Of course, this is hideous, which is why type inference is so important.
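Under the hood, a literal like 1 is shorthand for fromInteger 1, and fromInteger is a method of the Num class, so each numeric type supplies its own conversion:

fromInteger :: Num a => Integer -> a

-- so these two expressions mean the same thing:
1 :: Double
fromInteger 1 :: Double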