Class-based programming works great for GUI toolkits. In most other contexts, the suitability is variable. OO is a horrible match for compilers, for example.
The author is barking up the wrong tree when he tries to build an Option type in C++, though. The normal OO way to handle the same class of functionality as ADT sum types is to use separate subclasses. What he's not acknowledging is that OO and functional+ADTs each handle only half of the expression problem[1], in different ways. There's no absolute superiority for either model. It depends on the domain.
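A minimal Scala sketch of that subclass encoding, with hypothetical Maybe/Just/Empty names (not the article's code): the operation lives on each variant, so adding a variant costs nothing elsewhere, while adding an operation means touching every subclass; the ADT-plus-pattern-match encoding has exactly the opposite cost profile.

    // Hypothetical sketch of a sum type encoded the OO way: the operation
    // (getOrElse) lives on the variants themselves.
    abstract class Maybe[+A] {
      def getOrElse[B >: A](default: => B): B
    }

    final class Just[+A](val value: A) extends Maybe[A] {
      def getOrElse[B >: A](default: => B): B = value
    }

    object Empty extends Maybe[Nothing] {
      def getOrElse[B >: Nothing](default: => B): B = default
    }

    // new Just(7).getOrElse(42) == 7; Empty.getOrElse(42) == 42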
Less mutability is good almost everywhere though. The lack of persistent data types is a blight on almost all non-functional container libraries.
PL designer here with extensive experience writing compilers in C#, Java, and Scala. Pattern matching is nice but OO will get you most of the way and filling the matching gap is simple.
The expression problem is much more safely solved using virtual classes (using traits and type members) in Scala than with case matching. There are so many OO solutions to this problem it's not funny; I think I had a small argument with Wadler over this back in 2001 or 2002.
As for persistent collection types, observable collections with undoable operations are just as good, if not better, because they can still support mutability while persistent collections cannot.
I've written compilers in C, C++, C#, Delphi, Java, ML and Lisp; targeting x86, x64, JVM and MSIL. I was maintainer of the Delphi front end at Borland / Embarcadero. Pattern matching isn't just nice; you end up with substantially better typechecking, and you don't need ugly patterns like visitors, with their inverted callback control flow, if you want to decouple e.g. type checking or code generation from your AST classes. Visitors are implicitly closed to extension unless you create very abstract visitors (and I've gone down that path, it's not pleasant) - you lose a big chunk of the extensibility objects give you over ADTs. (I'd prefer to use multimethods, but they are rarely available.)
I think persistent collections, along with nice syntax for updating immutable objects (i.e. cloning with a subset of changed attributes) are more practical than having undo available. Immutable data types remove the burden of worrying about who's going to modify the data type from under you, so it lets you share subgraphs more freely. You don't need to worry as much about coupling, because the things you give your state to can't modify it. A possible alternative is mutable collections with snapshots or freezing, but I think persistent collections are a better approach.
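A tiny Scala sketch of what that buys you, with made-up names: case-class copy gives the "clone with a subset of changed attributes" syntax, and a persistent list shows the structural sharing.

    // Hypothetical example: immutable update by cloning with changed attributes.
    object CopyDemo {
      case class Config(host: String, port: Int, retries: Int)

      val base    = Config("localhost", 8080, 3)
      val updated = base.copy(port = 9090)   // base is left untouched

      // Persistent collections share structure: prepending reuses the old list,
      // so xs can be handed out freely; nobody can modify it from under you.
      val xs = List(2, 3, 4)
      val ys = 1 :: xs
    }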
I've never had to use visitors in my compiler implementations; abstract methods + overrides are good enough. I also use partial classes in C# heavily, even though these don't support separate compilation. I dislike callbacks too, but control flow in general is abstracted away by a flow-analysis framework of some sort anyway (technically you don't need this for parsing, but it works well for the systems I build). On the other hand, scalac is just a giant pattern match in Typers and Namers, a style I dislike, but to each his own.
I've built a live programming environment where reactive mutable collections are much more appropriate than immutable persistent collections; you can check it out here:
The problem with immutable collections in general is that they completely cannot track change deltas, which is necessary when building incremental systems. Undo is also essential for these kinds of systems. I think persistent collections are a dead end, but it's an argument I'll have to work on over the next few years.
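For what it's worth, the shape I mean is roughly this toy Scala sketch (hypothetical names, not any real library's API): a mutable buffer that publishes change deltas to incremental consumers and can undo them.

    // Toy sketch of a mutable collection with change deltas and undo.
    sealed trait Delta[A]
    case class Inserted[A](index: Int, value: A) extends Delta[A]
    case class Removed[A](index: Int, value: A)  extends Delta[A]

    class ObservableBuffer[A] {
      private val items     = scala.collection.mutable.ArrayBuffer.empty[A]
      private var history   = List.empty[Delta[A]]
      private var listeners = List.empty[Delta[A] => Unit]

      def onChange(f: Delta[A] => Unit): Unit = listeners ::= f

      private def publish(d: Delta[A]): Unit = {
        history ::= d
        listeners.foreach(_(d))     // incremental consumers see only the delta
      }

      def insert(i: Int, a: A): Unit = { items.insert(i, a); publish(Inserted(i, a)) }
      def remove(i: Int): A          = { val a = items.remove(i); publish(Removed(i, a)); a }

      // Undo applies the inverse of the most recent delta.
      def undo(): Unit = history match {
        case d :: rest =>
          history = rest
          d match {
            case Inserted(i, _) => items.remove(i)
            case Removed(i, a)  => items.insert(i, a)
          }
        case Nil => ()
      }

      def toList: List[A] = items.toList
    }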
The most complex code I've ever personally worked with involved a proliferation of abstract visitors in Java. I really don't think most people working on the code could understand how it all tied together (myself included).
I live OK without pattern matching, and I build compilers/runtimes for a living; is that Blub conceit?
I also have no problem with the expression problem. C# has partial classes anyway, which work even if type checking is non-modular. Of course, I would like it if C# supported some form of pattern matching, but not enough to switch over to F# (whose pattern matching isn't as rich as Scala's, anyway).
The point was simply that what you wrote is how a very bad argument starts. I took care to point out I wasn't accusing you of that but you have reacted defensively anyway.
It's my fault for trying to engage an HN-er in a form of discourse other than debate.
> As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.
> The expression problem is much more safely solved using virtual classes (using traits and type members) in Scala than with case matching.
Why do you think this? I prefer having an open trait that's only inherited by ADTs.
I can get non-exhaustive match warnings and I still 'solve' the expression problem by matching on a superset of my partial functions (that do the matching).
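Something like this toy Scala sketch, with hypothetical names: the root trait is deliberately left unsealed so later modules can add variants, and each operation is a partial function composed with orElse.

    object ExprOps {
      // Toy sketch: an open root trait plus operations as partial functions.
      trait Expr
      case class Lit(n: Int)           extends Expr
      case class Add(l: Expr, r: Expr) extends Expr
      case class Neg(e: Expr)          extends Expr   // could live in another module

      val evalCore: PartialFunction[Expr, Int] = {
        case Lit(n)    => n
        case Add(l, r) => eval(l) + eval(r)
      }
      val evalNeg: PartialFunction[Expr, Int] = {
        case Neg(e) => -eval(e)
      }

      // The full operation is the composition of the pieces that do the matching.
      def eval(e: Expr): Int = (evalCore orElse evalNeg)(e)
    }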
Virtual classes with family polymorphism attack the problem directly: you can add new variants as well as enhance the super class with new abstract methods; your system is abstract until all variants have implemented the abstract methods of the super class. You can then do cool things like factor all your functionality for X in one layer and Y in another layer, replicating variants in the X and Y layers to specify their implementations separately.
When I did this in Scala (using traits and type parameters in a virtual class pattern), I got a lot of pushback from the FP people, who thought this problem couldn't, or more correctly SHOULDN'T, be solved using object-oriented constructs; it was one of the main things they liked to boast about in how FP was better than OOP, and I literally took that away from them. A 10,000 line pattern match was somehow preferable to a file with 20 traits that described how the new operation was performed per variant. Martin even made my pattern illegal via a new type check in the Scala compiler after I left EPFL :)
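A rough Scala sketch of the shape of the pattern, with made-up names (the real encoding in the OOPSLA paper is richer than this): the family stays abstract until its members are tied down, and a later layer can add variants without touching the base layer.

    // Rough sketch of the trait-and-type-member ("virtual class") style.
    trait ExprLang {
      type Exp <: ExpLike
      trait ExpLike { def eval: Int }

      trait Lit extends ExpLike { def n: Int; def eval: Int = n }
      trait Add extends ExpLike { def l: Exp; def r: Exp; def eval: Int = l.eval + r.eval }
    }

    // A later layer adds a variant without editing the base layer.
    trait ExprWithNeg extends ExprLang {
      trait Neg extends ExpLike { def e: Exp; def eval: Int = -e.eval }
    }

    // The concrete assembly picks its layers and ties down the type member.
    object MyLang extends ExprWithNeg {
      type Exp = ExpLike
      case class LitE(n: Int)         extends Lit
      case class AddE(l: Exp, r: Exp) extends Add
      case class NegE(e: Exp)         extends Neg
    }

    // import MyLang._; AddE(LitE(1), NegE(LitE(2))).eval == -1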
> Class-based programming works great for GUI toolkits.
That and games (and I'd wager simulations in general). Those are the only two domains I'm familiar with that I cannot imagine without OO.
From what I've seen, even UIs and games written in languages without support for OO still use something very close to OO. GTK for instance takes it quite far with GObject.
I spent some time writing Asteroids in netwire - a functional reactive programming library in Haskell - in my free time over a few evenings. I blogged about this at http://ocharles.org.uk/blog/posts/2013-08-18-asteroids-in-ne... - and I don't think it pretends to be OO. It was a radically different style, and I left feeling fairly convinced that FRP is a fantastic model for realtime interactions
FRP is great until you need to be interactive or switch over collections; then it becomes quite ugly. It can work for small games, like the ones in Courtney's dissertation. But beyond that? Not until a complete physics engine can be joined with an FRP library.
FrTime is not really pure FRP as envisioned by Elliott or Hudak. But then I prefer these impure FRP systems and have designed/implemented one myself called SuperGlue [1] as part of my dissertation.
Thank you for sharing. I really enjoyed your netwire Asteroids post. I used Asteroids for teaching myself OO concepts many years ago. It's good to know that it's still a valuable teaching tool.
That and enterprise software. As productive as I am with functional programming, I still find it works better to expose object-oriented interfaces to the world at large.
Some of that's admittedly just down to the window dressing. Object-oriented abstraction mechanisms such as interfaces tend to have a measure of self-documentation built in, to the extent that all the members are named, right down to method parameters. There's no reason you couldn't do something similar with FP; it's just that, as far as I can tell, nobody's ever made a point of doing so.
A compiler is a program that consists of lots of different algorithms (operations) that operate on a pretty well-defined set of data (an AST or an IR). Changes to the data representation are rare compared to changes to the operations on that data.
What OO does in a compiler is totally obscure the control flow of each algorithm by spreading the logic out over dozens of classes. You can look at an AST node and see at a glance how it participates in constant folding or code generation, but if you're trying to optimize the constant folding algorithm or the code generator, you've got to follow control flow across dozens of different classes.
Using the visitor pattern mitigates this somewhat, because you can have ConstFolder::visitBinaryOperation, ConstFolder::visitFunctionCall, etc., all in one file, but note that the visitor pattern is just a really verbose, roundabout way of writing a switch statement! If you add a new type of AST node, you have to add a visitNewAstNode call to the visitor interface, and then go update every class that implements the visitor interface. This is no easier, and a lot more verbose, than simply adding a new variant to an AST ADT and then fixing up all the places where the compiler complains that your pattern match is no longer exhaustive.
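To make the contrast concrete, here's a toy Scala sketch of the ADT side, with hypothetical node names (nothing from a real compiler): the whole constant-folding pass sits in one place, and adding a node type makes the compiler flag every match that no longer covers it.

    object ConstFold {
      sealed trait Ast
      case class Lit(n: Int)                          extends Ast
      case class BinaryOp(op: String, l: Ast, r: Ast) extends Ast
      case class Call(fn: String, args: List[Ast])    extends Ast

      // The whole algorithm in one function; the visitor version spreads
      // this logic across every node class or every visitor implementation.
      def constFold(ast: Ast): Ast = ast match {
        case Lit(n)              => Lit(n)
        case BinaryOp("+", l, r) =>
          (constFold(l), constFold(r)) match {
            case (Lit(a), Lit(b)) => Lit(a + b)
            case (fl, fr)         => BinaryOp("+", fl, fr)
          }
        case BinaryOp(op, l, r)  => BinaryOp(op, constFold(l), constFold(r))
        case Call(fn, args)      => Call(fn, args.map(constFold))
      }
    }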
In Scala, using traits and type variables, the problem is really easy to solve through what I call an open class pattern. See "New Age Components for Old-Fashioned Java" (OOPSLA 2001); note that I'm using Flatt's term "extensibility problem", which is the same as Wadler's "expression problem".
But if you want your control flow to be in one place, then a 10,000-line case match a la scalac should be right up your alley. Frankly, I'd rather divide and conquer, which is what OOP is good at.
You want some way of specifying an interface - that doesn't necessarily imply classes. Also, class-based inheritance tends to be a poor fit for interfaces; note for example that C#/Java have their own special-purpose interface mechanics, and even C++ which allows multiple inheritance tends to use implicit template-based interface implementation instead.
Discriminated unions, on the other hand, are - as the name says - intended for cases where you want to discriminate between multiple possible values. They're by design not intended to hide differences behind an interface; they don't address the same problem at all.
All in all, I think that neither classes nor discriminated unions solve the interface problem (nor were they ever really designed to do so) - that's a different problem.
"Class-based programming works great for GUI toolkits."
I've programmed with three approaches to UI: everything's a subclass, instances of classes, and prototype-based objects. I enjoyed programming UI on the Newton because of the prototype-based programming. It felt natural. NeXT / Apple's instance-of-a-class approach works very well too.
Every time I see "everything's a subclass", I wince and know it's going to be a pain in the butt.
That reminds me of a little quote by Aaron Hillegass in his book "Cocoa Programming for Mac OS X" (p. 73):
> "Once upon a time, there was a company Taligent. Taligent was created by IBM and Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached about the peak of it's mindshare, I met one of its engineers at a trade show. I asked him to create a simple application for me: A window would appear with a button, and when the button was clicked, the words 'Hello World!' would appear in a text field. The engineer created a project and started subclassing madly: subclassing the window and the button and the event handler. The he started generating code: dozens of lines to get the button and text field on to the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its doors forever. "
[1] http://en.wikipedia.org/wiki/Expression_problem