I find it odd that while the article's title is about Java, all of the code examples are in Scala.
The statement "function composition is a degenerate case of the Decorator pattern" in the blog is bizarre to me. OO languages like Java have to resort to the Decorator pattern to handle their lack of composability; that seems far more degenerate than a language that supports this kind of composition without hacks.
Agreed. Most design patterns are workarounds for missing features in a language. You can go around and pretend the reverse is true all you want, but it doesn't make it so. Also, it looks like the author has never heard of decorators in Python, which let you do the same thing if you want, just with much less boilerplate.
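For anyone unfamiliar, a Python decorator is just a higher-order function applied with `@` syntax; here's a minimal sketch (the names `log_calls` and `add` are made up for illustration):

```python
import functools

def log_calls(func):
    """Wrap func so each call is logged -- the 'decoration'."""
    @functools.wraps(func)  # preserve the wrapped function's name/docstring
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}{args}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)  # prints "calling add(2, 3)" and returns 5
```

No interface, no wrapper class, no delegation boilerplate: the "decorator" is three lines of function composition.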
The article reads like it was written by somebody with experience in Java and Scala, but not much more.
So we set about to re-write Querulous using my favorite modularity techniques: Dependency Injection, Factories, and Decorators. In other words, everything you hate about Java.
Hey there Mr. Strawman.
I write mostly Ruby, and I use dependency injection, factories, and decorators liberally. There's nothing in Ruby that makes these any harder than they are in Java (in fact, I'd argue they're easier.)
Two years ago, when this piece was written, the author had more of a point: some people rejected anything that reminded them of Java out of hand, including design patterns.
These days, I'd argue that interest in classic OO design ideas is experiencing a massive resurgence in the Ruby world.
RE: "classic OO design ideas experiencing a massive resurgence"
I don't know about that. I see a whole lot of functional paradigm chatter these days. That wasn't true 5 years ago as I recall.
Then again, I suppose it depends on what color your glasses are: If you're a classic OO devotee you see most of the techniques being put forth as classic OO, and if you know a bit about FP you see it all as the stuff that FP has always been made of, the stuff that OO gurus who cut their teeth on FORTRAN neglected to mention came from FORTRAN.
Yea, people see what they are interested in. At the same time, I am really stoked by what I am learning as the Ruby Community "remembers" classic OO design.
Here are a few recent examples hinting that the Ruby community is moving on from "Fat models and skinny controllers" to the beginnings of really teaching each other about classic OO...
I use a factory for the same things you use them for: to build objects.
For example, I use factories to create test objects that are complicated to set up (using a gem named factory_girl, actually).
Also, in Ruby, every class is a factory for its own instances. If I define a class called User, I can assign the class object User to a variable:
factory = User
Then I can call
factory.new
and receive a new User instance.
As for DI, it's not wrong, nor is it unidiomatic. Plenty of folks in the community use it, and often.
A commenter elsewhere in this thread says that DI is unnecessary in Ruby because you can just reopen or redefine a class whenever you want. That's technically true, but isn't a good practice, as it can lead to some rather rapid foot shootery.
If you want to swap in different-behaving dependencies, DI is a straightforward choice. Reopening an existing class and mangling it can cause very confusing behavior, and is generally avoided by experienced Rubyists.
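The same trade-off exists in any dynamic language; a minimal Python sketch of what constructor injection looks like (all class names invented for illustration):

```python
class SmtpMailer:
    """The 'real' dependency."""
    def send(self, to, body):
        print(f"sending mail to {to}")

class FakeMailer:
    """A swapped-in test double -- no class reopening needed."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class Signup:
    # The dependency arrives through the constructor, so swapping
    # behaviour is explicit and local to the caller -- nothing is
    # monkey-patched globally.
    def __init__(self, mailer):
        self.mailer = mailer
    def register(self, email):
        self.mailer.send(email, "welcome!")

fake = FakeMailer()
Signup(fake).register("a@example.com")
assert fake.sent == [("a@example.com", "welcome!")]
```

Reopening `SmtpMailer` in a test would achieve the same substitution, but every other piece of code that touches that class is silently affected too.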
I thought you perhaps meant something more than class objects.
Regarding DI: Jamis Buck infamously used DI for net/ssh and posted that he regretted doing so, because DI felt out of place in Ruby. I'm not spoiling for an argument about its virtues.
Regarding "experienced Rubyists" and reopening classes, I think you should have a look at activesupport/core_ext.
I really don't understand why people think DI is wrong in Ruby. Decoupling is universally considered a good thing in every programming language. DI is a way to achieve decoupling. What about Ruby makes this unidiomatic?
You can redefine any class in Ruby, so you never need to do dependency injection. If you want to test class A independent of class B, you redefine class B in your test. In that sense, dependency injection is a feature built into Ruby, which is why no Ruby programmer talks about it.
It's been said that every programming pattern is a programming language design flaw. This is evidence in favor of that.
For me, dependency injection is not only a matter of testing, it's a question of clarity. Show me, right in the constructor, what your class depends on.
Dependency Injection is a way to achieve Inversion of Control - meaning the consuming modules aren't concerned with the implementation or instantiation of their dependencies. Dynamic languages have less of a need for Dependency Injection than statically typed languages, but things like instance lifecycle management and policy injection are still very much relevant for good modular design.
The point is, inverting control doesn't need a three-letter acronym in Ruby; it is built into the language. You get those things automatically. No JavaScript programmer ever talked about singletons, and no Ruby programmer ever built an IoC container, because they are not needed. It's a mistake to think these are fundamental elements of programming; they are byproducts of particular languages.
Redefining a class isn't dependency injection; it's a hack to get the compiler to stop complaining. It works, but it's very unclear to the person reading the program that a class your code depends on was actually redefined somewhere else.
For testing purposes, placing this mock class somewhere at the start of your code is clear, but it is only ever practical in that scenario.
I can also imagine that some programmers might not even think of the possibility that some code they depend on was overridden somewhere else.
In real maintainable code where you want to swap out behaviours it's nice if there was a special way of defining your dependencies. This is the same for all languages.
Deject offers a way around fiddling with constructors: instead, you have handles in your classes that you can override with different behavior if needed.
TL;DR: Yes, Ruby is powerful and you technically don't need IoC containers. But to make your code maintainable for everyone, it's a good idea to use them anyway.
Yes, like IntegerFloatAdding. If a programming language needs a pattern for adding an integer and a float, well, you have to call this pattern something. Until such a language is made, we can forget about the "IntegerFloatAdding Pattern".
Just before I saw this post on HN, I realized that the first editor I was presented with when I was being taught Java at university was Emacs.
Emacs felt slow, clunky, and last-century; everything that could have been light, nimble and cool, Emacs made feel boring and hard. I did not get to like Emacs back then.
In fact, I've gone years and years never fully liking Emacs, beyond it being simpler to use than vim (which I still consider insane) while readily available on most shells. It was a safe haven, although an unpleasant one.
Recently though, I've dabbled in Clojure. Eclipse is really not good for Clojure development, especially not for learning how things work under the hood. Emacs is seemingly the lingua franca in Clojure circles, so I was forced to return to my old Java-related enemy. Or at least so I thought.
Out of the box, almost nothing is optimal for Clojure development. So you have to add packages (Emacs supports packages? nice!), you may have to tweak init.el to do so (Emacs is configurable and programmable through its own Lisp dialect? nice!), oh, and you may want to add some repos and map package-install over this collection of useful packages (no way!).
Once you're there and getting productive, you might encounter things you don't like. Google it and find ready-made lisp-code to import into your init.el, and your problem is solved.
I guess that's not really directly related to this article, but before heading to HN now, I had the realization that emacs, despite it being the horrible tool which Java made me hate, is actually pretty awesome.
You need the right perspective to fully appreciate something. This is crucial, and Java taught me this, although in the wrong way. And just as I realize this, I see an article about Java and perspective.
I'd say it's not warranted and it's not needed. After meddling about for a bit, I did a reset and I did it using "emacs starter kit" by technomancy (of leiningen fame):
On top of that, I've just added whatever tweaks and hacks necessary to get screen & emacs behaving properly in a FreeBSD shell, since currently my best and most powerful internet-facing host is a FreeBSD machine.
So as you can see, most of it is repo'd already :)
Language wars bore me. I've got pretty decent proficiency in C, C++, Java, Python and Javascript and I honestly like them all.
I use C to really get what happens on the metal. I use C++ to leverage C and C++ libraries for perf-critical applications quickly. I use Java like C++ where it makes more sense - Android, availability of libraries, etc. I use Python to prototype and quickly express complex ideas and determine their perf characteristics. I use JavaScript for frontend web dev, which communicates with JSON back-end APIs that are language/platform-independent services that scale out horizontally.
I have 2 main rules.
If the question is to learn one important thing or another - I learn both. Then when I come across a new problem I solve it with a combination of the tools at my disposal.
This is why I am currently learning Emacs, Lisp, Clojure, Scala and Haskell to round out my languages, with maybe a little Ruby thrown in. This will take me a few years.
In the end programming is just about getting things to screen quickly and correctly. Seen from that viewpoint most languages are very similar at heart.
The Haskell vs. everything else language war really is important though. All the languages you mention are basically the same thing with slightly different syntax or a slightly different runtime environment. Haskell, on the other hand, revolutionizes the way you approach code. Or at least it did for me.
> The Haskell vs. everything else language war really is important though.
What is remarkable about Haskell is that it builds upon bleeding edge programming language research. While I think that Haskell vs. everyone else is just another language war, I find it a real pity that "mainstream" languages seem to categorically ignore everything that programming language and compiler research has come up with in the last 30 years.
I'm still waiting for someone to find a sweet spot between Haskell and the more mainstream languages. A language that has a dynamic feel to it but is a statically typed (with type inference, of course) safe language that compiles to fast and efficient native code.
Yeah I'm not so sure. If Haskell is so great why isn't it dominating? Where are the Haskell phones, apps, services etc?
Don't get me wrong - I enjoy it. But I've learned a few things throughout my life. First, the one true crowd is always wrong. Second, if it's so good - use it, and prove it.
A lot of functional stuff is a ton of gushing talk - but no product walk.
When Haskell runs on billions of devices or serves trillions of pages or produces trillions in revenue call me. Otherwise, I'll just keep shipping with my lame proven product producing languages.
Functional languages have so far been all talk and no walk.
You do have a few cool, real-world products out there in Haskell (e.g., git-annex, darcs). But to be fair, there is a fairly steep learning curve.
Once upon a time, at my old company, a guy came to an interview. He was very experienced in PHP, having used it to build websites as a freelancer for something like 10 years. Did he know object-oriented programming? He'd heard about it, it was probably good for larger teams. Because he'd just been working with the same techniques all these years. And the sad truth is, your average programmer is likely to be like this guy. In which case, you want to give him a solid language, which is aimed at preventing him from shooting off his own foot, and does not feature many complicated concepts, like Java.
I'm not sure whether Haskell is really that great since I've never got around to really learn it (though I'd love to once I have enough spare time) but I think the dominating languages will be C-likes for a long time. Haskell is too far off from what average Joe Programmer is used to. Most people don't care about a new approach or don't want to invest time learning since the tools they use already work great.
I've learned a few things too, among them, that the majority usually isn't as right as you'd think. Especially in crafts like programming there are habits involved and habits are hard to change. Me, personally, I really like Haskell from what I know about it, simply because it's different.
However, I think there is no "one true paradigm". OOP goes horribly wrong for some things, just as procedural gets horribly complicated for others, and I'm pretty sure functional has a few drawbacks as well. Our real problem is the "use this, not that" mentality. Use what is appropriate for the problem you're trying to solve. That's why I like multi-paradigm languages like Python.
Python is no more "multi-paradigm" than Haskell. In Python, you can write imperative code. If you squint a bit and don't mind some awkwardness, you can write functional code. I suppose you also have objects.
In Haskell, you can write functional code. If you squint a bit, and don't mind some awkwardness, you can write imperative code. (Actually, in my experience, even the imperative code you can write in Haskell is fairly elegant.) Sure, it does not support OO-style code without crazy contortions. On the other hand, it can support other styles like non-deterministic or logic programming.
Probably the most mainstream multi-paradigm language I can think of is Scala.
As the OP demonstrates, it's quite straightforward to write Java in Scala. At my company we mostly write Haskell in Scala. And having played a bit with Akka, I'm beginning to write Erlang in Scala.
What I'd love to see is a multi-paradigm language that isn't so ugly.
I didn't mean to imply that Haskell was some "perfect language", just that you were wrong to imply that all programming languages are essentially the same. Python, Ruby, Java, and C are all essentially the same, but Haskell, Lisp, and Agda are fundamentally different, and learning the former group of languages doesn't really reduce the learning curve for the latter.
Yeah that's not really true either. Haskell is functional.
All that means, and all that ever meant, was that the entire language worked around pushing static data into static functions and giving static returns.
I can do that in C, C++, Python, Java and Javascript. The only difference is that it is really the only way I can use Haskell - whereas the other languages are highly flexible, adaptable and actually help me ship product. Most of my APIs work essentially like Haskell - static pre-determined data in, static data out - and the back-end implementation is irrelevant for consumers.
Disclaimer: I'm a newbie in Haskell. I do, however, have experience in Java, Python, Javascript, and even C/C++.
My understanding is that you claim that all these languages can be made to offer you the same benefits as Haskell. I hope I'm not misrepresenting your position, but this is preposterous.
None of these are built around the concept of dumb, immutable data structures and stand-alone functions (C/C++ can work with immutable data structures, but most data structures I have seen are mutable). Java is almost completely designed around mutable data structures with associated behaviour (objects), up to the point where you don't have standalone functions.
So, sure, you can have copy constructors everywhere, and static methods (say bye bye to dependency injection...). But it's not idiomatic code. You are fighting against the language and the ecosystem, and the only thing that will likely result is inefficient, bloated code which will get refactored out when it needs to be modified by somebody else than you.
Now, maybe you limit this to your API, but claiming that "the backend implementation is irrelevant for consumers" is a cop-out. Immutable data buys you safety. Maybe you know your code is safe, you don't have any dangling pointer issue anywhere, no concurrency issue, etc. Unfortunately, this might very well not be the case next time you or somebody else refactors your code.
There are certainly reasonable arguments to be made against Haskell (learning curve, Cabal, size of the ecosystem, difficulty of reasoning about performance...). But claiming you get the same benefits in terms of safety out of traditional imperative languages is not one of them. Not to mention higher-level functions, etc, that you certainly are not about to experiment in Java.
Nah. Just like the business side is constantly trying to figure out ways to not have to use programmers anymore, Haskell is all about trying to figure out ways to make mathematicians more useful.
Actually I'd rather stick with what we have. Worse is better and all that. I find Haskell to be like reading a cross between perl and the dictations of a crazy person chanting in tongues.
The underlying principles however are damn good, but they are not enough to salvage it.
I write in all kinds of languages professionally and have been doing this sort of thing for a long time (since I was 9 and I'm now 31), so, I'm mostly over the language war thing. Yes, I prefer some languages over others. Or certain languages in certain situations. But I try not to let my opinions get out of hand. Everything is pretty much the same anyway.
Java? I can see the appeal. It's really come a long way. The only thing I can't get over about Java is the goddamned verbosity of it. It's _excruciating_ to write anything in Java.
Yes, I know. Use a modern IDE. The namespaces are designed that way for a reason. Just deal with it. It's not a product of Java itself but of the libraries. Blah blah.
It's still a painful environment to deal with and, with developer time several factors more important than just about anything else, I value my time highly and I'm not going to waste it on AbstractSingletonProxyFactoryBean bullshit.
Now, yeah. If you're a huge corporation that is bumping up against the law of large numbers, where most of your developers are mediocre and yet still have to work together, then I guess Java is OK.
I've found things like Lombok and LambdaJ to be super helpful as far as verbosity.
One thing that does frustrate me is that the compiler enforces a target bytecode level at least as high as your source version. Meaning that even the simpler syntax for initializing generics isn't allowed even though it's pure syntactic sugar and has no impact on the generated bytecode.
Does anyone have evidence (including anecdotes) that Java 7 is simple enough for mediocre developers? Every new feature in Java that I have seen, starting from and including generics, seem like the non-orthogonal hacks that make C++ so frustrating to learn.
I thought Rob Pike had a good point that, if "design patterns" had been a part of our language 40 or 50 years ago then things like calling convention would have been called "design patterns". Of course we don't talk about calling convention like that these days because languages handle it for us.
Not sure how Java is better than all the "hipster" languages as you say. It seems like you're just using some design patterns that are present in basically every language.
Clojure has futures too, but the syntax isn't as clunky as Scala or especially Java. All these synchronization primitives are in basically every other language. Maybe I'm missing something but I don't really see what is special about Java here. Has the OP ever tried any "hipster" languages, he might be pleasantly surprised if he did.
dependency injection : currying
factories : type constructor
decorators : higher order functions
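Those correspondences can be sketched in a few lines of Python (with functools.partial standing in for currying; all names are invented for illustration):

```python
from functools import partial

# "dependency injection" as partial application: fix the dependency up front
def fetch(http_client, url):
    return http_client(url)

fake_client = lambda url: f"<html>{url}</html>"  # stand-in dependency
injected_fetch = partial(fetch, fake_client)     # dependency is now baked in

# "factory" as a plain callable that builds values
make_point = lambda x, y: {"x": x, "y": y}

# "decorator" as a higher-order function: wrap behaviour around a function
def shouting(f):
    return lambda *args: f(*args).upper()

loud_fetch = shouting(injected_fetch)
print(loud_fetch("example.com"))  # <HTML>EXAMPLE.COM</HTML>
```

The point isn't that the patterns vanish; it's that each one collapses into an ordinary function-level idiom.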
There's nothing wrong with DI and the like. What's wrong is the way it's done in idiomatic Java. When you do DI in idiomatic Scala, there isn't much of a problem.
I no longer care. Java is overly verbose, but therefore explicit. I'm happy to do either concise Scala with more implicit stuff going on, or more explicit stuff in Java with lots of Spring config.
Opinionated frameworks which hide stuff from me are bad. They're good for initial productivity, but you pay the price later - you have to understand it to tweak it, and may find the architecture doesn't fit your problem.
On the other hand, writing a Spring config file from scratch is a horrific process, especially without the Spring eclipse plug-in.
But now that I understand web.xml's and application-context.xml/*-servlet.xml, and have written plenty of Spring config, my productivity is pretty damn good. So why should I care?
FWIW I don’t think the Clojure or Scala communities have any kind of “hate” for Java. Clojure views Java as an ally for many of the reasons you discuss. I think you’d be hard-pressed to find people in the Clojure or Scala camps that refuse to acknowledge there’s a right time to descend into Java. Likewise, I think any Ruby programmer worth his/her salt would tell you: if you’re trying to process 20,000 messages per second, don’t rely on Ruby. Moreover, JRuby, Clojure, and Scala are all quite fond of Java interop, so I’m not sure I agree with your opening statement that people who use said languages are “hipsters” who blatantly ignore the benefits of changing the level of abstraction when the situation warrants it.
AbstractFactoryFactory seems like an inevitability of object orientation. I have to wonder why we don't see it as often in other OOP languages. I'm not versed in Ruby or Python so I can't comment on those. In JavaScript there aren't concise class or extends/implements keywords, and people tend to hide how a library is implemented behind a closure that prevents you from extending it.
So I wonder if Java just gets the blame because it is the most pure object oriented language. It doesn't put up walls preventing you from doing what OOP really wants you to do. C# is the same way, by the way, you see IThisOnlyExistsToBeGeneric<T> all of the time; AbstractControllerBase is a real class in C# MVC if I remember correctly.
What walls do Ruby and Python have that prevent this? Because I see no reason why people aren't doing it.
My guess would be that most factories in Java are just very thin wrappers that do "if this, return new A(); if that, return new B();". This just doesn't need to be done explicitly in a dynamic language. Instead of creating a factory, you could pass the class itself as an argument. In Python, a class is also a factory of its own instances, so the whole reason for wrapping it in another object disappears.
And if you do want some logic in the factory, you can just create one without changing the interface. The caller doesn't care when calling x() whether x is a type, a factory, or something else. As long as it returns an instance, everyone is happy. (That makes factories disappear from the signatures.)
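To make that concrete, a toy Python sketch (classes `A` and `B` are placeholders):

```python
class A:
    pass

class B:
    pass

def pick(flag):
    # Instead of a FactoryFactory, just return the class itself:
    # a class is already a factory for its own instances.
    return A if flag else B

factory = pick(True)
instance = factory()  # same call site works for A, B, or any callable
assert isinstance(instance, A)

# Adding real construction logic later doesn't change the call site:
def logged_factory():
    print("creating B")
    return B()

assert isinstance(logged_factory(), B)
```

The call site `factory()` never learns whether it was handed a class or a function, which is exactly why the signatures stay factory-free.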
I disagree; there's nothing pure about Java as an OOP language. In an OOP language, as defined by Alan Kay - who coined the term and popularized the concept - the key characteristics are that everything is an object and objects send messages to each other, while hiding their own state. Classes are not fundamental, but if they exist, they must be objects themselves.
As for JavaScript, the fact that people tend to use sealed closures is not a defining characteristic, any more than having final classes. JavaScript has prototypes, and they can be extended just fine.
"[java]’s a pop culture. A commercial hit record for teenagers doesn’t have to have any particular musical merits."
Could you clarify what you mean by "most pure object oriented language"? This seems disingenuous to me, since if you ask the users of a dozen different OO languages what "OO" means, you'll get a dozen different answers.
My best guess would be object-oriented in the Smalltalk/Self/etc. sense, as other posters have alluded to. It's a hard distinction to understand and qualify. Jim Coplien gave a keynote at SPLASH this year about this distinction that left many people confused, but the big takeaway seemed to be that people limit the expressiveness of real object-orientation by instead focusing on classes and prototypes and whatnot. What he sees as a potential embodiment of real object-orientation (together with Trygve Reenskaug, inventor of MVC) is a paradigm they are calling DCI (Data, Context, Interaction): http://en.wikipedia.org/wiki/Data,_Context,_and_Interaction
Neither Smalltalk nor Self had interfaces, and never needed them. Let's be clear, interfaces give nothing to a dynamic language. If you bring them into a dynamic language, you are essentially inventing code contracts - which can be useful, but they serve a fundamentally different purpose than a Java interface.
The reason you don't see interfaces or abstract classes in Ruby or Python is that they are dynamically typed. Interfaces and abstract classes are byproducts that are required for Java and C#'s type systems to work. What use would you have for them in Ruby?
I still reason in terms of interfaces when passing arguments to a function, though (e.g. argument X needs to behave like a string). It will later be verified at runtime and when running the tests, but it's how I reason when structuring the code.
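In Python, that informal "behaves like a string" reasoning can even be written down with typing.Protocol, without turning it into a Java-style interface; a sketch (the protocol name is invented):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Stringish(Protocol):
    """What 'behaves like a string' means for this function."""
    def upper(self) -> str: ...

def shout(x: Stringish) -> str:
    # Nothing is enforced at call time beyond duck typing; the Protocol
    # just documents the expectation for readers and static type checkers.
    return x.upper()

print(shout("hello"))  # HELLO
```

Any object with an `upper` method satisfies the protocol structurally; no class ever declares that it implements `Stringish`.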
Mentioning Servlets seems pretty out of place here. The Servlet spec isn't a web framework; it's an HTTP middleware. It's more similar to Rack than Rails. Both GWT and Jersey are commonly deployed on top of Servlets.
Because in the past you had to use servlets directly, or JSPs, where you had to do a lot of manual work to separate business logic from the presentation layer.
Source: I worked at a Web startup circa 2000 where we did exactly that. I think people sometimes forget how far we've come.
So true ... and on the server side you have JEE6 with CDI. You can even extend DI to a GWT client in the browser by using Errai. Life is way easier than it was even 3 years ago.
" Our life is frittered away by detail. Simplify, simplify, simplify! I say, let your affairs be as two or three, and not a hundred or a thousand; instead of a million count half a dozen, and keep your accounts on your thumb-nail. "
-- Thoreau
This is pretty much why I find Java (and others at similar levels of complexity) distasteful and try to steer clear of anything related to it. It is also why Go appears to me as attractive: it has simplicity everywhere.
My experience at Amazon has been that people drink the config kool-aid. They make everything configurable, but over the lifetime of the app never end up changing anything from the sensible defaults.
At that point you've just added unnecessary bloat.
There is something nice to be said about code that works right off the bat with sensible defaults.
I think people get too caught up on the verbosity side of code. Most devs spend considerable amounts of time debugging, and that's probably the suckiest part of the job. The amount of time spent writing the code (especially for someone who types quickly, as most devs do) is immaterial compared to the pain of trying to debug a nasty bug or perf problem (especially in a complex system). That's where verbosity helps a lot, as you don't have to dig into the library code to understand what's going on behind the scenes, because most of the relevant configs and choices made in regards to how to use the framework are laid out in front of you.
As someone who has spent a little time with F# and other languages where currying is the default behaviour for any function, this is the one thing I find myself missing the most in Clojure.
Yes, I see why we need partials and can't have currying by default. It would seriously mess up destructuring. Yes, I get that. I still miss it though.
I think the author of this article has too much exposure to Rails 2.x and not enough core knowledge of Ruby. If he had actually known the language well, he wouldn't have made any references to the deprecated Rails hack alias_method_chain. Instead, he would have used modules to exemplify.
The problem is, though, that when doing it with modules instead, the points he is trying to make totally disappear.
I reject the idea that because I want to scale, I should have to write thread pools, executors, concurrent queues, and task completion APIs. This is 2012; any idiom, functional or not, that doesn't support this as a built-in capability is less than worthless. How many times do I have to write, rewrite and debug the obvious just to make a living? And as for the factory pattern, yes, you can use it to hide a whole bomb load of configuration and code evil that you probably should never even have to worry about, if you weren't working in such a crap tool.
Which is why I love working in a Java EE application server like Geronimo where very smart people have gotten paid lots of money to do that boring grunt-work for me, and I can just spend my time building features and deploy a nice scalable system when I'm done.
Also this article is 2.5 years old.