This article has so much crazy that I almost don't know how to respond to it.
C++ coddles programmers and Ruby doesn't. Really? That sounds like a sensible thing to say?
C++ avoided reflection because it would be misused by programmers? Only a profound misunderstanding of the design goals of C++, or having never actually used it, could inspire such a claim.
ActiveRecord is a Rails innovation that took decades to realize? What? Why would you think this?
c) Most failed Ruby (which in all fairness usually means Rails) projects never gain any publicity.
d) Ruby projects coming close to the gargantuan scope of some of the bigger Java projects are (thankfully) exceedingly rare (if not non-existent).
Lastly I had no clue that Saeed al-Sahaf is now working for Object Mentor, awesome!
Disclaimer: I worked for 3 years as a Ruby developer, including on one of the bigger (> 50,000 LOC) Rails projects out there, and I encountered some crashing and burning myself (I know I did it wrong).
The author acts as though languages like Ruby, which defer to the programmer not to screw himself, are a new thing. That the "enterprise" has been trapped with C++, C#, and Java for the last two decades says less about language designers in general (as the author posits here) and more about the thinking of large organizations when choosing the languages they build their software in.
He admits that he doesn't know Ruby deeply, but then he goes on to assert that Ruby people don't seem to create the kind of messes that paint them into corners.
From what I've read plenty of people have done just that in Ruby because of monkeypatching.
My first Ruby programming experience was with a company sunk by Rubyists, their asinine architecture, and constant monkey-patching.
Rubyists most certainly DO hang themselves with monkeypatching, quite often. You just don't tend to hear about the ones that do, because the sites they build that way don't survive long enough for anyone to notice, assuming they even launch successfully, which is rare.
I think in fact that because Ruby is a much more forgiving language than, say, C++ (the same goes for Python), the average caliber of Ruby developers is FAR lower than the caliber of, say, C++ and Lisp developers. My experience with Ruby and Python developers hasn't been positive -- in fact, working at a Python shop gave me an intense distaste for the language, and my experience at a Ruby shop led to similar sentiments among a lot of the senior developers who worked there.
You're confused. Rubyists didn't sink your project. Bad programming (and management) did. They would have fucked up the project in Java or Haskell or Perl or anything else.
Oh, and having interviewed C++ developers, I am not sure I can agree that the average C++ programmer is of higher caliber than the average Ruby programmer. C++ is something that these people learn because they think they will get a high-paying job in the financial sector. Ruby is something people learn because they think programming is cool. (Admittedly, the Ruby community's aggressive marketing has gotten a lot of people that shouldn't be programming into programming, and that's bad for their community. But wanting to be a cool Rubyist like _why causes a lot less damage than wanting to have a high-paying job.)
Anyway, Ruby is fine when used by someone with a clue. C++, not so much.
> Ruby is something people learn because they think programming is cool.
In my experience this is a myth, or at least a stereotype more prevalent in the marketing of Rails than in reality.
I had the strong impression during my stint in the Rails world that Ruby was mostly learned as a signaling device: the young developer slaving away at some random Java or PHP job, dreaming of making it big in a startup, with the mistaken belief that simply hanging around the in-crowd will magically catapult him into the top 5% of the profession.
Sure, there are good people in this community, but they became good by working hard for a very long time, not by writing a trivial Twitter client using the currently hot testing framework/NoSQL database on their MacBook Pro in a Starbucks after skimming the Pickaxe book.
> wanting to be a cool Rubyist like _why causes a lot less damage than wanting to have a high-paying job
Why should chasing fame be inherently better than chasing money?
Just speaking from my own experience, I learned way more from cynical mercenaries than from wannabe rockstars, and I can't see why emulating, e.g., Obie Fernandez would cause less damage than aiming for a position as a technical fellow.
"You're confused. Rubyists didn't sink your project. Bad programming (and management) did. They would have fucked up the project in Java or Haskell or Perl or anything else."
I'm not confused at all. Rubyists sank the project, plain and simple. I actually still like Ruby, because I had a chance to do some cool stuff with it before our inferior Rubyists wrought their havoc, and I agree that had we had decent management, they'd have been fired.
My only criticism of Ruby is that because it's easy, it allows vastly inferior programmers to program, which lowers the bar drastically. I believe a number of the other senior developers there won't touch Ruby at all now; I'm not one of them.
However, given one of the jobs I received e-mail about, I'd say that the salaries Rubyists can get put them in the worst of both worlds -- it attracts people who want to be cool and people who want high-paying jobs. Plus, it's easy to learn badly, which sets the bar very low.
"My only criticism of Ruby is that because it's easy, it allows vastly inferior programmers to program, which lowers the bar drastically. I believe a number of the other senior developers there won't touch Ruby at all now; I'm not one of them."
Wrong. Every language allows this. Have you ever read enterprise Java or C++ code?
Actually it's entirely true. Ruby allows VASTLY inferior programmers to program. Java and C++ set slightly higher bars; at least with them you have to reach inferior, as opposed to vastly inferior, in order to make stuff work.
I've been coding in Ruby/Rails for 3 years now and been active in the community, and I haven't yet done that or encountered anyone who has. People are warned that monkey-patching is dangerous, they learn how to monkey-patch responsibly, and they act accordingly. If something does go wrong with monkey-patching (which, as I've said, seems not to happen), the person who did the patching is aware of why things are going wrong and doesn't blame it on the language. I can't remember reading a single query in, say, #rubyonrails where the problem was due to monkey-patching.
But the problem is that hacks which don't adversely affect a single module do adversely affect the composition of modules. Remember how you can't use Rails and some full-text indexing package together because they both monkey-patch Array differently? That's a major problem.
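To make that collision concrete, here's a minimal Ruby sketch (the libraries and the `to_index` method are invented for illustration; this is not the actual Rails or indexing-gem code): two libraries reopen Array, define the same method with different semantics, and whichever file loads last silently wins.

    # Hypothetical sketch of a monkey-patching collision.

    # --- stand-in for one library's patch ---
    class Array
      # Joins the elements into a display string.
      def to_index
        join(", ")
      end
    end

    # --- stand-in for the other library's patch ---
    class Array
      # Maps each element to its position, silently replacing the earlier definition.
      def to_index
        each_with_index.to_h
      end
    end

    p %w[a b].to_index
    # => {"a"=>0, "b"=>1}
    # Code written against the first patch now gets a Hash where it expected a
    # String, and the breakage surfaces far away from either patch.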
Ruby programmers get themselves into plenty of trouble. Of course, so do Java programmers, or any other programmers. The problem is people, not programming language features. (Except manual memory management, as per C. That's a language problem.)
"From what I've read plenty of people have done just that in Ruby because of monkeypatching."
It tends to be the people who think of modifying open classes as "monkeypatching" who fuck it up, because they still see it as some sort of quirk of the language rather than just another part of the language ecosystem.
Another problem I see with creating artificial barriers is the demoralizing effect it can have on someone trying to learn a language. Whenever I'm prevented from doing something because some committee thought it was too complex for me, I begin questioning whether I'm using the right tool. Who knows what other random rules are going to jump out and bite me later?
I would much rather do something the wrong way and be burned by it (and probably learn a lot in the process), than be prohibited outright from using certain techniques.
I like the idea of having the freedom to do everything but also having intelligent defaults to nudge the programmers in the right direction. Optional immutability is a great example of this. When you make something the default, you send the message "this is usually the right way to go about things".
When immutability is optional, you can no longer rely on it to reason about the program. Specifically, you have to verify that any two pieces of code you're worried about actually don't interact, because you no longer have a guarantee that they can't. I think a lot of restrictions are like this.
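A minimal Ruby sketch of that point (assuming Ruby 2.5 or later, where mutating a frozen object raises FrozenError): `freeze` gives you immutability on request, but because it's opt-in, you can never assume it holds for data you didn't freeze yourself.

    # Immutability is opt-in via #freeze.
    config = { retries: 3 }.freeze

    begin
      config[:retries] = 5          # rejected only because we happened to freeze it
    rescue FrozenError => e
      puts "mutation rejected: #{e.message}"
    end

    # Nothing stops other code from passing around an unfrozen hash of the same
    # shape, so "this data never changes" is a convention you have to verify,
    # not an invariant you can lean on when reasoning about two pieces of code.
    other = { retries: 3 }
    other[:retries] = 5             # perfectly legal
    puts other.frozen?              # => false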
Edit: To quote a great comment on the article,
> Invariants make software more reliable. Weaker languages have stronger invariants. There’s less stuff to test.
From the outset, I'm aware that programmers who haven't tried to design programming languages will likely disagree with me; but I'll express what I've learned anyway.
The truth, from the language designer and compiler author's perspective: programmers should be protected from their own worst instincts, and programming languages that do this well are of a higher quality than languages which don't. Patronizing, to a point, is good. The flip side is that features whose use should be encouraged should be included in the box, turned on by default, made easy to use, etc.
Ruby is patronizing because it's memory safe. C and C++ programmers don't like this kind of patronization. Ruby, and Python, are unlike C and C++ in this way: they are rigid and uncompromising in their type safety, disallowing unsafe casts and operations.
Michael makes some category errors in his rant. For example, he says:
"[...] we’ve made the assumption that some language features are just too powerful for the typical developer -- too prone to misuse. C++ avoided reflection"
The problem with this claim is that C++ avoided reflection in its quest for power (or what C++ users thought was power), rather than because it was avoiding power: C++ users wanted to hew to a philosophy of "pay only for what you need". The problem with that philosophy is that if you make certain features optional, or pay-as-you-go, then the community in general cannot assume that everyone is using the available features. Instead, third-party library authors must assume that only the lowest-common-denominator set of features is available if they want to maximize usage.
Java and C# have avoided multiple inheritance for good reason; and Michael's rant is not a good reason for them to reintroduce that feature, because it remains problematic.
"The newer breed of languages puts responsibility back on the developer"
This is simply untrue, and bizarrely myopic, in my view. The developer always has responsibility, but the responsibility has shifted to different places, as the emphasis on abstraction level in programming languages has shifted. To take Michael's thesis at face value, you would need to believe that C mollycoddled the user and didn't bite their fingers off when they didn't take care of their pointers, or carefully handle their strings, or foolishly used fixed-size buffers for dynamic input.
Of course C put responsibility in the hands of the developer for these things. But guess what? Ruby, Python, C#, Java etc. all take away responsibility from the developer for these things! Michael says that dynamic languages like Ruby hand over the power of monkey-patching etc. to the developer, and that this is a new development; but to get equivalent power of dynamic behaviour overriding in a C application, you'd be using very abstract structures, likely looking like a Ruby or Python interpreter behind the scenes, where you would have a similar degree of responsibility. But not only that; you'd also be responsible for the dynamic runtime, as well as the memory management and all the other unsafe stuff that comes with C.
Interesting points, but I think you're bordering on straw manning the author. I'll grant you that maybe C++ is about as bad an example as you can get of putting responsibility in the hands of the programmer.
In fact, C++ was behind a lot of Java's decision to take that power out of developers' hands. I would argue that Java was an overreaction: it corrected the safety issues, but it also removed a lot of the ability to create abstractions. Want a global variable? Those are bad, don't use them. Want multiple inheritance? That's bad, don't use it. Want operator overloading? That's bad, don't use it.
The problem is that the fact that those things are bad in some cases (even arguably most cases), doesn't mean they should be forbidden. I know what I'm doing. If I want to use a global variable I should be able to. I know what I'm writing. The language designer doesn't.
In other words, to make a long story short: I don't think he was trying to say that responsibility needs to be in the developers' hands in all cases. But I think that he makes a valid case that newer languages are right in moving some of that responsibility back to the programmer.
Java supports global variables just fine -- in fact, most Java code uses far too many globals for my taste.
Multiple inheritance and operator overloading aren't bad, but at the time Java was designed, nobody had any idea how to use them properly. Python 2 gets MI right (though Py3 fucks it up a bit with magical super()), and Haskell provides a clean implementation of overloading, but it'll be years before these improvements filter through to Java (if they ever do).
> Multiple inheritance and operator overloading aren't bad, but at the time Java was designed, nobody had any idea how to use them properly.
It's not hard to figure out a better way to do it than not doing it at all, with a few hours' thought. And in fact Dylan got both multiple inheritance and operator overloading right before Java existed.
Method ordering is the most significant problem with multiple inheritance. You arrive at either the simple solution (just use the lexical ordering of the superclass list) or the more complex one (Dylan's method ordering, now also in Python) purely by reasoning logically. I'm pretty sure someone like Guy Steele would be able to figure this out.
Operator overloading: what could possibly be worse than x.multiply(y)? You either have a fixed list of operators that you can override by defining a method in a class, or you have an expandable set of operators for which you can specify the precedence and which way it associates. This is not a hard problem.
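For what it's worth, Ruby (sticking to one language for the thread) is a working example of the first option: a fixed operator set that a class overrides by defining an ordinarily named method, with precedence left untouched; and its mixin linearization shows that a deterministic, inspectable method ordering is achievable. A rough sketch of both, using made-up classes:

    # Operator overloading, option one: override a fixed operator by defining
    # a method of the same name; precedence and associativity stay fixed.
    class Vec
      attr_reader :x, :y

      def initialize(x, y)
        @x, @y = x, y
      end

      def *(scalar)                  # Vec * number
        Vec.new(x * scalar, y * scalar)
      end

      def to_s
        "(#{x}, #{y})"
      end
    end

    puts Vec.new(1, 2) * 3           # => (3, 6) -- arguably nicer than v.multiply(3)

    # Deterministic method ordering: Ruby linearizes mixins into a single,
    # inspectable ancestor chain (not full multiple inheritance, but the same
    # "pick one predictable order" idea).
    class Base
      def name
        "hello"
      end
    end

    module Loud
      def name
        super.upcase
      end
    end

    module Shout
      def name
        super + "!"
      end
    end

    class Greeter < Base
      include Loud
      include Shout                  # included later, so it comes earlier in lookup
    end

    p Greeter.ancestors.first(4)     # => [Greeter, Shout, Loud, Base]
    puts Greeter.new.name            # => "HELLO!"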
There are two problems: everything that you add distracts the human element, and adds some computational cost.
The human element ranges pretty far: from the neophile trying to grasp every piece, to the seasoned veteran evaluating a new language, to the maintainer considering which part to improve or which patch to accept, and so on.
Not every lawn needs a thousand pink flamingos, even if they're better than no flamingos at all.
"Multiple inheritance and operator overloading aren't bad, but at the time Java was designed, nobody had any idea how to use them properly."
I think this is an example of the kind of reasoning Feathers was talking about. Perhaps it did make sense at the time, but that's largely irrelevant. What is relevant is that the assumption that programmers aren't capable of making these decisions on their own doesn't hold up these days.
For what it's worth, some "patronizing" limits actually enable optimizations that are otherwise not possible.
For example, if anything might be a pointer and you can point to an offset inside a struct, then you can't have deterministically perfect garbage collection, and you can't have garbage collection that moves things, for example to compact the heap. Java solves this by only allowing references to "the top of" objects and by stopping you from casting pointers. Consequently, Java's runtime knows what's garbage, and it can move things around with impunity.
Or consider Clojure, where immutable data means that you can grab a snapshot of a value without read-locking.
The problem with saying "you can do things in whatever way you want in Ruby" is that the moment you have more than one developer it collides rather drastically with the "principle of least astonishment". This is not to say that making everything out of eg Java boilerplate is the solution. The key is writing well-behaved and consistent code, which you can do in most languages.
This argument is common, but I think it's bogus. There's plenty of astonishing Java out there.
More rigid languages don't lead to less astonishment under complexity. A given line of code may contain less astonishment, because there are fewer possibilities for what it might be doing, but what matters is global understanding, not local. The meaning of a program is not the sum of the meaning of its parts. Otherwise we'd all be writing in assembler; it's easy to say what this is doing, locally:
mov eax, [ebp + 8]
> The key is writing well-behaved and consistent code, which you can do in most languages.
But the set of problems for which a well-behaved and consistent program is readily writeable is not the same for all languages.
The position I take is that while you can write well-behaved and consistent code in any language, there is no language that forces you to do so. So picking a language on the basis that it will force/encourage people to write well-behaved and consistent code -- even if they don't want to, don't know how to, or don't give a toss because they're a cheap contractor six time zones away -- is a broken choice.
Programming languages are UIs. Your assertion is tantamount to saying that there is no such thing as a bad UI; that a choice between UIs is a broken choice, because you can still get your job done no matter what UI is there, or similarly you can still do a crappy job no matter what UI you use.
But this is just wrong-headed thinking, to my mind. UI definitely influences user behaviour. One thing currently in vogue is game mechanics; the whole point of using game mechanics in your UI is to encourage certain kinds of behaviours, and it works, dagnabbit! Most UIs that have any coherence to them at all have a way of working with their grain, and many more ways of working against their grain. If they're good UIs, working with the grain ought to reduce your non-essential work load and make working a pleasure. If they're poor, you'll have to work against the grain, fighting the design, to get your job done.
Programming languages are directly analogous to this.
I agree with everything you're saying about programming languages having a grain, and about making some things easy and other things hard. That's the general case.
However, the two specifics we're talking about are well-behaved code and consistent code. Most languages, in theory, make it ridiculously easy to re-use code, to make code DRY. Languages like Ruby actually give you more tools for making code consistent than languages like Java.
Yet, there is a lot of awful Ruby code and some elegant Java code. I've seen some amazing assembler code. It seems that somehow languages don't seem to have a lot of influence over what we might call architecture.
Perhaps the issue is that languages are hand tools, but programs are cathedrals.
p.s. If I had to bet, I'd say there is more of a correlation between languages and architecture than a causal relationship. Paul wrote about this some time ago: http://www.paulgraham.com/pypar.html
> The problem with saying "you can do things in whatever way you want in Ruby" is that the moment you have more than one developer it collides rather drastically with the "principle of least astonishment".
I've solved that problem by actually talking with the other developers and agreeing to some common practices.
Not every problem requires a technical solution. Social solutions often work just fine.
That works great for two people. Maybe even 20 people. But try to write a 1M+ line app by meeting at the water cooler.
One of my favorite stories: we had hired a bright young dev many moons ago. Months into his work we were trying to fix a terrible memory leak, and it looked like most of it was coming from his code. We asked him, "Where do you free this memory? We can't find it anywhere -- you clean up everything else." His response: "Wait, how do you free memory?"
I had the opposite problem at a student intern gig. I hadn't experienced smart pointers before, and was double freeing memory accidentally. The fun part was that the unit tests for my code always worked. It was always someone else's code which appeared to break inconsistently after my change ;)
You live, you learn. We picked it up remarkably quickly in the end, basically because the company had their process working right: clean codebase, solid testing, and regular code reviews.
"Weak developers will move heaven and earth to do the wrong thing. You can’t limit the damage they do by locking up the sharp tools. They’ll just swing the blunt tools harder."
Which is so very, very true in my experience.
Designing a language around the idea of protecting weak developers from bad choices is a recipe for failure and mediocrity. Instead, look toward designing in a way that guides experienced or at least thoughtful developers toward greater success.
tl;dr: Don't make bumper cars for people who can't be trusted to drive; make Nomex suits and roll cages for race car drivers.
It's not about being patronizing. It's about recognizing our human limitations: "As a slow-witted human being I have a very small head and I had better learn to live with it and to respect my limitations and give them full credit, rather than try to ignore them, for the latter vain effort will be punished by failure." (Dijkstra, EWD249)
> I understood Dijkstra to be arguing for simple languages, being more susceptible to formal proof, easier to understand, etc.
Yes, he was. In a way, though, isn't Dijkstra's sort of "designing for simplicity" and even the goal of making a language "easier to understand" the very definition of "patronizing"?
I don't think so. For one thing, he wasn't just driving at "easier to understand"; he was also after "easier to formally prove correct", which isn't at all patronising.
I also don't find a goal of "easier to reason about" particularly patronising; it makes the rather un-patronising assumption that the target audience is willing and able to reason about the matter at hand.
The language features I find patronising are the ones that make it hard to do things a certain way, regardless of how carefully considered my reasons are.
> The language features I find patronising are the ones that make it hard to do things a certain way, regardless of how carefully considered my reasons are.
Any in particular that you have in mind? Historically, many people have had very carefully considered reasons for using features like goto, type puns, etc. but many languages exclude them as problematic. Is it patronizing to exclude such features even from the toolbox of programmers who carefully consider their use?
My current least favourite patronising language feature: in C#, you can only override a method if the base class lets you.
In answer to your second question, I think it depends. Including gotos in a language is problematic for reasons other than the fact that programmers tend to misuse them. I think it comes down to the rationale. With the C# feature I mentioned, the stated reason was, inter alia, that programmers tended to misuse inheritance. I found that patronising. Leaving out goto because it makes programs easier to formally prove correct, I do not. Leaving out goto because programmers tend to misuse it, that I would find patronising.
I apologize for not having the time to make this comment appear less like interrogation and more like a discussion ("if I had time, I would have written a shorter letter"), but I'm in a hurry :) Rest assured, I'm not arguing with you, but genuinely interested in your responses.
> My current least favourite patronising language feature: in C#, you can only override a method if the base class lets you.
My recollection from the design of Java is that such "finality" is necessary to allow the type system to reject code that attempts to override security-sensitive methods: is that not still true of C#?
> I think it comes down to the rationale. With the C# feature I mentioned, the stated reason was, inter alia, that programmers tended to misuse inheritance. I found that patronising.
What of a language that eschews inheritance entirely, both because it makes reasoning about software more difficult and because the vast majority of programmers cannot use it correctly?
"Patronizing" seems to indicate that a designer considered himself smart enough to use a feature, but determined his "subjects" were not. What of designers who recognize their own limitations and remove features they know to be error prone in their own practice of programming? Is that still patronizing, or does it deserve a different descriptor?
> My recollection from the design of Java is that such "finality" is necessary to allow the type system to reject code that attempts to override security-sensitive methods: is that not still true of C#?
That's still true. The "feature" I had in mind is that methods in C# are not virtual by default. This is in direct contrast to Java, where they are. This interview has a discussion of the motivation behind this (IMO incredibly tiresome) feature: http://www.artima.com/intv/nonvirtual.html.
My view is that the real issue is that inheritance is all too often used when composition would be more appropriate, a problem this change does nothing to address.
> What of a language that eschews inheritance entirely, both because it makes reasoning about software more difficult and because the vast majority of programmers cannot use it correctly?
If it only makes the language better for "bad" programmers, I would consider it patronising. It also comes down to the intent of the language designer, by definition.
> "Patronizing" seems to indicate that a designer considered himself smart enough to use a feature, but determined his "subjects" were not. What of designers who recognize their own limitations and remove features they know to be error prone in their own practice of programming? Is that still patronizing, or does it deserve a different descriptor?
I think that's worthy of a different descriptor. A failure to anticipate the needs of programmers who are smarter or more disciplined or whatever - which is what such an omission seems like to me - may be a problem, but I don't see it as patronising.
Like the creators of sitcoms or junk food or package tours, Java's designers were consciously designing a product for people not as smart as them. Historically, languages designed for other people to use have been bad: Cobol, PL/I, Pascal, Ada, C++. The good languages have been those that were designed for their own creators: C, Perl, Smalltalk, Lisp.
Fun quote, but factually incorrect. For example, who was the first user of C++? Bjarne himself. He needed the power of Simula, but at the time Simula implementations didn't scale. He took the ideas he found useful in the language and put some into C, as he was building large scale C/BCPL applications.
To me that's the definition of building a language for yourself. You actually write a language that you need to solve a specific problem you have.
And I find his categorization of good vs bad languages somewhat absurd. What actually makes Ada, C++, and Pascal bad? His lack of understanding? What makes C, Perl, and Lisp good? The inverse?
While I have my own personal preferences with respect to languages, I fully believe a large part of it is familiarity. I've NEVER met anyone who could argue why a particular language was truly bad. They usually are just passionately arguing religion, and I find that tiring. It was cute when I was in high school, but those debates are getting tired.
You have to patronize at least a little bit to write something that is more than a glorified assembly language - otherwise you have no basis to build your other abstractions on.
With C, for example, there's a predefined model for the call stack, based around a fixed set of arguments for each function call and a single return value.
Forth, on the other hand, treats words as simple nested subroutines, keeps data on a global stack, and lets the data carry over from one word to the next. No explicit arguments or return values are necessary.
C can be characterized as "safe," while Forth is "flexible," depending on your use case, but in terms of pure performance both models have strengths and weaknesses.
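A toy sketch of the contrast, written in Ruby rather than real Forth (so treat it as flavour only, not as a faithful Forth): a Forth-style word declares no parameters and returns nothing; it just consumes and leaves values on one shared data stack, while the C-style function pins its interface down in a signature.

    # Toy Forth-flavoured evaluator. Words operate on a single shared data
    # stack: no declared parameters, no return values -- values simply flow
    # from one word to the next.
    class TinyForth
      def initialize
        @stack = []
        @words = {
          "dup" => -> { @stack.push(@stack.last) },
          "*"   => -> { b = @stack.pop; a = @stack.pop; @stack.push(a * b) },
          "."   => -> { puts @stack.pop },
        }
      end

      def run(source)
        source.split.each do |token|
          word = @words[token]
          word ? word.call : @stack.push(Integer(token))   # bare tokens are numbers
        end
      end
    end

    TinyForth.new.run("5 dup * .")   # pushes 5, duplicates, multiplies, prints 25

    # The C-style model pins the interface down in the signature instead:
    # a fixed argument list in, a single value out.
    def square(n)
      n * n
    end
    puts square(5)                   # => 25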