Hacker News | rohern's comments

> "it just seemed strange that seeing a doctor (not even emergency room) cost roughly half as much as an iPad."

That's the banner headline. What a nightmare.


There is a lot of weak-ass criticism going on in this thread when the data -- whatever is troubling about its methodology -- seems to almost perfectly back up the common experience among programmers. Yes, copy-and-paste doubtless affected the numbers for JavaScript, but I am not at all surprised to see JavaScript where it is.

Does anyone here really doubt that you can get more done with a single line of Python than a line of C/Java/C++? Same for Clojure/Common Lisp/Racket versus Python.

We might not take individual ranking too seriously, and none of this affects language choice when performance is a critical concern (though the spacing between Scala, OCaml, and Go is interesting and relevant to this), but do you guys honestly doubt the trend here? Does anyone have a strong counter-example? It seems like the authors may have had a decent notion with using LOC as a measure. There is no proof of this here, but I am intrigued by it.

The final conclusions in favor of CoffeeScript, Clojure, and Python are again pretty obvious. Is anyone going to suggest JavaScript or C++ is more expressive than any of these?
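As a rough illustration of the per-line difference being claimed (my own example, not data from the article): a filter-dedupe-sort that fits in one line of Python typically spans a container declaration, a loop with a conditional, and an explicit sort call in C or Java.

```python
# One line of Python doing filter + dedupe + sort. The equivalent in
# C or Java typically needs a container declaration, a loop with a
# conditional, and an explicit sort call.
text = "the quick brown fox jumps over the lazy dog again"
words = sorted({w for w in text.split() if len(w) > 4})
print(words)  # ['again', 'brown', 'jumps', 'quick']
```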


> There is a lot of weak-ass criticism going on in this thread when the data -- whatever about its methodology is troubling -- seems to almost perfectly back up what is the common experience among programmers.

So?

I mean, really, I can come up with completely bogus metrics all day, and whenever one produces results that happen to align with the conventional wisdom in some domain, post an infographic using it -- but that doesn't make the metric meaningful.

> The final conclusions in favor of CoffeeScript, Clojure, and Python are pretty obvious, I would think.

So? A metric that has no intrinsic validity doesn't become valuable just because it produces conclusions which match what you would have assumed to be true (whether based on valid logic or not) before encountering the metric.


Yes, thank you for that 9th-grade science lesson.

The commenters in this thread are writing off the data because...? They decided the measure is bad? When the measure conforms to experience, it's probably worthwhile to look into it. This doesn't mean that correlation implies causation and yada yada 9th-grade science lesson.


> The commenters in this thread are writing off the data because...? They decided the measure is bad?

Yes, because what the measure actually measures isn't a valid proxy for what it purports to measure.

> When the measure conforms to experience, it's probably worthwhile to look into it.

No. If the adopted proxy (here, "LOC per commit") has some sound rationale for being used as a proxy for the actual quality of interest (here, "expressiveness"), then it is worth getting some results with it on cases where you have a firm expectation of what those results would look like if you were able to directly measure that quality.

If after such testing the proxy -- which you first vetted for reasonableness, and then tested on the "simple" data for which you had a firm expectation of the results -- seems workable, it's worth investigating what kinds of results it returns for things you don't have a firm idea of where they would fall. (Which is the only reason you actually use a proxy measure in the first place.)

In this case, the proxy fails at the first test (sound rationale for using it as a proxy for expressiveness), which makes the second test (do the results line up with what you'd expect on a known sample set) meaningless.


Obviously I and the writer of the article disagree with you that it fails the first case.


> Obviously I and the writer of the article disagree with you that it fails the first case.

That's hard to tell in your case, since most of your commentary has explicitly skipped past the criticism -- that the proxy has no clear link to the thing it was taken as a proxy for -- to say that doesn't matter since the results were about what you would expect, rather than actually addressing the criticism.

So it sounds like you were failing to understand the first test more than you were disagreeing with the criticism based on it. And, as yet, you haven't stated any reason for disagreeing, just continued to skip to the second test.


The supposition is that a more expressive language lets you do more with a single line of code on average than a less expressive language. The second supposition is that commits tend to be done to gather code expressing a single chunk of functionality in a program, so that on the average commits have the same utility in terms of what they contribute to the source project.

It's clear from this, I would think, why therefore length-of-commit is supposed to be a good proxy for measuring expressiveness.

To be clear -- the reason that it is obvious that I and the author disagree with you on the first case is because your objection was a) an elementary one and a consideration important to all such investigations, therefore it would be considered by anyone doing such an investigation or analyzing one and b) we were disagreeing with you anyway.
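For concreteness, the metric being defended could be computed roughly like this. This is my own sketch, assuming `git log --numstat` output; the article's exact method may well differ.

```python
from statistics import median

def median_loc_per_commit(numstat_text):
    """Median lines added per commit, parsed from the output of
    `git log --format=@ --numstat` (each commit marked here by an '@'
    line, followed by 'added<TAB>deleted<TAB>path' rows)."""
    commits, current = [], 0
    for line in numstat_text.splitlines():
        if line.startswith("@"):        # start of a new commit
            if current:
                commits.append(current)
            current = 0
        elif line.strip():
            added = line.split("\t")[0]
            if added.isdigit():         # git prints '-' for binary files
                current += int(added)
    if current:
        commits.append(current)
    return median(commits) if commits else 0

sample = "@\n10\t2\tfoo.py\n3\t0\tbar.py\n@\n5\t1\tbaz.py\n"
print(median_loc_per_commit(sample))  # 9.0 (commits of 13 and 5 added lines)
```

The median rather than the mean keeps one giant vendored-library commit from dominating the number, which matters given the copy-and-paste worry raised above.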


> The supposition is that a more expressive language lets you do more with a single line of code on average than a less expressive language. The second supposition is that commits tend to be done to gather code expressing a single chunk of functionality in a program, so that on the average commits have the same utility in terms of what they contribute to the source project.

There's no really good reason to suspect that the second of these suppositions holds to the same degree across different languages (which basically is equivalent to the assumption that development practices are independent of language.)

> To be clear -- the reason that it is obvious that I and the author disagree with you on the first case is because your objection was a) an elementary one and a consideration important to all such investigations, therefore it would be considered by anyone doing such an investigation or analyzing one and b) we were disagreeing with you anyway.

It would clearly be considered by anyone competent doing such an investigation. But your first post in this thread didn't acknowledge the basis of the common criticism in the thread and challenge its correctness; it explicitly and emphatically stated a lack of understanding of what the complaints were about, so it appeared quite clearly that you didn't get it. The assumption of basic competence may be warranted, if only out of politeness, when someone doesn't explicitly state something inconsistent with that assumption; but when they do, that assumption becomes unwarranted.


If you only validate research via checking if it "agrees with experience", then what's the point of doing it in the first place?


That's not what we're doing here. What this article really is, is an examination based on a set of assumptions about how we can measure expressiveness. This is a difficult thing to measure. You could (I assume) do just as well by polling thousands of programmers and asking which languages, in their experience, are expressive. In the case of this article, the measure of expressiveness used seems to match up very well with a) common programmer experience and b) the intentions of language designers. And we're not talking about programmer experience in 2013: the split between Lisp, C, and Fortran is older than I am.

I do not see anyone offering better measures of expressiveness or suggesting counterexamples to invalidate the results. The criticism here is just "Meh, not impressed".


Vala and C# are two very similar languages that are on polar opposite sides of the chart. Why? If I can't answer that in a convincing way, my first thought is going to be "because there is another factor involved in the rankings that wasn't accounted for."


I don't think anyone is saying it is completely uncorrelated with the real thing. Sure, if you split the chart in half, the languages on the right will mostly really be less expressive, but we knew this without the chart anyway, and the more granular results don't seem trustworthy.


This is exactly the point I am making.


There's enough good data backing up that conclusion that there's no point using crappy data like in the article.

Nobody will argue C is more expressive than Python, but the data in the article doesn't support it. Just because something is true doesn't mean it's okay to support it with shoddy data.

LOC per commit isn't a proxy measurement of the expressiveness of a language. The entire premise of the article is flawed.


The data in the article based on LOC seems to match very closely conclusions based on other data. I do not know that we get to throw out this measure just 'cuz. This is proof of nothing, but no one is offering proof that we should ditch this measurement.


I'd almost bet the author started with the conclusion and went searching for more data to back it up, so it's not a surprise to me that his data backs up his conclusion.

And nobody is offering proof that this measurement is meaningful, so it should be ditched.


I started with a hypothesis that LOC/commit might be an interesting way to compare the productivity of languages, and I frankly had no clue whether it would produce anything useful or interpretable at all. When it did, I figured it was cool enough to write it up, although it's definitely fairly noisy data.


I personally think that the poor methodology of this post would never have survived to see the light of day if the conclusions did not match what programmers expect. Conversely the methodological flaws mean that we should be very careful about accepting the data for any conclusion beyond, "Well, it looks like what I expect."


Fair enough. That is an entirely valid critique. However, I do not think this makes the article worthless. Given the apparent correlation between known expressiveness and the data derived here, it may be that they had a good notion using length-of-commit as a measure for expressiveness.


> it may be that they had a good notion using length-of-commit as a measure for expressiveness.

Replace "expressiveness" with "author's anticipation of reviewer difficulty based on prevailing cultural biases" and you have a different conclusion for how authors size groups of changes that also matches the order graphed.

If you want literal expression-per-line why not just look at compressed_size/line_count for the available body of work in each language?
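That suggestion could be sketched like so (hypothetical, and with the caveat that compressors reward repetition, not semantic density):

```python
import zlib

def compressed_bytes_per_line(source_text):
    """Rough information density: compressed size of the source divided
    by the number of non-blank lines."""
    lines = [l for l in source_text.splitlines() if l.strip()]
    if not lines:
        return 0.0
    return len(zlib.compress(source_text.encode("utf-8"), 9)) / len(lines)

# Identical logic in two styles: the denser one-liner scores higher per line.
dense = "print(sorted({w for w in open('f').read().split()}))\n"
verbose = "words = set()\nfor w in open('f').read().split():\n    words.add(w)\nprint(sorted(words))\n"
print(compressed_bytes_per_line(dense) > compressed_bytes_per_line(verbose))
```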


I think this plot raises a different question: which languages are being abused by the development community?

JavaScript is way too expressive for its given position. I also believe Ruby is more expressive than Python, and yet the plot shows the opposite there as well.

This plot could have some interesting data, but there's far too much noise to really learn much from it.


I agree. I was also surprised by Ruby's position. I suspect the problem is related to the one with JavaScript -- that is, people including code in their commits that they didn't write.


Actually, I think that's exactly the problem. There is a general perception based on anecdotal experience, followed by a non-rigorous 'scientific' data 'experiment', followed by an analysis of results which throws out all the data that disagrees with the original perception. Look at the actual results: they don't really show a strong correlation with 'common experience among developers'.


>>you can get more done with a single line<<

Maybe not if you follow PEP 8 -- maybe so if you write really really long lines ;-)


:D


>Does anyone here really doubt that you can get more done with a single line of Python than a line of C/Java/C++?

I've never understood this criticism before. Consider this line of python:

x = 3

To this line in C:

DoAllTheThings();

A single line of code is a bad comparison because it doesn't say anything about the underlying language or platform.


These are valid lines of code (plus or minus a ;) in both languages.


That is exactly my point.


Well, then your point is to say that comparing two animals on the strength of their legs is ridiculous because both animals have legs. This is just a non sequitur. No one was confused about the fact that you can make a function call in C that does more than a variable assignment in Python.

We know a single line of code can do a lot of things in both languages. The question is what the average line does. And this is important, because the occurrence of bugs is directly correlated with lines written rather than with the complexity of those lines, and the same seems to be true for programmer productivity. So if 100 lines of Python does more than 100 lines of C, this is an important fact, as in the first case more will have been accomplished for the same amount of work and the same debugging effort.


No, my point is that comparing a single line of code is ridiculous. You can only get an understanding of the efficiency of a language when you look at many lines of code. I would think you would need a few thousand lines of code that did something non-trivial before the true picture emerges.

Not even when you come up with an answer will you be able to say something about a single line of code. Meaningful statements can only be made about lots of lines of code.

Except perhaps about vb. Screw that language.


Yeah, sure. That's what an average is: taking the performance of many cases and reducing it to a single unit in order to make comparisons.


This is a terrific idea if it means that people become more open to expanding the size of Congress, an idea which is currently met with objections like 'Oh, but then we would need to build a bigger building'.

The Congress of the United States is absurdly small in relation to our population and absurdly unrepresentative as a result. The British House of Commons, which represents a country of 63 million persons, has 650 seats. The Canadian House of Commons has 308 members (rising to 338 at the next election) representing 33 million persons. The entire US Congress, including the Senate, is only 535 seats, representing over 315 million persons. The House of Representatives has not been expanded since 1911, and the population of the United States has increased by 200 million since then.

Anything that makes this more achievable is worth having. The bonus of giving Congressmen the ability to work from their district in direct communication with the electorate is also worth having.


Just below this post is "Tips for Gamifying Your Mobile App".


Agreed. This a great article and great presentation.


I appreciate the interesting reading. Thanks for the useful comment!


I think the most interesting question raised by this article is, "Why are the smart people graduating from college willing to get shafted by startups?"

Why are schools failing to teach engineers about caring for their own careers? I do not think most business majors would put up with this kind of crap. They are all taught to negotiate for salary and to get high earnings early as a basis for later demands. They also understand how stock options work, and that 0.5% over a four-year vesting cycle with no protection in the event of acquisition is an atrociously bad deal.


Someone else here mentioned joining a startup because it can be a great opportunity to learn. Maybe you can get something out of it other than money that is very valuable. This kind of thinking would appeal to a young kid straight out of college and perhaps not to a 40-year-old with kids who wants to buy a house. Different strokes...


Negotiating for salary is useless for a business major who cannot get a job. You are also vastly overestimating their average intellectual capabilities with respect to options vesting rules.

CS graduates have it very good right now, much, much better than business majors.


The additional advantage of this age group being that they generally have no experience with salary negotiations or normal work-life balance. In fact, if they're coming out of a top CS program, they are probably used to bonkers work-life balance, which is exactly what a startup will demand of them.


> Startups aren't for financial security. They are for the frontiermen who are hoping to strike it rich in the gold rush. Most don't but some do.

I think you mean "Founding a startup is for the frontiermen..." Being an employee is not. Being an employee is a losing proposition far more often than the 95% you cited.


It's a losing proposition, financially speaking. But it sure is a lot more fun and satisfying working at a startup than a large corporation.

Although there aren't tangible or financial benefits for working at a startup, there are definitely benefits that a large company simply can't provide, like experience and independence. For someone in their early 20's straight out of college, those benefits can outweigh the financial benefits of a big company.


> It's a losing proposition, financially speaking. But it sure is a lot more fun and satisfying working at a startup than a large corporation.

Maybe. However, I think most of us know people that love working for Apple, Facebook, Microsoft, Google, et al.

Frankly, I think your independence claim is overstated -- financial support is crucial. Shigeru Miyamoto emphasized this perspective in a 2010 New Yorker profile: “There’s a big difference between the money you receive personally from the company and the money you can use in your job.”

http://www.newyorker.com/reporting/2010/12/20/101220fa_fact_...

There’re many reasons to found a startup, very few to work for one.


> There’re many reasons to found a startup, very few to work for one.

There are good startups out there. I'm almost notorious for startup-bashing because so many of them have awful cultures, but there are decent small/new companies out there. You'll probably have to look outside of VC-istan. VC-istan seems to appeal to the Clueless (see: MacLeod hierarchy) young who will jump at the chance to work "at a startup!" without discrimination.

I know someone who's actually looking at building a startup without equity. He will hold 100% of the stock, at least at first. Variable pay will be profit-sharing. He wants this company to last 20+ years without acquisition, so instead of putting the focus on a future cash-out, he wants people to be rewarded continually for good work.


I think that, as a young engineer, you also have to ask yourself why you want to work at a start-up.

Is it to get rich? In that case you should be working somewhere that can teach you the ropes on your way to launching your own start-up (and even then you might be better served working somewhere you can get some domain expertise to separate you from all the fresh college grads).

Is it to learn a lot and get a lot of responsibility? In that case you might consider small engineering firms that have moved out of the start-up stage, have real revenues coming in, and can offer market salaries and full benefits. You won't get meaningful equity at a place like this, but you won't get that at most start-ups either.

If you want to get in on the ground level of a growing organization, consider companies that aren't going the usual angel/VC route. I worked at an "out of the founder's basement" start-up that had some cost-plus contracts, and so could offer market salaries and benefits immediately. Again, no real equity in a situation like that, but on the flip side you know you'll still have a job in a year.

And don't overlook working at a big faceless corporation just because it's not cool. Big organizations have the resources to actually train you, they have internal tracks for your career progression, they offer pretty good job security, etc.


How's he doing with that offer? Is he at least paying market salary?


He hasn't started it yet. He's bootstrapping. He plans on paying market to slightly above, and being generous with annual bonuses (the profit-sharing).

Wall Street gets a bad rap for its bonuses, and there are some cultural problems with it, but it's a better mechanism for compensation than what VC-istan uses. Also, I think that Wall Street culture is less horrible than VC-istan. On Wall Street, some people get butthurt about their bonuses, but you don't have teams of 15 programmers where every single one is trying to become VP/Eng and get a real slice.


I've read a lot of your writing and agree with some of it.

Bootstrapping is attractive, but it's very hard to do. Also, as misguided as employee equity grants might be, they have become the standard for compensation in SV; a company that doesn't offer some token ownership will definitely find it harder to hire.

I think it really boils down to, if you want to take part in "VC-istan" as a founder/employee, move to SF, if not, go somewhere else, because it's just too hard to hire/hold down an office/live in SF on bootstrapping.


The 2009 post by Mark Suster, "Is it Time for You to Earn or Learn?" comes to mind (I think it was recently reposted to HN).

[1] http://www.bothsidesofthetable.com/2009/11/04/is-it-time-for...


It sounds like being an employee at a stereotypical startup is like being a native bearer in an African expedition movie from the fifties.


Talk about an ad-hoc nutritional doctrine.

Zing!

