Frameworks Round 6 (techempower.com)
125 points by amarsahinovic on July 2, 2013 | hide | past | favorite | 132 comments



I'm really surprised at the performance of C#... honest no-flame question: how is it possible that Python is beating C#? I've been out of the .NET world for several years, almost a decade, but I always assumed it would be at Java-level performance.

edit add: This is really fantastic work


Some caveats to point out:

* Round 6 does not yet use SQL Server. A contributor has provided SQL Server test implementations, but we don't expect to include that until Round ~7 because it will still require some additional work. It is conceivable that the .NET MySQL driver is especially poor.

* We do not yet have a test that is expressly heavy computation with no database connectivity. I suspect that C# and .NET would perform reasonably well on such a test (isolating it from its database connectivity).

* We do not yet have Windows i7 tests. But performance would likely scale about the same as seen between Linux EC2 and Linux i7.

All that said, it's also possible that the ASP.NET implementations of our tests could benefit from additional tuning. We have had a couple of subject matter experts give contributions, but we'd be happy to receive more.


I was shocked too that ASP.NET MVC performs so poorly in all these tests. C# is (or can be) a very fast language, and both C# and ASP.NET support some awesome features that seriously increase the performance of a web app: the whole async/parallelization story, for example.



I guess you need to change the Hardware filter to "Win" to see it shine.


Indeed, Mono performance isn't great. That's the problem with C#: for web deployment you ultimately need to run it on IIS; under anything else it is, sadly, rather slow.

Also, these tests are a bit unfair, as I think they give an advantage in some areas to frameworks that cache by default rather than per request.

Ultimately, what I want to know is total cost: hardware, licenses/support agreements, development time, and ease of hiring developers.

That isn't to say these aren't interesting, but they are rather apples to oranges. As I posted in the other flamebait thread about MS stack > *, I have set a note in my calendar to help improve the MVC.Net example (https://news.ycombinator.com/item?id=5976540). Hopefully I will be able to contribute once I'm no longer under an exclusive agreement; they suck, I know, but it pays my mortgage!


I missed your later reply in that other thread. If the node.js implementation is caching results from the database, it should be modified to not do so. The original implementation did not, but it has seen several updates from node.js SMEs. Can you point to what makes you suspect that it is caching? The requirements do not allow for caching [1]. Future test types (perhaps in Round 7) will allow for caching, but none of the present test types allow for it.

[1] http://tewebdev.techempower.com/benchmarks/#section=code


Even then it tends to run behind Go and Node.js in most tests. Once Round 7 includes MSSQL we'll see a more accurate picture.


I don't find it a problem to see it running behind Go and Node.js; I believe C# is another beast, so it's a trade-off.


Java still beats it on Windows.


I don't see a framework-to-framework comparison. The .NET equivalent to a servlet is an "HTTP Handler".


Finally! I was disappointed when this round was postponed.

I couldn't wait to see how well my framework (Jester; https://github.com/dom96/jester) performed, not only because it is my framework but also because it is written in Nimrod. I am a bit disappointed by the results, but at least there is a lot of room to improve now. I didn't have much time to properly implement concurrency for Jester, so I opted for just spawning a couple of processes, which is definitely not ideal.

I hope that Round 7 will go better for Jester. In the meantime I hope that you will check out the Nimrod programming language (http://nimrod-code.org) despite the results that my framework achieved.

To the team that carried out these benchmarks, thank you!


Finally indeed! In future rounds, I hope that we can spend less time trying to resolve problems: if there are problems, we'll just post the results and move on to the next round. Better to keep iterating and allow the experts who work with each framework to resolve issues as they come up.

Thanks very much for contributing Jester to the test! I really appreciate your spirit and hope to see it improve in Round 7 as well. I'm looking forward to seeing the next pull request. :)


Indeed, although people may be disappointed if their framework fails and they have to wait another month for another round of results. Then again, I suppose the next round will come a lot quicker if you have fewer issues to fix.

No problem at all. Thank you for merging my process spawning silliness. You will definitely see a lot more improvements from me :)

EDIT: A little issue I just noticed: the Jester "Front-end server" is shown as "None" when in fact it's nginx.


I think they would be less disappointed if it meant a consistent schedule. These benchmarks are really a lot of fun; I'm not sure if they change hearts and minds, but they are beautiful and fun.

I think being time-boxed rather than implementation-boxed makes sense. If you miss a month, you've got 30 days to fix it.

Also, they could update the test rules only quarterly or every six months, to give the authors a bit of time to respond.


Thanks for posting this, amarsahinovic.

This is the latest round of our ongoing project to benchmark the performance of fundamental services provided by modern web application frameworks. With the continued contributions of the developer community, the number of frameworks and platforms included is now 74! Round 6 also adds the often-requested plaintext test with HTTP pipelining (thanks in large part to William Glozer of Wrk fame).

The results web site also now includes a "framework overhead" tab that allows you to compare a framework versus its underlying platform. This was originally conceived by a Hacker News user named goodwink [1].

Like the round before, Round 6 gave us some technical challenges, including a period where we were unable to collect any latency data. Today, some problems persist, such as the Dart test (yes, there is server-side Dart) failing to complete for most of the tests. However, rather than continue to defer Round 6, we've decided it's better to move forward and use the round-over-round progression to iterate, fix, and improve. No problem that exists today is set in stone.

To that end, as always, I would love to hear any feedback, advice, questions, criticism, and especially contributions. And we hope you find this information helpful!

[1] https://news.ycombinator.com/item?id=5454970


Just want to say thanks for doing these tests. I think this is the first time we have such a thorough overview of the basic performance of web frameworks.


One request: I can't help but wonder how JAX-RS / Jersey on Tomcat / Glassfish perform. JAX-RS on Java EE is a serious contender to Spring, and I really want to know how a JAX-RS service (Servlet based) is compared to Spring and to raw Servlets.


Luminus appears to be a clojure framework, not a php framework.

Thanks again for posting these tests.


Oops! My mistake in classifying. Many apologies to fans of Luminus. I've just corrected this.


are you going to include gevent (I.e. bottle/flask + gunicorn with gevent workers, and uwsgi + gevent workers) ?


We would certainly like to include that configuration. If you have time and are willing to contribute, we'd love to receive that as a pull request.


I've followed yogthos' work on Luminus since the beginning.

It doesn't merit mention independently of Compojure. It's just some sane defaults, for pedagogical reasons, for getting started writing web apps on top of Compojure. It neither adds nor removes any runtime or anything else. As it currently exists, it's a Leiningen project template generator. That's about it.

Luminus is isomorphic, for the purposes of a benchmark, with Compojure.

I'm sure yogthos will be flattered when I tell him his web app templating thingy was included though.


Sunglasses! Good to see a familiar name.

Incidentally, yogthos should be aware that Luminus was included: https://github.com/TechEmpower/FrameworkBenchmarks/pull/293


Disturbing results, since I so frequently use Sinatra for hosting REST APIs for customer projects. A rethink is needed.

For my own projects I use Clojure + Compojure and more recently added Node.js (often with Meteor) and was pleased to see that they benchmark well. One thing I especially like about Compojure with the Hiccup HTML generating library is that I see syntax errors immediately in the editor. I suppose that Scalatra with embedded HTML would provide the same benefit.


Maybe benchmark it yourself? I think something is amiss; it should easily do 'hello, world' at twice the rate of Rails. I poked around the setup a little a while back, but nothing jumped out.

Hunches: it's either going through WEBrick or not running with RACK_ENV set to production.



Wonderful, thanks for the continuous updates. I hope you will keep gaining momentum around this initiative; having a solid community-supported set of benchmarks which routinely improves would be a true asset.

By the way, I would like to offer a UX suggestion: it would be nice if you could select one or more frameworks and have them highlighted in the result listings, so as to be able to notice their relative position at a glance.


Thanks for the feedback. The highlight is a good idea and I've created an issue for the suggestion [1]. I'll try to get that added in time for the next round.

[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/35...


Honest question: why are most of the frameworks that I know of in the bottom half? When does speed become relevant?


Three reasons as to the hierarchy:

1) Most popular frameworks you would encounter run on higher level dynamic languages such as PHP, Ruby or Python. These languages are a decent bit slower than performance optimized languages. The languages towards the top are all C, Java, Go, etc.

2) The frameworks towards the top are generally lower level - they do less automatic 'magic' than the more popular frameworks that handle things like validation automatically.

3) In general, it's more likely you will have heard of older frameworks than newer ones. The frameworks towards the top are generally all less than 2 years old (excluding Java servlets which have just always been decent). This means they make use of newer designs such as non-locking and asynchronous connection handling.

When does speed become relevant? It's actually always relevant as soon as your product gets traction. For example, one server at the top of that chart may be able to handle the same load as ten servers towards the bottom. If you're using a framework at the bottom and paying heavily for ten servers, you could increase your startup's runway substantially by swapping. However, swapping may cost more in engineer-hours than the servers.

Basically, the relevance of speed is directly related to your margin. If your margin is high (investment banking maybe?), then speed is almost irrelevant; you can afford a whole building of servers. If you're competing heavily on mass-market web apps, speed is fairly critical, as you can outprice the competition.
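The back-of-the-envelope math behind the "1 server vs. 10 servers" claim is simple enough to sketch. The throughput numbers below are made up for illustration, not taken from the benchmark:

```go
package main

import "fmt"

// serversNeeded computes how many servers a given peak load requires,
// rounding up to a whole server.
func serversNeeded(peakReqPerSec, reqPerSecPerServer int) int {
	return (peakReqPerSec + reqPerSecPerServer - 1) / reqPerSecPerServer
}

func main() {
	peak := 200000 // hypothetical peak requests per second

	// A framework near the top of the chart (illustrative numbers):
	fmt.Println(serversNeeded(peak, 200000)) // 1

	// A framework roughly 10x slower:
	fmt.Println(serversNeeded(peak, 20000)) // 10
}
```

Whether the swap is worth it then comes down to comparing those nine servers against the engineer-hours the migration would cost.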


Great response, RyanZAG.

To add to this, I'd encourage you to look at the source code used to implement the tests in the higher performance frameworks. Ryan's third point is very relevant: you may find that modern frameworks leverage both high-performance platforms and much of the pragmatic thinking that was introduced a few years back in the frameworks with which you are more familiar.

Source code link: https://github.com/TechEmpower/FrameworkBenchmarks/


I would go even further - performance is relevant even if your margin is high, because high baseline performance means you can trade off performance for speed of development and/or business considerations elsewhere. Scaling also doesn't come for free and it's harder, not merely more costly, to manage more servers.


> Scaling also doesn't come for free and it's harder,

That's a very good point. Scaling definitely doesn't come for free, especially when scaling databases.


Thank you for great response.

But to me, all this seems quite biased towards small frameworks/platforms. I will hardly ever serve one JSON response from a full stack framework. If I just made some C server that did those tasks very well, it would be first on this list but very far from anything of value.

Albert Einstein once wrote, “Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”


> But for me all this seems quite biased towards small Frameworks/Platforms.

If one framework does less than another while still fulfilling the requirements of the test, then it ought to outperform that other framework, right? I don't see that as a bias; it's the expected result.

I said "does less" rather than "is smaller" there. I think that's an important distinction. It's theoretically possible for a framework to provide a feature at zero cost to users who don't use that feature (in responding to a given request). Some frameworks don't do that.


There is no evidence that Einstein ever wrote or said that [1].

[1] http://quoteinvestigator.com/2013/04/06/fish-climb/


> Basically, the relevance of speed is directly related to your margin.

True, though the average PHP developer is usually cheaper than the Go or Java one. Paying people is usually more expensive than buying server instances on Amazon.

High-end PHP development is as hard as Java. The problem is that the high-end PHP frameworks are at the bottom of the list...

> The frameworks towards the top are generally lower level - they do less automatic 'magic' than the more popular frameworks that handle things like validation automatically.

Not really; Spring, Struts, etc. are not really low level.

There is a paradigm shift today: there is more and more demand for realtime apps, and these cannot take advantage of HTTP proxying or caching. So the solutions that perform badly at concurrency will not be seen as relevant in the future.


What do you suggest one should keep an eye on to stay relevant in the future?


Thanks for continuing to do these. As far as I know, no one else is doing something like this in such a comprehensive way.


An honest confession from someone inspired by these comparisons. I've been following these benchmarks very closely right from round one, and ever since, I've been waiting to see my favorite framework, Ruby on Rails, perform decently to some extent. I waited till the last round to see some improvements, and while I DID see some marginal improvements, it wasn't as expected.

This then provoked me to do some basic math. Imagine a real-world startup scenario wherein you need your application to deliver API responses (JSON) or even ordinary HTTP responses. If you use a framework from the bottom of the list (like Rails), then you may be able to serve, say, 'X' customers on your site at a time. Now, if you use something from the top of the list, you may be able to serve roughly 20 times (20X) as many customers on a single server. This not only means lower cost; it also means you can worry about scaling and such much, much later than you would have to on a low-performing framework.

For this reason, I decided to conduct an experiment for myself: pick a framework from the top of the benchmarks and experiment with the learning curve and the language. Well, after carefully evaluating, I decided Java wasn't for me (too much of a beast). But Scala seemed to hit the sweet spot, with excellent language features (1 line of Scala = 10 lines of Java on average) along with the trust of the JVM.[1]

So I chose Scala and started experimenting with it. I started with the Play framework and was not fond of certain decisions that were made within the framework. Also, the Play framework is nothing like Ruby on Rails (though it claims to be) and was more self-serve and also quite heavy for my taste. Hence, I went even farther up the list and chose Scalatra. Generally, framework selection is a matter of personal preference, and everyone should choose whatever aligns with their philosophy. For me, it was Scalatra.

Frankly, I haven't been able to ship features at the speed I used to with Rails, but I learned something very important too: don't get comfortable with full stack frameworks without understanding what goes on underneath. Rails suddenly seemed like an enemy to me because it was doing everything so magically that I literally had NO IDEA how, until I had to implement things myself with Scalatra. For example, do you know what AES-GCM is? Do you know HOW the secure cookie store in Rails works? What algorithm it uses?? Do you know you have to change the secret pretty often??? Do you know how authentication in Rails (Devise) works??? Can you build it from scratch???? I knew none of these until I had to implement them myself for my application on top of Scalatra. It was seriously painful, because for the first week I made absolutely no progress on features. But later, I started loving this way of development rather than relying on 'magic' frameworks. Now 'magic' seems scary to me because I cannot actually see what's happening underneath.
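For anyone with the same AES-GCM question: it's authenticated encryption, roughly the primitive an encrypted cookie store is built on. A minimal sketch in Go (my own illustration using the standard library, not Rails' actual implementation, which also does key derivation and secret rotation):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// seal encrypts and authenticates plaintext with AES-GCM. The random
// nonce is prepended to the ciphertext so the receiver can recover it.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Seal appends ciphertext+tag to the nonce: output = nonce || ct.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal, failing if the data was tampered with.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := []byte("0123456789abcdef0123456789abcdef") // 32 bytes = AES-256
	sealed, _ := seal(key, []byte("session data"))
	plain, _ := open(key, sealed)
	fmt.Println(string(plain)) // session data
}
```

The authentication tag is what makes this a sensible cookie format: a client who flips a bit in the cookie gets a decryption error rather than silently corrupted session state.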

So, to cut a long story short: should you choose the fastest framework, or should you follow the 'ship it first' policy with an average framework?

My advice: start with a full stack framework, something like Rails or Django, but in parallel, try to understand how every bit of your framework works and try to implement it on a micro framework based on something from the top of the benchmarks (like Go, Scala, etc.). Most importantly, LEARN something new every day! And slowly shift your development towards these high-performing frameworks as and when you see fit.

This is just based on my experience :)

[1] 1 JVM = 10 Thins, Source: http://www.slideshare.net/pcalcado/from-a-monolithic-ruby-on...


> For example, do you know what AES-GCM is? Do you know HOW the secure cookie store in Rails works? What algorithm it uses?? Do you know you have to change the secret pretty often??? Do you know how authentication in Rails (Devise) works??? Can you build it from scratch????

If I answered yes to literally all of these questions, am I still allowed to use rails?


Allowed and encouraged! It's good both to have standard tools that not everybody needs to reimplement differently (and often poorly) and to understand how those tools work. Everyone should take the time to become at least well acquainted with the libraries they depend on. I suppose this was a good way for your parent to learn that lesson, but it's also a good example of why benchmarks aren't everything for frameworks; their design and community matter a lot.


I had similar ambitions, and I wanted to play with the top performers, so I started looking at the Java (gemini) ecosystem.

And after an hour of surfing I remembered why I didn't follow through the last time I had this idea. The Java ecosystem is complex, I mean really complex.

Here are some of the terms I came across that it looks like I should grasp before trying to make something: servlets (various versions), servlet containers (Resin, Tomcat, Jetty, ...), OSGi, jar, war, wab, JBoss, GlassFish, Java Standard/Enterprise Edition, where classes are imported from, configuration files for different containers, JNDI, classpath...

Are there any resources for Java like http://mirnazim.org/writings/python-ecosystem-introduction/ for Python or http://www.phptherightway.com/ for PHP that would help someone to start navigating all those java-isms?

"Getting started" article for developing web applications with gemini would also be helpful.


Hi gog,

I'm sorry you spent time trying to get started with Gemini, because it's our internal framework that isn't released yet. That would have been frustrating for you!

From our "Motivations" section: http://www.techempower.com/benchmarks/#section=motivation "Why include this Gemini framework I've never heard of?" We have included our in-house Java web framework, Gemini, in our tests. We've done so because it's of interest to us. You can consider it a stand-in for any relatively lightweight minimal-locking Java framework. While we're proud of how it performs among the well-established field, this exercise is not about Gemini. We routinely use other frameworks on client projects and we want this data to inform our recommendations for new projects.


Thank you,

now I understand why I couldn't find anything useful on this page http://www.eclipse.org/gemini/web/ :)

Nevertheless I am still looking for good resources on getting started with Java (ecosystem wise, not the language itself).


Have you considered starting with Grails? It's a much easier jump into the "java-ish" world. The main benefit is you can choose to run a grails application and rely on java jars for when you need performance boosts.


We've been using Groovy internally for a few years for prototyping and experimentation. I thought about slowly introducing Grails as an alternative to Spring for some projects. My experience was that it vaguely reminded me of Rails but needed a lot more polish. I had a huge issue trying to chase down arcane stack traces and found that there is little developer momentum pushing it forward. I usually found myself googling and reading random blogs to figure out simple things I couldn't infer from the docs. Whereas Rails magically makes things work, I found that a lot of the time Grails made things magically not work. I wouldn't personally suggest it for anything customer-facing.


You could use JRuby and Rails if you need the JVM.

Groovy's OK for wraparound testing and/or running Java classes, what it was originally intended for, bringing closures and terser list/map syntax (although JRuby et al. have those too). It's all rather slooooow though. Grails uses Groovy's MOP, which was added later, but I don't use it much. I toyed with Groovy's interceptors and categories when they were added, and even wrote notes on them, but they broke in the upgrade from 1.5 to 1.6. There's a lot of such cruft in Groovy which just lapses from version to version through lack of use and support. They brought in AST transforms to get programmers to extend Groovy's functionality, but when they do, the Groovy managers will swipe the code and pass it off as their own in the next version of Groovy (e.g. Groovy++, written by Alex Tkachman, was cloned as the static compilation in Groovy 2.0 and the static traits in the upcoming Groovy 2.2).

tl;dr: use grOOvy for quickies handling Java classes, and for Grails where it's required, but look at a serious programming language for anything larger.


I would much prefer using JRuby/Rails but we have a bus factor of 1 with Ruby :(.


In previous rounds there were some discussions that you were considering open sourcing Gemini. Is that still on the table or have you decided against it?


It is still on the table, but we haven't made much internal progress on this yet. Hopefully we'll get there before too long.


gog, I totally agree that the documentation for the Java ecosystem is awful. My theory is that the relevant information for today is lost under 18 years of noise.

What I'd like to see: Java should be renamed with every major version [1], in much the same way Ubuntu, OS X, and Android are named.

[1] http://tiamat.tsotech.com/rename-java


It doesn't appear that there's been any real speedup on the Ruby on Rails results for the last several rounds, so... why is that? Does nobody in the community care about speed? Is there no way to speed up Rails? Is it not clear where the bottlenecks are? Are the profiling tools not good enough? I'd sincerely like to know if someone has taken on the task and just been frustrated, or if no one has actually tried.


We have received numerous pull requests from a spectrum of communities, but the Ruby community has been fairly quiet on that front. For example, not a whole lot has changed with the Rails test since March aside from us adding implementations of the newer test types:

https://github.com/TechEmpower/FrameworkBenchmarks/commits/m...

That said, there isn't a whole lot of code that composes the test. Here's the controller:

https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

And here is the Python script that is used to start up nginx and Unicorn for the test:

https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

And JRuby on Resin:

https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

We'd love to get more pull requests here.


I believe the '1 JVM = 10 Thins' remark you link to refers to the number of servers they needed to run, not the amount of code. Also, Thin is a Ruby web server, not Scala or Java.

If the Computer Language Benchmarks Game is any reference, 10 lines of Java ~= 8 lines of Scala: http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

And 10 lines of Scala ~= 6 lines of Ruby : http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

There's a pretty dramatic performance drop noted going from Scala to Ruby though.

(or 10:8:5 Java:Scala:Ruby if you wish)


I've been on the other side of that equation. Building all that stuff by hand leads to incredible waste: bikeshedding, reinventing the wheel, and substandard results.

Do I like Active Record? No, not really. Do I want to fight with the entire development team to force what works best for me onto everyone else? No, not really. If we are productive with AR, I'll accept it.


For such a young language, I'm really surprised by the performance of Go there. Quite awesome.


Round 4 of these tests is what triggered my dive into Go this past month, and it has been a revelation. If you add in the concurrency capabilities, the language's simplicity, and the nice balance between overly terse syntax (Python one-liners...) and overly verbose typed languages (Java), the whole package is even more impressive.

The only real thing I think it is missing is a "high productivity" application framework like Rails to elevate it from "damn, I'm impressed" to "it would be ill advised to use almost anything else".
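To show how little ceremony is involved, here is a minimal Go sketch in the spirit of the benchmarks' JSON test. The handler path and the commented-out port are my own choices, not the official test implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// message mirrors the shape of the benchmarks' JSON test response.
type message struct {
	Message string `json:"message"`
}

// jsonBody builds the response payload; factored out so it can be
// exercised without spinning up a server.
func jsonBody() []byte {
	b, _ := json.Marshal(message{Message: "Hello, World!"})
	return b
}

// jsonHandler serves the payload with the right content type.
func jsonHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.Write(jsonBody())
}

func main() {
	http.HandleFunc("/json", jsonHandler)
	// In a real deployment you would block here serving traffic:
	// log.Fatal(http.ListenAndServe(":8080", nil))
	fmt.Printf("%s\n", jsonBody()) // {"message":"Hello, World!"}
}
```

The standard library alone gets you a production-grade HTTP server; the missing "high productivity" layer the comment describes sits above this level.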

</reluctant fanboy rave>


Have you tried modern ML-style languages? e.g: Haskell, OCaml, F#?

They also combine the nice conciseness of Python with great concurrency (better than Go's) and a lot more safety (fewer late-night crashes).


With respect, and I say this having experimented with Haskell, OCaml, and many other functional languages as well: just, no. Those languages come at a MUCH higher mental cost than Go. Go is wonderful because writing it doesn't feel like I'm attempting a CompSci doctoral thesis. I can also say I have yet to see the Go runtime crash, and I am using it in production.

I also challenge your "better concurrency" claim.


What do you mean by "mental cost" here? The cost of initial learning? Then I agree... but learning a language is an O(1) cost that takes a few weeks to a few months. We program with the language for many years.

I never said the Go runtime will crash, but your programs will, because of unsafe nullability, mutability in the concurrently shared state, and various other problems in Go.

Better concurrency in Haskell: in addition to having lightweight threads like Go and channels like Go, Haskell also has software transactional memory, which Go lacks. Haskell also has `par` annotations on pure expressions that allow parallelization with a guaranteed lack of effect on program semantics. Haskell also has data parallelism.
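For readers following along, the shared baseline both sides refer to, lightweight threads plus channels, looks like this in Go (a toy parallel sum, my own example, not from the benchmark suite):

```go
package main

import "fmt"

// sum splits the slice across worker goroutines and collects partial
// sums over a channel: the lightweight-threads-plus-channels model.
func sum(xs []int, workers int) int {
	results := make(chan int, workers)
	chunk := (len(xs) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo > len(xs) {
			lo = len(xs)
		}
		if hi > len(xs) {
			hi = len(xs)
		}
		go func(part []int) {
			s := 0
			for _, x := range part {
				s += x
			}
			results <- s // send the partial sum back
		}(xs[lo:hi])
	}
	total := 0
	for w := 0; w < workers; w++ {
		total += <-results
	}
	return total
}

func main() {
	xs := make([]int, 100)
	for i := range xs {
		xs[i] = i + 1
	}
	fmt.Println(sum(xs, 4)) // 1+2+...+100 = 5050
}
```

STM and `par` have no direct counterpart in this model; in Go, any shared mutable state beyond the channels would need explicit locking.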


Haskell (and most of the other languages mentioned) is exceptionally clever. This, above all else, is its downfall. For me, especially when tasked with building a high-productivity development team, clever code is a ticking time bomb. It's easy to write but hard to maintain and modify. It requires more mental RAM to analyze any given piece of code, and it is much more difficult for multiple programmers to contribute to. It's tiny, to be sure, but again, I think that is a negative, not a positive. Maximum clarity is not minimal code.

Go, on the other hand, is not clever. It's boring as hell, honestly. This is a Very Good Thing® when it comes to building out a dev team, and I feel, the single biggest reason Google put the resources into creating it.

As for the crashing, of course I see some dangerous areas. Educating developers on avoiding a small region of pitfalls is much easier than managing a team of clever coders, though.


I disagree that Haskell is "clever". I think Haskell is "smart".

Using "Maybe a" when you have a nullable value is smart, not "clever". It aids maintenance and readability rather than hampering it.

Using pattern matching is the same.

Haskell builds on mathematical simplicity, which makes things hard to grasp at first. This may be mistaken for cleverness.

Unless I'm misunderstanding you -- can you give an example of something clever in Haskell?


I'm not a Haskell expert (obviously), but the time it takes me to parse things like this is time I would rather spend reading 5-10x the number of lines and getting the meaning right away.

  let loeb x = fmap ($ loeb x) x in 
  loeb [ (!!5), const 3, liftM2 (+) (!!0) (!!1), (*2) . (!!2), length, const 17]

(btw, I have no idea what the hell this does. Something to do with spreadsheets, apparently. I found it on http://www.haskell.org/haskellwiki/Blow_your_mind, which has enough cleverness to make me want to cry)

I've seen cleaner and more readable code in production Haskell, but this sort of thing happens often enough that I'm very cautious.


I think the first paragraph of the page linked explains why that code is so impenetrable. You will not have to read or write Haskell like that, ever. Good find, though! On a similar note, the Haskell community is amazing. You can learn absolutely everything from the Haskell wiki, freenode IRC, and Hackage. How great is it that EVERY lib/package/framework is documented on Hackage in exactly the same format? Very great. Coming back to JavaScript is a bummer :(


I don't think you'll ever find this kind of code in production.

With years of Haskell experience, I rarely encounter code that is hard for me to read. This is a good counter-example, and is not typical code.

I read 10 lines of Haskell code roughly as fast as I read 10 lines of Python code -- yet the 10 lines of Haskell code can pack much more useful information.

So Haskell is a great tool for more efficient communication between programmers, who can write shorter messages to each other to convey the same information.


I would go so far as to say Haskell is better.

But then you've got to remember "worse is better" (New Jersey style), and Go is worse. New Jersey style: (1) simplicity: be simple in both implementation and interface; (2) correctness: be correct as long as it doesn't make things more complicated (simplicity is more important than correctness); (3) consistency: when you can be consistent, be consistent, but simplicity is more important; and (4) completeness: be as complete as you can, but realize completeness can be sacrificed for any other objective.

I think C and Go share a lot of New Jersey style -- Go is great in simplicity (remember simple isn't easy). I think Go will continue to go quickly because of these attributes.


> I never said the Go runtime will crash, but your programs will, because of unsafe nullability, mutability in the concurrently shared state, and various other problems in Go.

Personally, these problems are not the time sink for me. The problems that I spend the vast majority of my time on tend to fall into two categories.

First, there are design problems. i.e. how do you model your data to be queryable, how do you architect services to make them resilient to machine failure, how do you monitor services, how do you route logs, etc. With these problems, Haskell doesn't help here. Go doesn't help here either.

Second, there are operational problems. i.e. your FS might randomly corrupt some data you stored previously (remember to checksum), your caches might get out of sync (fun problem if you ever go multi-DC), your service has shoddy backoff and DDOS's a failing downstream service, etc. With these problems, Haskell doesn't help here. Go doesn't help here either.

Am I saying that there's no value to the fact that Haskell solves these at compile time? Of course not. Just that the relative amount of time that would save for me is not my deciding factor when picking a programming language.

Another way to think about it is that people have (I include myself in this category) written large-scale projects in dynamic languages which have all the problems you mentioned plus a few more. And yet, the developers at Twitter, or Facebook, or Reddit don't spend the vast majority of their time face-palming at type-errors or NPEs, or ConcurrentModificationExceptions. They have other concerns.


I think people tend to underestimate the amount of time they spend on problems when those problems are uninteresting. A single null dereference error may be trivial to fix, but the overhead around fixing any bug can be costly. For example, you might need to deploy a whole new version, rerun test suites and have a bunch of meetings. Then after all of this, you might remember that the problem only cost you 10 minutes of fixing the bug.

When you implement a red-black-tree, do you not spend any time testing your invariants? Or figuring out the bugs? That's a good example of where the Haskell type system can simply give you compile time guarantees saving you from bugs and from having to test them.

I also worry about the problems you mention, many of which Haskell indeed doesn't help much with. I don't see how these problems (which you solve once, usually by reusing a library) are what costs the majority of the time. A big, non-trivial implementation has bugs and will require testing. Haskell will have fewer bugs, and require fewer tests. This is a pretty big deal.

It's weird that you bring Twitter as an example, as they canned a dynamic language solution for a static language that is in many ways very similar to Haskell.

I've seen multiple large scale projects in dynamic languages. They all fail to scale well, both performance-wise and maintenance-wise. Statically-typed systems scale far better along both of these axes.


> I think people tend to underestimate the amount of time they spend on problems...

Again, the implication is that people who use languages with looser type-systems than Haskell spend lots of time dealing with the problems that you mention. In my experience, that is not the case. You can claim that I'm underestimating the impact of such bugs if you'd like.

> When you implement a red-black-tree, do you not spend any time testing your invariants?

Testing is, as far as I can tell, the reason that these problems don't come up. In the process of solving all of those problems I listed above, you tend to write a bunch of tests that exercise the same code that is run in production.

> It's weird that you bring Twitter as an example, as they canned a dynamic language solution for a static language that is in many ways very similar to Haskell.

I assure you that this transition is far less complete than you might think, and even when complete, will have more Java than Scala. Note that both of these languages allow shared mutable data, both allow null pointers.

> They all fail to scale well, both performance-wise and maintenance-wise. Statically-typed systems scale far better along both of these axes.

Oh, I agree. Go is statically-typed. However, you're advocating going even further along the spectrum, and I'm saying that going further brings diminishing returns, and starts costing you in terms of available engineers, and your productivity in writing code. I think Go occupies a good point along this spectrum, where I can write robust code without arguing with a compiler.


> Testing is, as far as I can tell, the reason that these problems don't come up.

Testing is a cost. It is more code to write, more code to maintain. It gives no guarantees about correctness, even of the exact feature under test.

Consider the 10 lines starting from https://github.com/yairchu/red-black-tree/blob/master/RedBla...

They guarantee the correctness of the invariants of the Red Black Tree, and they easily replace hundreds of lines of test code which give no guarantee.

We might still need to write tests, but a lot fewer of them. Also, those we write will give us far more "bang for buck" because we can use QuickCheck property testing.

> Note that both of these languages allow shared mutable data, both allow null pointers.

Scala shuns null - and only has it for Java interop. All Scala developers I've discussed this with program as if null did not exist, and never use it to signify lack of a value.

> I think Go occupies a good point along this spectrum, where I can write robust code without arguing with a compiler.

When "arguing with a compiler", you're really being faced with bugs now rather than later, when the code is no longer in your head - or worse, in production. If the type checker rejects your program, it is almost certainly broken, and it is better to "argue with a compiler" than to just compile and get a runtime error later.

Availability of engineers is a good point, though as a Haskeller, I know both companies seeking Haskell employees and Haskellers seeking employment (preferably in Haskell).

Consider also the flip-side of engineer availability: the "Python paradox".


> Testing is a cost. It is more code to write, more code to maintain.

I'm not saying you write tests to protect against NPEs. I'm saying that you write tests to ensure correctness of your code, and as a side-effect NPEs are flushed out of your code. This is my theory explaining why NPEs are not a timesink for me.

> They guarantee the correctness of the invariants of the Red Black Tree, and they easily replace hundreds of lines of test code which give no guarantee.

Code with mathematical invariants seems like such a niche area, though. The average type of test that I write is "ensure your service calls service X after first checking for values in a cache; ensure that it can handle cache unavailability, service X unavailability, cache timeouts, and service X timeouts". Maybe you could figure out a way to encode that in a type system, but I'd wager that it wouldn't be as readable as the equivalent written as a test.

> Scala shuns null - and only has it for Java interop. All Scala developers I've discussed this with program as if null did not exist, and never use it to signify lack of a value.

That's exactly right. Languages with nulls, and mutable shared state are perfectly reasonable to use if programmers do the right thing by convention.

> When "arguing with a compiler"...

I misspoke. I was thinking of the learning process, not the process of writing code.

Although my current focus is backend systems, I have worked across the spectrum in the past. I've worked (not including languages I dabble at home) in JS (for browser UIs), Java (for Android UIs and servers), Obj-C (for iOS UIs), Python (for servers and scripts), Scala (for servers), Ruby (for servers), and C++ (for servers). These languages span a wide range of the dynamic-to-static spectrum. I don't find myself writing much safer code when I go from JS to C++. The same for a shift from Ruby to Scala. The same for Python to Java. These shifts are relatively large ones along the type-system spectrum, as they go from dynamically typed languages to statically typed ones. You claim that a significantly smaller shift, removing nullability from a language, would be a big deal for reliability. That doesn't seem likely to me.


> I'm not saying you write tests to protect against NPEs. I'm saying that you write tests to ensure correctness of your code, and as a side-effect NPEs are flushed out of your code. This is my theory explaining why NPEs are not a timesink for me.

You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about the lack of NPEs, you want coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence. With Haskell, I can be reasonably confident about my code with very few tests.

> but I'd wager that it wouldn't be as readable as the equivalent written as a test.

Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.

> That's exactly right. Languages with nulls, and mutable shared state are perfectly reasonable to use if programmers do the right thing by convention

I think Scala users will generally disagree with you. They'd prefer it if null were ruled out in the language itself. That said, the Go convention is to use nulls, not shun them.

> I don't find myself writing much safer code when I go from JS to C++. ... Ruby to Scala ...

Your code is much safer simply by construction, so I am not sure what you mean here.

> You claim that a significantly smaller shift, removing nullability from a language would be a big deal for reliability. That doesn't seem likely to me

Hitting type errors at runtime, null dereference crashes in Java and "NoneType has no attribute `...`" in Python is pretty common IME.

I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.
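The nullability point is easy to see in a sketch (hypothetical `find` function, not from either language's libraries): in Go, "value or nothing" is conventionally a nil pointer, and nothing in the type forces the caller to check before dereferencing, where a sum type like Haskell's `Maybe` would.

```go
package main

import "fmt"

// find returns a pointer to the first even number in xs, or nil when
// there is none. Nothing in the type forces the caller to check.
func find(xs []int) *int {
	for i := range xs {
		if xs[i]%2 == 0 {
			return &xs[i]
		}
	}
	return nil
}

func main() {
	p := find([]int{1, 3, 4})
	if p != nil {
		fmt.Println(*p) // prints 4
	}
	q := find([]int{1, 3, 5})
	fmt.Println(q == nil) // prints true; dereferencing q would panic
}
```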


> You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about lack of NPEs you want to have coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence.

This is not true. In the example I spoke about above, if I take a CacheClient and a ServiceXClient when my type is being constructed, assign them to fields, and then never modify those fields again, then I don't need to exercise every dereference of those fields, just one. And again, I don't test that my code handles NPEs; I test that my code does what it is supposed to, and in the process of doing that, NPEs get flushed out.
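That pattern can be sketched like this (hypothetical `CacheClient` and `Handler` names, just for illustration): the client is assigned once at construction time, so a single test that goes through the field exercises the same dereference that every production call path uses.

```go
package main

import "fmt"

// CacheClient is a hypothetical stand-in for a real cache client.
type CacheClient struct{ addr string }

func (c *CacheClient) Get(key string) string { return "cached:" + key }

// Handler stores its client once at construction and never reassigns
// the field, so a single test that goes through h.cache covers every
// dereference of it: if that test passes, none of them can be nil.
type Handler struct{ cache *CacheClient }

func NewHandler(c *CacheClient) *Handler { return &Handler{cache: c} }

func (h *Handler) Serve(key string) string { return h.cache.Get(key) }

func main() {
	h := NewHandler(&CacheClient{addr: "localhost:11211"})
	fmt.Println(h.Serve("user:1")) // prints cached:user:1
}
```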

> Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.

I think you are viewing this through red-black-tree colored glasses. Specifically, you believe that a lot of code has mathematical constraints the way that example did. To me, this is an extremely remote possibility. I think if you tried to encode even the smallest real-world example of this, say a service implementing a URL shortener, you would run into a wall.

> Your code is much safer simply by construction, so I am not sure what you mean here.

I should have said more reliable.

> I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.

Truly, it baffles me that people still harp on the reliability aspect. It is quite likely that every piece of software you use day-to-day is written in a language with nullability, without pattern matching, and no sum types. Most of that software probably doesn't even have memory-safety (gasp!). Probably every website you visit is in the same sorry state. I'm sorry, but your arguments would be far more convincing if the world written in these languages were a buggy, constantly crashing hell. It's not.


I guess to progress from here we'd need to laboriously compare actual example pieces of code. For example a URL shortener is going to be easier to write safely in Haskell, where I am guaranteed by the type system not to have 404 errors in routes I advertise, or XSS attacks.

Also, in my experience, computer software is buggy, unreliable, crashing, and generally terrible. I think people who view software differently have simply grown so accustomed to the terribleness that they can't see it anymore.

Also, reliability is interchangeable with development speed. That is, you can trade one for the other. So if you start from a higher point, you can trade more of it for speed and still be reliable. In an unreliable language, reliability is typically achieved by spending more time maintaining test code, doing QA, etc. In a reliable language, more resources can be spent on quicker development, and fewer on testing and QA.

When you see a reliable project implemented using unreliable technology, you know it's going to scale poorly and require a lot of testing.


I'm a little confused here. You realize that Go is also statically typed, right? I'm not sure where any debate about dynamic languages started. The points you make about static vs dynamic are valid, just not relevant at the moment.

It's funny, too, how you talk about "implementing a red black tree" like it's an everyday occurrence. I'm guessing you are a teacher/researcher (in which case this entire discussion makes much more sense). On any application team I've worked with (in the valley or out), implementing a binary tree from scratch would require extreme justification and literally have to be the only way possible to solve the problem.


On the dynamic-to-static axis, Go is much closer to the dynamic side than to Haskell's.

I am not a teacher or researcher, I am a practicing programmer writing code that is used by critical systems as well as ambitious projects that will (hopefully) be used by many real people.

A red black tree is just an example with invariants that everyone is likely to know, so it's a nice way to illustrate the point about the power of types. Known problems are solved problems, and unsolved problems are unknown problems -- so either my invariants example will not speak to you because you don't know it, or you will reject it because you can just reuse a library.


This seems like a decent, helpful comment to me. Don't know why it received two vitriolic replies.


Because it's completely factually wrong. Go is not AT ALL dynamically typed.


Go is more dynamically typed than Haskell. That is, it leaves nullability to runtime. It leaves parametric polymorphism to runtime.

Whenever there's a static type that Go cannot express (and there are plenty of those!) it is effectively dynamically typed about that property.
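A concrete case of that, in the Go of this thread (long before generics landed in Go 1.18): a reusable container has to erase element types to `interface{}`, and the element type is then checked only at runtime, via a type assertion. This sketch is illustrative, not from any real library:

```go
package main

import "fmt"

// A "generic" stack in pre-generics Go: elements lose their static type
// going in and must be recovered with a runtime type assertion.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	s.Push("oops") // compiles fine; the mistake surfaces only at runtime
	s.Pop()        // discard the string
	n := s.Pop().(int) // runtime assertion; a wrong type panics here
	fmt.Println(n)     // prints 42
}
```

In Haskell the same container would be parameterized over its element type, and pushing a string onto a stack of ints would be rejected at compile time.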


Right, Go doesn't have sum types for example. (Or at least, it didn't. Does it now?)


Ok, look, I understand that you spent a lot of time learning Haskell, and desperately want that time to not have been in vain. You have to back off the preaching though. The conversation you joined wasn't about Haskell. You barged in and made it about Haskell. On the way, to justify your comments, you have put forth some pretty ridiculous claims. Go being dynamically typed, Haskell being 30-50% quicker to develop in, glossing over the valid points about it being more difficult to learn, etc...

We get it. You don't like Go. Some of us do, and would rather have a productive discussion about how to take advantage of its features and avoid the traps rather than get into a pointless debate about a language that we are highly unlikely to ever use. (after this discussion, I certainly never will)

You are giving the Haskell community a bad image with this kind of behavior, and I kindly request that you not reply to any more of my comments with anything to do with Haskell.


I don't need to "want that time not to have been in vain", I am already developing with Haskell and reaching extremely high productivity levels. What I want is for people to spend less time improving the ecosystems of poorly designed languages that repeat past mistakes.

Go is more dynamically typed than Haskell. It isn't a 0/1 thing. Parametric polymorphism is dynamically typed in Go. Nullability is dynamically typed in Go. These are huge parts of the language.

Instead of engaging in a discussion, you're repeating mistakes in a condescending tone.

Your last part of the comment is only appropriately responded by "You are giving the Go community a bad image by pounding me with ignorance with every reply, I kindly request that you study the matter before replying further".


Ok, you've finally gone and proven that you know basically f-all about go and are just out to thump the Haskell Bible.


I can't remember the last time I implemented a data-structure in a web framework. You just use the built in dictionary/hash table and get on with life.


Well, whenever you write non-trivial code, your code is going to have invariants. These invariants can be partially tested or they can be fully type-checked. The latter is better and cheaper, when available. Haskell makes the latter available far more often, so you don't have to pay for the former. This is not just useful for data structures, but code in general.


This post kind of sums it up for me. No, I'm not thinking about invariants. We're talking about writing a web app here. Typing is pretty meaningless when 99.9% of your data is just strings.


If 99.9% of your data is "just strings", you're probably Doing It Wrong.


Again.... web app.


Example web app: http://www.yesodweb.com/blog/2012/04/yesod-js-todo

How many Strings do you see there?


I think you are vastly under-estimating the learning cost. We're talking about teams of developers, not a hobby project. A few months * multiple programmers adds up to man-years really quick.

Go is much simpler. We deployed our first production (admittedly a fairly minor piece) Go service under a week after we made the decision to start using Go.

PS: State is only shared when you make it so. Go concepts like channels make it very easy to write, _and debug_ clear, decoupled non-trivial parallel code.
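A minimal illustration of that style (hypothetical function names): one goroutine owns the state, and others communicate with it over a channel, so nothing is shared by accident.

```go
package main

import "fmt"

// sum owns its running total; other goroutines never touch it directly,
// they only send values on the channel, so no state is shared.
func sum(in <-chan int, done chan<- int) {
	total := 0
	for v := range in {
		total += v
	}
	done <- total
}

func main() {
	in := make(chan int)
	done := make(chan int)
	go sum(in, done)
	for i := 1; i <= 5; i++ {
		in <- i
	}
	close(in)
	fmt.Println(<-done) // prints 15
}
```

This is the "share memory by communicating" idiom: closing the channel is the only coordination needed, and there are no locks to get wrong.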


If you have to pay the salaries of programmers for the next 3 years to develop some solution, and know that 10% of that time will be spent learning a new technology that will make them 20%-50% more effective for the remaining 90% of the time, would you do it or avoid it, based on the large cost of 10% of these programmers' time?


When I have another solution that will make them 30% more effective with only a 1% learning cost, yes!


I am claiming Haskell will be 20-50% more effective than Go, not than what they're doing now :-)


Extraordinary claims require at least existent evidence.


No way in hell OCaml has "better concurrency than Go". Go is built around the concept. Not that OCaml is not an awesome language though, but that's just an outlandish statement.


You're right, I was thinking more of Haskell in the concurrency part.


Since you are replying to this in the Haskell thread - Go clearly performs FAR better than Haskell on these simple benchmarks.

So as far as I can tell, the pros of using Haskell over Go would be

1) More safety from crashes

while the Cons of Haskell would be

1) Slightly slower than Go at HTTP

2) Far, far slower than Go at database queries

3) Much steeper learning curve

4) Based on the history of ML languages, will never be popular. Go is gaining popularity quickly.

I think recommending Haskell to someone for a real project is bad advice from the above. Recommending Haskell to someone for self enrichment is another story, and I can get behind that one.


1. More safety from bugs in general, not just crashes. Haskell programs will generally be more reliable than Go programs.

2. If you look at the Benchmarks game, you will actually see that Haskell beats Go there.

I think this benchmark is far narrower than the Benchmarks game and is thus less representative.

3. shouldn't really be a serious consideration for a career programmer (weeks to a couple months of learning to be more effective for years to come is a no-brainer).

4. is untrue: Haskell has already gained more popularity than the ML languages ever had, and so has F#.


how about http://robfig.github.io/revel/ ? I just skimmed thru their manual and it seems quite promising


I was just looking at Revel too. But from the homepage:

Development Status: Early adopters only. Pull requests welcome. Development is closing in on the "final" 1.0 design, but the rate of change is still high. Expect to get your hands dirty.

Wouldn't quite appeal for people looking for a Go based Rails yet, but I would have to agree.. it does look very promising.


Revel is pretty great so far (I actually just made my first contribution yesterday!). Some important things it needs though: support for HTTPS, better DB support (some type of ORM, although gorp is sufficient for now), HTTP auth, and better template engine (currently uses the built-in Go "template" package which is meh). Those are the major things that come to my mind.


I disagree with ORM, at least as a "need". Why go to the trouble of migrating to a much higher performance platform to just give most of that performance back?


That's actually very true. I pulled that one off of the GitHub page though. I believe it's something robfig wants (at least optionally) for when it's deemed production-ready.


Check out https://github.com/jasondelponte/gokart - Rails style development with go. "The gokart gem by default combines SASS, Coffee Script, Rake, and Sprockets with Go to provide a great development environment. This environment supports test driven development (TDD) with ruby guard and jasmine. There are no external dependencies other than Go and ruby (which I expect you already have...)"


How are certain languages so much faster even when pulling from the database? Surely the database would be the bottleneck here? Surely if it's something cool the driver is doing, that could be implemented in other drivers?

Also, why are some frameworks on there a bunch of times (e.g. ASP MVC)?


> Surely the database would be the bottleneck here?

The idea that "it doesn't matter which language you use because DB would be your bottleneck" might have been true 10 years ago but it's definitely not true anymore.

Latest versions of MySQL and PostgreSQL are extremely fast. They are written in C/C++ and heavily optimized. MySQL serving data from RAM can easily do 500K+ rps on a commodity server. Even if you need to go to disk, there are $200 SSDs that give you almost 100K IOPS. Put a few of them in a RAID and you're getting hundreds of thousands of IOPS for very little money.

In all likelihood application code written in Ruby/Python/PHP with frameworks piled on top of abstractions is going to be orders of magnitude slower than the DB server written in C that has undergone 15+ years of heavy optimizations.


Ok, but say we use vanilla Python WSGI with psycopg2 (a C-based Python PostgreSQL driver) directly. Will this come close to the performance of the C++ frameworks?


I'd say a vanilla Python application without an ORM would perform similarly to a flask app without an ORM. According to http://www.techempower.com/benchmarks/#section=data-r6&hw=i7...

flask (on Gunicorn): 6,138

flask (on PyPy): 8,167

Django + ORM : 4,026

cpoll-cppsp : 114,711

So while a lightweight Python app without an ORM might be 50% faster than a full-blown Django app, it's still almost 20 times slower than a C++ app.

That shouldn't be surprising, by all accounts Python is at least an order of magnitude slower than C++.

You could speed up your Python program by running PyPy. According to the benchmark that gives it a 30% speed boost, which is in line with my experience. Another way is to deploy on uWSGI or Meinheld instead of Gunicorn.

No matter what you do, Python will not be close to the speed of C++. But what really matters is: is it fast enough for your needs?


i would really like to see how CFML engines (Railo, Adobe ColdFusion) and frameworks (FW/1, CFWheels, ColdBox) stack up in this comparison.

FD: I contribute to CFWheels


We'd love to receive these test cases as pull requests if you have the time to contribute them. :)

https://github.com/TechEmpower/FrameworkBenchmarks


checking out the documentation now. is there a google group or some place I can post questions to if I need help?



I would love to see how the Goliath async server (Ruby version of Node.js is one way of looking at it - http://goliath.io) performs, as I've recently started looking into it for a part of a project.


I was just about to post this. And will the next round be tested on Rails 4.0?


The best way to ensure Goliath is tested is to submit a pull request.

We'd rather test Rails 4 than obsolete versions of Rails: https://github.com/TechEmpower/FrameworkBenchmarks/issues/35...


The play-slick results seem to be missing. Was there some sort of problem?


I had not named it uniquely. I've changed its name to "play-slick." Thanks for pointing that out!


Thanks for updating that. I noticed that the Slick version seems to outperform the regular Anorm version in most tests. Do you have any insight into why that might be? I figured it would've been the other way around.


Can anyone explain why the "gemini" framework with an ORM outperforms raw servlets/queries in almost all the cases?

What kind of optimizations are done ?


Evab, Gemini is our in-house framework [1] that we've included in these tests for our own learning experience. We're not obsessed with performance, but we do try to avoid problems that are easily avoided. :) For instance, where possible, we have used Java's concurrent data structures and reduced the use of locks.

A difference in these tests is that we have our own connection pool. Speed was not a main objective when we wrote it--it's so old that if my memory is correct, it predates the availability of the connection pool that now ships with the MySQL driver, which is what we've used in the Servlet tests. Nevertheless, my conjecture is that perhaps it's just a tad quicker at selecting an idle connection. Incidentally, for a while in an earlier round, the Go guys had an alternate test implementation that used a concurrent queue for connections and it performed fantastically. We've been toying with changing Gemini to do the same at some point.

For the benchmarks project, we classified ORMs as "raw" (no ORM), micro, or full. In Gemini, we have a micro ORM. The following are examples of its use in the test code [2]:

    // Get all rows from Fortunes table.
    final List<Fortune> fortunes = store.list(Fortune.class);

    // Get a random World row.
    worlds[i] = store.get(World.class, random.nextInt(DB_ROWS) + 1);

    // Run a batch of updates from a List of entity objects.
    store.putAll(Arrays.asList(worlds));

"Micro ORM" is loosely defined, but we've applied it to ORM options that offer some abstraction over plain old SQL, but aren't as comprehensive as the standard-bearers. For example, while we have a data structure that represents relationships, we do not have a higher-level query language that traverses relationships automatically nor any similar mechanism for easy relationship traversal. It turns out that across all of these frameworks, there are dozens of what we are calling Micro ORMs.

Getting back to your question, the Gemini ORM does leverage a few tricks for performance such as ReflectASM for object creation and prepared statements for queries. But there isn't a whole lot of fancy work going on.

It's not glamorous, but the most important high-performance elements we make use of in Gemini are the JVM platform and Java's concurrent data structures. As demonstrated by the likes of Undertow (the web server component of JBoss WildFly), mind-boggling performance is available on the JVM.

[1] https://groups.google.com/d/msg/framework-benchmarks/p3PbUTg...

[2] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...


Does anyone have more info on "cpoll-cppsp"? Hearing about it for the first time, it seems to be smoking all the other frameworks.


I was actually just checking it out for myself.

http://sourceforge.net/projects/cpollcppsp/

https://github.com/xaxaxa/workspace/

Seems to be a C++ framework where the dynamic parts are written as C++ code embedded ASP-style into the page.


i don't see any mention of uwsgi or gevent in these benchmarks. Deploying Python apps without properly configuring the container/webserver will skew the results considerably (probably applicable to a couple of other entries as well).


cpoll-cppsp made huge performance gains from round 5 to 6. For example multiple queries went from 1,872 rps in round 5 to 7,252 in round 6. What caused that?

Also what happened to cpoll-cppsp in the plaintext test? Its performance there took a deep dive.


I know that the contributor of those tests made several changes. We weren't able to identify why it fails dramatically on the plaintext test, but I suspect it's a combination of the two unique characteristics of that test: the use of HTTP pipelining and higher client-side concurrency. My guess is that the pipelining in particular is causing problems.


I really hope you guys start doing more in-depth high-concurrency testing. This is a huge risk / problem with many frameworks and kits. It seems to be a black hole in your tests for some reason, which is a shame because they are so well done in other regards.

Having a framework which falls down horribly at high concurrency is something people need to know about -- in how they structure and design applications and how they structure deploys. It can also help them pick frameworks that fit them better (some frameworks will do great at high concurrency, some won't).

Can't wait till you start adding high concurrency (which should NOT require any code changes from the implementation providers; you just add zeroes to the concurrency: 100, 1000, 10000 tests).


Is there any reason why it is so slow on plain text?


Looks like there was a problem in cpoll-cppsp: https://github.com/TechEmpower/FrameworkBenchmarks/pull/364

This will be merged in for Round 7.


Nice. Some solutions are quite fast, but as soon as the database needs to do heavy work, performance decreases dramatically. The real bottleneck seems to be the database in most cases. However, some frameworks are simply quite slow.


What about Cobol on COGS?



