Hacker News | krausest's comments

Feel free to post any issues to the GitHub repo and I'll try to help. Building is indeed not trivial, since there are so many languages and tools involved; a full build is currently only supported on Linux (or rather a Dockerized Linux build). With a few exceptions, building on Windows and OSX works. A simple workaround is to delete the folders of the implementations that cause errors.


The startup benchmark tries to capture that, but doesn't simulate network performance. Still, I think it shows some interesting results.


Is your question what "keyed" means (answered in the blog) or what vanillajs is (JavaScript without a framework)?


What is "keyed Vanilla-js"? As I understand the definition of keyed, it's a feature that JS doesn't have.

EDIT: from the blog:

"All modern frameworks have some kind of binding between data and DOM nodes. This binding is especially interesting when data consists not of a single item, but of a list of items. If data changes the question is which DOM nodes should be updated, inserted or removed."

It's unclear because it seems to talk about data binding (which is not part of plain JS), but it's actually talking about an identity association between a DOM node and a data object. I guess it just means there is a one-to-one association between nodes and data points, and when a data point's identity changes, the corresponding DOM node is entirely destroyed and recreated.


That would be great. Just send me a pull request!


Well, I got it completing, and it's taking about 35% longer than the 'original' method. I'd expect index lookups to be faster than property lookups, so there must be some inefficiency in my hasty refactoring.

Or there might be an issue with there being only 5 bodies in the test. I'm a bit puzzled.

I'll send the code later in the day.


I also hoped to see a bigger improvement, but it turned out that JavaScript is already very fast for nbody. Maybe I should have picked a well-known numeric benchmark where JavaScript is still far behind - any suggestions for that? Or are the JavaScript VMs already too good for numeric benchmarks?


You can try increasing `N` in the nbody problem, and also measuring the memory overhead.

You could also try one of the hashing/crypto algorithms in JS. They involve a lot of integer arithmetic, which should make WebAssembly stand out.
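For instance, a minimal integer-heavy microbenchmark in plain JS could look like the sketch below. FNV-1a is my own pick here (the parent doesn't name a specific algorithm); its inner loop is pure 32-bit integer arithmetic, the kind of workload where a WebAssembly port could plausibly stand out.

```javascript
// FNV-1a 32-bit hash: xor each byte in, then a 32-bit multiply via Math.imul.
function fnv1a(bytes) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < bytes.length; i++) {
    hash ^= bytes[i];
    hash = Math.imul(hash, 0x01000193); // FNV prime, truncating 32-bit multiply
  }
  return hash >>> 0; // coerce to unsigned 32-bit
}

// Hash 1 MiB of zero bytes a few times and report the elapsed time.
const data = new Uint8Array(1 << 20);
const t0 = Date.now();
let h = 0;
for (let run = 0; run < 10; run++) h = fnv1a(data);
console.log(`hash=${h.toString(16)}, elapsed=${Date.now() - t0} ms`);
```

Porting the same loop to WebAssembly and comparing throughput would give a reasonably direct integer-arithmetic comparison.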

More tips:

* For performance measurement, set up the benchmark so that a JS run takes at least 30 seconds (increase N, etc.)

* Close all other applications and tabs

* If you're on Linux, set the CPU governor to performance

* Measure the CPU temperature and make sure you let the CPU cool down between runs. Modern CPUs throttle their cores automatically when they reach a certain temperature.
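The first tip can be sketched as a small calibration loop. This is just an illustration, not part of the benchmark: `runBenchmark` is a placeholder for whatever workload you measure, and the proportional scaling assumes the cost is roughly linear in `n`.

```javascript
// Grow the problem size until a single run takes at least `targetMs`,
// so timer resolution and warm-up noise become negligible.
function calibrate(runBenchmark, targetMs = 30000) {
  let n = 1000;
  for (;;) {
    const t0 = Date.now();
    runBenchmark(n);
    const elapsed = Date.now() - t0;
    if (elapsed >= targetMs) return { n, elapsed };
    // Scale n up by at least 2x; overshooting a little beats looping forever.
    n = Math.ceil(n * Math.max(2, targetMs / Math.max(elapsed, 1)));
  }
}
```

Once calibrated, you'd reuse the returned `n` for both the JS and the WebAssembly runs so they do the same amount of work.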


I didn't. I believe it's better to run a few times and take the best run. I paid for the turbo-mode CPU and I'd like to know the performance on my machine. There's a lot going on on my machine, and even more when a browser is running. The only thing (besides running on a clean machine) one can do about it is to measure multiple runs. Three runs are maybe a bit too few for scientific results, but I considered them good enough for a casual benchmark.


http://benchmarksgame.alioth.debian.org/for-programming-lang...

Those charts show 300 runs of Java n-body #3.


I added riot 2.5, but I need help. Can you assist? https://github.com/krausest/js-framework-benchmark/issues/13


You're basically right. That's why we've decided to perform 10 runs for each benchmark and drop the worst 4. This eliminates cases where the GC or a background system process causes a slow run. It also strongly reduces variance, but it turns a blind eye to GC. It would be interesting to include GC, but I have no idea how to do that really well.

The results are comparable (but nowhere near equal) between runs, and the difference between the frameworks is in most cases large enough that the ranking stays consistent (but you wouldn't prefer preact to cycle.js v7 for a 0.01 better average slowdown, would you? For ember, mithril, and cycle.js v6, performance might be something to consider depending on your use case).

If you look at the console output, that is just an approximation; the real measurement is performed on data from Chrome's timeline using Selenium. The console measurement uses a timer to guess when rendering finished, which is of course less accurate than using the timeline.
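The 10-runs-drop-worst-4 scoring described above can be sketched in a few lines (the function name and sample durations here are made up for illustration):

```javascript
// Perform 10 runs, drop the 4 slowest, and average the remaining 6.
// Dropping the slowest runs filters out GC pauses and background-process
// hiccups, at the cost of hiding GC cost entirely.
function scoreRuns(durations, drop = 4) {
  const kept = [...durations]
    .sort((a, b) => a - b)
    .slice(0, durations.length - drop);
  return kept.reduce((sum, d) => sum + d, 0) / kept.length;
}

// The two outlier-ish runs (40, 35, 50) and the next-slowest are discarded.
scoreRuns([12, 11, 13, 40, 12, 11, 35, 12, 13, 50]); // → 71 / 6 ≈ 11.83
```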


I think an equally interesting result would be to keep only the worst 1-4 benchmark results and compare those. In my experience some frameworks put much more long-term memory pressure on the GC, and by discarding those runs, your benchmark turns a blind eye to that.

In other words, I'd be more interested in the worst-case performance of a given framework than in its best (assuming the app is otherwise written in a reasonably good way).

Also, +1 to the sibling post asking for longer runs; that would solve the GC question by including it altogether!


Do the benchmarks take a long time? Why not as many runs as necessary to give you a tight confidence interval?
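One way to make "as many runs as necessary" concrete is to compute a confidence interval for the mean and keep sampling until it is tight enough. The benchmark doesn't do this; the sketch below is just an illustration, using a normal approximation (z = 1.96) for a 95% interval, which is reasonable for 30+ runs.

```javascript
// Mean and approximate 95% confidence interval for a set of run durations.
// Requires at least 2 samples (sample variance divides by n - 1).
function meanWithCI(samples) {
  const n = samples.length;
  const mean = samples.reduce((s, x) => s + x, 0) / n;
  const variance = samples.reduce((s, x) => s + (x - mean) ** 2, 0) / (n - 1);
  const halfWidth = 1.96 * Math.sqrt(variance / n);
  return { mean, ci: [mean - halfWidth, mean + halfWidth] };
}

meanWithCI([11, 12, 11, 13, 12]); // interval narrows as more runs are added
```

A driver loop could then stop once `ci[1] - ci[0]` drops below some fraction of the mean.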


Running all the benchmarks takes a few hours.

I'll keep that in mind for the next round. Let's see if that works nicely. Would you still drop outliers?


Thanks for the explanation! That sounds like as good a methodology as any to me.

