Hacker News | kt315_'s comments

While no 503s were observed in the test, there were timeouts. I updated the article with the corresponding graph.


None of the tested web servers returned a 503.


Sleeping is done with setTimeout:

https://gitlab.com/stressgrid/dummies/blob/b342b02407ce09cec...

This isn't a busy wait. Instead, it yields for the specified time period, much like a network request to a backend database would.


We are preparing a new benchmark test for the major platforms. Among other suggestions, it will include memory consumption.


Author here. We tried acceptor counts of 4x and 16x the number of cores without any difference.


What is the advantage over Gun? (https://ninenines.eu/docs/en/gun/1.3/manual/)



I think Gun has a very similar philosophy, e.g. it is connection-oriented and does not impose pooling and the like.


Gun starts a process for each connection. The main idea for Mint was to not do this so that users can choose their own process structure.


With Gun it is also possible to create a specialized process structure, albeit with a more heavyweight process behind each connection. Mint provides a lightweight (possibly more efficient?) abstraction to achieve similar results. Thanks Eric and Andrea!


What would be a good Java web server to test?


Good point, will add.


many thanks!


Author here. Planning to run the same test using the cluster module with one worker per CPU.

What would be the most performant way to serve HTTP in Go?


I'd avoid the cluster module; it's not the recommended way to scale Node.js. It exists mostly as a way to make naive Node.js benchmarks perform better in multi-CPU comparison testing... like yours! :-) Many cloud providers charge per CPU, so Node's mostly-not-threaded approach is reasonable, in my opinion and that of many other users. Node is scaled by spinning up more instances, each instance being one of the cheapest single-CPU variety. I'd be more interested in seeing a version of your benchmark that limited each of the language instances to a single CPU, and/or that spun up enough Node instances to be equivalent to one multi-CPU instance of Go/Elixir. This latter may sound weird, but it's a "cost-equivalent" comparison, which is ultimately what's important: transactions served per $$.


One of the benchmark goals was to test the "scheduling" efficiency of each runtime; in other words, to show how well it scales given a many-core instance, which is often more economical in a transactions-per-$$ sense.

Question: how is using the cluster module different from spinning up multiple Node instances?


FastHTTP is the fastest HTTP server for Go; it's going to crush Elixir by something like 10x. It's really fast, on par with the fastest C++/Rust libraries.



Well, I believe few people use the built-in net/http package in Go. Not sure if your testing would allow for third-party frameworks, but the Iris framework loves to claim it's the fastest Go web server. Gorilla and Gin are also popular.

As for the structure, you would likely have everything split out into goroutines, with a worker pool of goroutines ready to ferry data from the request to the backend and back to the client.


There are so many options to choose from. But don't use Iris.

See https://www.reddit.com/r/golang/comments/57w79c/why_you_real...


Why would you not use the standard library's http package? I would!

