It's worth noting that it's rarely necessary or desirable to put a server like nginx in front of Go HTTP server applications. The Go standard library's `net/http` and `crypto/tls` stacks are production quality and rock solid. Putting something in front is mostly cargo culting from people more used to the worlds of PHP/Python/Ruby/etc.
Pushing back on this a bit... for example, securely exposing a JSON endpoint to the public internet requires extra machinery that applications like nginx bring for free. If you simply wire your handler into the router, you accept arbitrarily large request bodies, leaving you wide open to DoS attacks. You have to either manually add limits or pull in some library. nginx caps these by default. Want throttling or load balancing? Again, things that haproxy and nginx do well, but that require more cruft in your application.
I would argue that all of this is part of security-aware software engineering. If you aren't thinking about these things, you have no business writing publicly exposed HTTP applications.
Secure software isn't useful? Insecure software isn't eventually value-destroying?
Really, what this sub-thread is arguing is that security Isn't My Job(TM) as an application developer. I disagree. Furthermore, telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
Not accepting unbound input and sane rate-limiting are kind of basic stuff, no? I'm not saying every app developer needs to be a Defcon wizard, just that they should have some fundamental awareness of secure coding standards for web apps if that's what they're building.
> Insecure software isn't eventually value-destroying?
Nowhere in this sub-thread is anyone suggesting otherwise.
> Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
Nobody said this. But while we're on the topic, the more likely false security blanket comes from telling app devs "just use 'net/http' and 'crypto/tls' and everything will be fine without a reverse proxy."
In any case the straw men you've raised are distracting and not driving the conversation forward.
> > Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
> Nobody said this.
That seems dishonest to say... From the grandparent:
> Or... you spend your time building something useful, leveraging skills you do have, and let nginx leverage its own strengths.
Really sounds like at least one person in this thread is advocating for app devs not to worry about things that nginx takes care of.
Agree that making straw men doesn't help. There's advice on either side regarding which one to use, and realistically both are equally 'false security blankets'. The correct answer is to educate yourself on the benefits and drawbacks of each and make a conscious decision about where to implement your security.
What if I have an application that needs to be deployed internally and externally as separate instances? Identical application, but different security contexts. Using Nginx to handle these concerns is easy.
It's a common myth that internal networks are a more secure environment. You are better off implementing the philosophy behind something like Google's BeyondCorp¹ effort.
I find it useful for filtering and caching. Things like redirecting traffic to /.well-known/acme-challenge/ to your certificate management host, providing an endpoint for machine status or filtering requests to dot-files. Or telling Nginx to cache responses and allow it to serve stale content when the backend server returns 4xx/5xx status codes during deploys or high load. Handling things like client certificate authentication in Nginx instead of doing it in every backend application is another thing I've found useful.
It's useful to put Varnish in front of the app server for caching, and to serve static content from a separate process (and domain) running a light/tiny httpd server instead of Apache/Nginx.
I don't use Go, but D (dlang), vibe.d, varnish and lighttpd are working real well for my latest venture.
It could be that nginx is more efficient at static file serving, but that'd be down to being specifically designed and optimised for it rather than some "sync vs async" thing.
Minor quibble: in the context of serving static files (i.e. from disk), Go doesn't use async I/O; the file I/O blocks the thread until it's complete. But since Go's scheduler is M:N, this doesn't lock up the whole program, so your point stands.
Err, no, this is a misconception. All IO in Go is async - there is no sync IO in Go (as sync IO would block an entire OS thread). There is an internal registry mapping blocked file descriptors to goroutines - when a kernel IO function returns EAGAIN, the goroutine throws the file descriptor + goroutine info onto the registry and yields back to the scheduler. The scheduler occasionally checks all descriptors on the registry to mark goroutines that were waiting on IO as being alive. The scheduler is, therefore, essentially a multithreaded variation on a standard "event loop" - the only difference is that "callbacks" (continuations of a goroutine) can be run on any of M threads rather than just one.
From a Go programmer's perspective, this looks like "blocking a thread", but because goroutines are relatively lightweight in comparison to actual threads, it behaves similarly resource-wise to callback-based async IO. (Although yes, nginx is likely optimised so that it throws out data earlier than Go can free stack space and so can save some memory. Exactly how much is up to benchmarking to find out.)
Basically, the only differences between Go and e.g. a libev-based application as far as IO is concerned is a different syntax - the event loop is still there, just hidden from the programmer's point of view.
Note that this doesn't mean you shouldn't put nginx in front of Go to serve static files - nginx is likely more optimised for the job than Go's file server, might handle client bugs a little better, is more easily configurable (e.g. you can enable a lightweight file cache in just a few settings), you don't have to mess around with capabilities to get your application listening on port 80 as a non-root user, and so on and so forth.
I'm referring specifically to disk IO, which on Linux using standard read(2) and write(2) is (almost) always blocking. What you describe is true of socket fds and some other things, but on most systems a file read/write that goes to a real disk will never return EAGAIN.
This is why systems like aio[1] exist, though afaik most systems tend to solve this with a thread pool rather than aio, which can be very complicated to use properly.
Ah, absolutely, I forgot that the state of disk IO on Linux is terrible - although this still isn't quite the case, since there's a network socket involved in copying from disk to socket, so if the socket's buffer becomes full the scheduler will run.
It seems that nginx can use thread pools to offload disk IO, although it doesn't unless configured to - by default disk IO will block the worker process. And FreeBSD seems to have a slightly better AIO system it can use, too.
I love Warp for Haskell, but I would still be hesitant to expose it directly. It's simply not used as much as Nginx or Apache. Fewer people have spent time trying to break it.
Perhaps it's rarely necessary, but it is often desirable. For instance if you are serving any static content along with your application, nginx is quite handy and is probably better at compressing and caching.
Your choice is to force a timeout and kill streaming requests but defend against slow-client DoS, or to support streaming requests and suffer from a trivial slow-client DoS.
For this and other reasons, I still recommend fronting Go with something more capable on this front.