It is unsafe to configure Nginx to use any DNS resolver other than 127.0.0.1 ("resolver x.x.x.x") because the transaction ID can be predictable and Nginx fails to randomize the UDP source port. Gixy should check for that. Strangely, the Nginx developers staunchly refuse to consider this a security flaw. See http://blog.zorinaq.com/nginx-resolver-vulns/
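For anyone unfamiliar with the directive, the pattern in question is just this (the addresses are placeholders, not recommendations):

    # Risky: nginx does its own DNS lookups over the network, with a guessable
    # transaction ID and no source-port randomization, so off-path spoofing
    # of the responses is practical.
    resolver 203.0.113.53;

    # The mitigation discussed below: only ever query a resolver bound to loopback.
    resolver 127.0.0.1;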
This is an excellent writeup (I don't know if it's yours or not). I am frankly baffled that the Nginx team refuses to fix it, since there is a working PoC and it would be almost trivial to fix.
So, what exactly is the best way to mitigate this attack vector? The post is informative, but it only goes into detail about the attack; it doesn't offer a complete solution for the original intended purpose of the resolver setting.
Is the dnsmasq solution here[1] sufficient? Or should I edit dhclient.conf to add the desired name servers[2]?
There is no resolver running on 127.0.0.1 ... except on a couple of systems. If I had to make an educated guess, the guy who wrote that is running on Ubuntu.
You don't understand. The recommendation is to INSTALL and CONFIGURE a resolver service accessible via the loopback address. Whether you run a resolver locally or forward/tunnel securely to a remote trusted resolver is irrelevant.
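For the dnsmasq variant asked about above, the setup is roughly this (standard dnsmasq options; the upstream address is a placeholder for whatever resolver you actually trust):

    # /etc/dnsmasq.conf
    listen-address=127.0.0.1    # answer only on loopback
    no-resolv                   # ignore /etc/resolv.conf
    server=192.0.2.53           # placeholder: your trusted upstream resolver

and on the nginx side:

    resolver 127.0.0.1;
    resolver_timeout 5s;

Running unbound (or any other recursive resolver) on loopback works just as well; the point is only that nginx's own weak stub client never talks to the network directly.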
I haven't tried this yet, but I love the idea of it! ShellCheck [0] is a similar tool for shell scripts. I'd love something similar for common configurations like ssh servers, apache, etc.
For many of these tools there are objectively wrong configurations, where you'd only use certain settings for legacy reasons. But it's not always clear to newcomers.
Woah, this is awesome. We just ran this and found that we have this[1] issue in our config. Thanks for this! I'm going to add it to our CI runs.
Edit: the add_header config option is quite confusing and I had absolutely no idea it behaved like that. I highly suggest trying this tool on your configs and seeing what it reports.
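For anyone who hasn't run into it: add_header directives are inherited from the enclosing level only if the current level defines none of its own, so a single add_header inside a location silently drops everything set above it. Roughly (example.com is a placeholder):

    server {
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;

        location /api/ {
            # Because this block defines its own add_header, the two
            # server-level headers above are NOT sent for /api/ responses.
            add_header Access-Control-Allow-Origin "https://example.com";
        }
    }

The usual fix is to repeat the parent headers in the child block (or pull them in via an include), which is exactly the kind of thing Gixy flags.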
That explains the bother I was having trying to add CORS headers today. Not the sort of thing one really wants to get wrong, either. I shall definitely be doing this tomorrow.
What's the typical latency increase (purely on localhost) from adding SSL/TLS to nginx?
I've been doing some testing, and I see an increase of about 40-60ms, just for initial connection server compute. Is that normal? How does one reduce the initial SSL/TLS connection compute time?
My web app responds in 1-2ms. Adding another 40-60ms on top of that for https wrecks latency.
In addition to what elithrar is saying, take a look at some of the stuff at https://istlsfastyet.com
Things like TLS False Start, TLS Resumption, TCP Fast Open, and more can really help reduce that pretty well.
Also, take a look at the book linked at the bottom of that page which includes a pretty comprehensive section on improving RTT and TLS speeds in general.
(https://hpbn.co/transport-layer-security-tls/)
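On the nginx side specifically, the resumption-related knobs look roughly like this (values are illustrative, not recommendations):

    ssl_session_cache   shared:SSL:10m;  # server-side session cache shared across workers
    ssl_session_timeout 1h;
    ssl_session_tickets on;              # stateless resumption for returning clients
    ssl_stapling        on;              # OCSP stapling saves clients a separate OCSP round trip
    ssl_stapling_verify on;
    keepalive_timeout   65s;             # amortize one handshake over many requests

False Start and TCP Fast Open are mostly negotiated by the client and OS, but the server's cipher/protocol configuration has to be compatible (forward-secrecy ciphers, ALPN) for False Start to kick in.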
a) what is that as a fraction of the total RTT in "real world" conditions (not localhost)?
b) with session tickets configured, do you see this increase consistently for the same client?
c) would your users find it acceptable to have the occasional 50ms hit on a full handshake in order to get authentication & encryption? would they even notice (see 'a')?
a) It's a pretty large fraction (~50%) for viewers within my target critical region (east coast), but it matters less for traffic from the rest of the world.
b) Session tickets cuts it from 60ms down to about .5ms extra delay (when loadtesting with a keepalive https connections) so this is really only an initial handshake problem.
c) The localhost full-handshake latency problem is really a proxy for the real problem: the CPU load. TLS/SSL adds a lot of compute to each initial connection. This becomes important as I have to deal with celebrity content, where a single Twitter link can lead to hundreds of thousands of incoming connections within a few minutes.
TLS/SSL handshake compute requirements really need to be sped up somehow...
60ms sounds like too much. How are you measuring this, and what does your TLS config look like?
A quick test with ab (-n 1 -c 1) against an nginx instance shows about 5ms for me, on an Intel E3-1245 V2 @ 3.40GHz. This is with a P-384 key, so it would be even less with P-256 or RSA-2048 (which, IIRC, have fast assembler implementations in openssl).
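For reference, the handshake cost is dominated by the certificate's key type and the key-exchange curve, both of which are set in the server config; something along these lines (certificate paths are placeholders) keeps nginx on the faster code paths mentioned above:

    ssl_certificate     /etc/nginx/tls/example.crt;  # placeholder; an ECDSA P-256 or RSA-2048 cert
    ssl_certificate_key /etc/nginx/tls/example.key;
    ssl_ecdh_curve      prime256v1;                  # P-256 ECDHE instead of the slower P-384

openssl speed on the box will show the relative signing/key-agreement costs on your own hardware.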