Gixy: Nginx Configuration Static Analyzer (github.com/yandex)
265 points by petercooper on May 11, 2017 | hide | past | favorite | 33 comments


It is unsafe to configure Nginx to use any DNS resolver other than 127.0.0.1 ("resolver x.x.x.x") because the transaction ID can be predictable and Nginx fails to randomize the UDP source port. Gixy should check for that. Strangely, the Nginx developers staunchly refuse to consider this a security flaw. See http://blog.zorinaq.com/nginx-resolver-vulns/
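
For reference, the risky pattern is a single config line like this (the IP is just an example):

    resolver 8.8.8.8;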


This is an excellent writeup (I don't know if it's yours or not). I am frankly baffled that the Nginx team refuses to fix it, since there is a working PoC and it would be almost trivial to fix.


So, what exactly is the best way to mitigate this attack vector? While informative, the post only goes into detail about the attack, not a complete solution for the original intended purpose of the resolver setting.

Is the dnsmasq solution here[1] sufficient? Or should I edit dhclient.conf to add the desired name servers[2]?

[1] https://unix.stackexchange.com/questions/128220/how-do-i-set...

[2] https://unix.stackexchange.com/questions/128220/how-do-i-set...


Using "resolver 127.0.0.1" is an effective mitigation against all issues I documented.
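
A minimal sketch of that mitigation, assuming a local caching resolver (e.g. unbound or dnsmasq) is already listening on 127.0.0.1:53:

    resolver 127.0.0.1 valid=30s;
    resolver_timeout 5s;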


It is more than unsafe. It is plain stupid.

There is no resolver running on 127.0.0.1 ... except on a couple of systems. If I have to make an educated guess, the guy who wrote that is running on Ubuntu.


You don't understand. The recommendation is to INSTALL and CONFIGURE a resolver service accessible via the loopback address. Whether you run a resolver locally or forward/tunnel securely to a remote trusted resolver is irrelevant.


It takes 10 seconds to install unbound.


>It takes 10 seconds to install unbound.

And a lot longer to ensure people know to do this.


Why does an HTTP server need to have a dns resolver? What purpose does it serve?


It's only needed if you configure proxy_pass http://<domain name> ... (and probably other *_pass directives, such as fastcgi_pass)

If you use a specific IP, resolver is not needed.
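
A sketch of the distinction (hostnames are examples): a literal hostname is resolved once at startup via the system resolver, while a hostname held in a variable is resolved at request time and therefore needs the resolver directive:

    # Resolved once at startup; no "resolver" needed:
    proxy_pass http://backend.example.com;

    # Resolved per request; requires a "resolver" directive:
    set $upstream "backend.example.com";
    proxy_pass http://$upstream;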


to reverse proxy to a domain


I haven't tried this yet, but I love the idea of it! ShellCheck [0] is a similar tool for shell scripts. I'd love something similar for common configurations like ssh servers, apache, etc.

For many of these tools there are objectively wrong configurations, where you'd only use certain settings for legacy reasons. But it's not always clear for newcomers.

[0] https://www.shellcheck.net


There are a few solutions for that. I worked on https://github.com/HewlettPackard/reconbf; another popular one is https://github.com/CISOfy/lynis


Woah this is awesome, we just ran this and found that we have this[1] issue in our config. Thanks for this! I'm going to add it to our CI runs.

Edit: the add_header config option is quite confusing and I had absolutely no idea it behaved like that, I highly suggest trying this tool on your configs and seeing what it reports.

1. https://github.com/yandex/gixy/blob/master/docs/en/plugins/a...
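
For anyone else surprised by it, a sketch of the pitfall (the headers here are examples): add_header directives are inherited from the enclosing level only if the current level defines none of its own, so a single add_header in a location silently drops everything set above it:

    server {
        add_header X-Frame-Options DENY;

        location /downloads/ {
            # This replaces ALL inherited headers: responses here
            # get Cache-Control but NOT X-Frame-Options.
            add_header Cache-Control "no-store";
        }
    }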


That explains the bother I was having trying to add CORS headers today. Not the sort of thing one really wants to get wrong, either. I shall definitely be doing this tomorrow.


May I ask what your setup is such that your nginx conf file is part of the CI process? Any reference you could share is welcome.


We use gitlab, here is roughly what our stage looks like:

    Run Gixy:
      image: python:3
      stage: code_check
      cache:
        paths:
          - .pip-cache
      script:
        - pip install gixy --cache-dir=.pip-cache
        - gixy nginx.conf


What's a typical latency increase (purely on localhost) of adding SSL/TLS to nginx?

I've been doing some testing, and I see an increase of about 40-60ms just for the initial connection's server compute. Is that normal? How does one reduce the initial SSL/TLS connection compute time?

My web app responds in 1-2ms. Adding another 40-60ms on top of that for HTTPS wrecks latency.


In addition to what elithrar is saying, take a look at some of the stuff at https://istlsfastyet.com

Things like TLS False Start, TLS Resumption, TCP Fast Open, and more can really help reduce that.

Also, take a look at the book linked at the bottom of that page which includes a pretty comprehensive section on improving RTT and TLS speeds in general. (https://hpbn.co/transport-layer-security-tls/)
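
For nginx specifically, session resumption comes down to a handful of directives; a hedged example (values are illustrative, not tuned recommendations):

    ssl_session_cache shared:SSL:10m;   # server-side session cache
    ssl_session_timeout 1h;
    ssl_session_tickets on;             # RFC 5077 session tickets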


a) what is that as a fraction of the total RTT in "real world" conditions (not localhost)

b) with session tickets configured, do you see this increase consistently for the same client?

c) would your users find it acceptable to have the occasional 50ms hit on a full handshake in order to get authentication & encryption? would they even notice (see 'a')


a) It's a pretty large percentage (~50%) for viewers within my target critical region (east coast) but it becomes less important when accessed worldwide.

b) Session tickets cut it from 60ms down to about 0.5ms of extra delay (when load testing with keepalive HTTPS connections), so this is really only an initial-handshake problem.

c) The localhost full handshake latency problem is really a proxy to the real problem: the CPU load. TLS/SSL is adding a lot of compute requirement to each initial connection. This becomes important as I have to deal with celebrity content, where a single Twitter link can lead to hundreds of thousands of incoming connections within a few minutes.

TLS/SSL handshake compute requirements really need to be sped up somehow.


60ms sounds like too much. How are you measuring this, and what does your TLS config look like?

A quick test with ab (-n 1 -c 1) against a nginx instance shows about 5ms for me, on an Intel E3-1245 V2 @ 3.40GHz. This is with a P-384 key, so it would be even less with P-256 or RSA-2048 (which, IIRC, have fast assembler implementations in openssl).


I'd love to figure out why that's the case, because this happens to me too.


An option might be to not upgrade to SSL until the user logs in or interacts with the app.


Pretty cool to see a tool like this generate so much interest!

I work on NGINX Amplify and we have been playing with config analysis for a little while as a cloud-based reporting tool.

This hit my radar originally when they wrote their blog: https://habrahabr.ru/company/yandex/blog/327590/ (they referenced Amplify)


Wow, it found some issues (add_header_redefinition) right away. I'll be sure to add it to my toolchain (my nginx.conf is generated). Good find!


Seems to choke on my complex map directive: [nginx_parser] ERROR Failed to parse config ... Expected end of text (at char 148), (line:8, col:1)

The line in question: map $cookie_MULTI_SITE_3:$cookie_client_ms:$cookie_counselor_ms $magic_root {

There's a simpler map directive above this one that it doesn't seem to have a problem with.


Does anything like this exist for Apache?


Great project! But it breaks on my nginx config (several vhosts, lots of subdomain-specific stuff).


Hi! Can you file an issue with more details about the problem? I'll try to fix it :)


I'll try to prepare a bug report - I didn't want to publish my whole configuration. Thanks for taking care!


I get this as:

[nginx_parser] WARNING Skip unparseable block: "http"

Which presumably means it's broken there too? :)


Yep, this is probably a Gixy bug :( Can you show me your http section? Maybe better via email: buglloc@yandex.ru :)



