It's a polyglot world for IaC, and even putting both tools side by side in their current state I can imagine other axes of consideration that might rank as highly as the specific characteristics of the language. In our case we adopted TF several years back, and being able to use a different language probably isn't going to be enough benefit to justify migrating all our things. I think in many cases people will adopt the tool that gives them the most confidence that it will do what they need, and deal with whatever language it uses.
I can agree that for basic stuff HCL cuts a little more to the core because you don't have the programming-language boilerplate around it, but to me it reads the same, except that things like conditionals and loops are a lot more natural.
Just being able to DRY your code and write tests is already worth the cost of admission for me.
It can't be DRY and modular with CDKTF though - you can't really split components into Go modules, for example, so you still get a monolithic codebase unless you use Go only as glue code and put all the IaC in Terraform modules. I think Pulumi is better in that regard, and those who don't want to use a programming language can use the new Pulumi YAML.
I do use CDKTF where Terraform falls short: dynamic providers and dynamic resource names. For anything else, modular Terraform code does the job and does it better, and it's more readable and works with the existing Terraform ecosystem of Terraform Cloud, linters, security scanners, drift detectors, etc.
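For reference, an entire Pulumi YAML program is only a few lines; a minimal sketch (project and resource names here are just illustrative):

    name: demo
    runtime: yaml
    resources:
      # hypothetical example resource; any provider/resource type is declared the same way
      site-bucket:
        type: aws:s3:Bucket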
Curious whether anyone else has run into the problem where Nginx refuses to start and takes the entire instance down with it because a single app that it's supposed to reverse proxy doesn't resolve:
nginx: [emerg] host not found in upstream "my-app"
In my eyes, that's a massive pain when you're running something like Docker Swarm with container health checks, where traffic isn't routed to dynamically added internal names until the containers' health checks have passed. This probably isn't specific to just Docker Compose either, but applies to other solutions that handle networking similarly.
The sad thing is that using variables for that proxy_pass URL doesn't work either, when you want to use "proxy_redirect default", without which some applications that you're hosting tend to fail due to weird redirects:
nginx: [emerg] "proxy_redirect default" cannot be used with "proxy_pass" directive with variables
Alas, I'm in the weird spot where I can't use Nginx for certain deployments (say, proxying 10-20 apps on my server while wanting most of them to keep working when one fails) without resorting to something like Kubernetes for managing the ingress, which pretty tight resource limitations rule out (even with stuff like K3s).
In comparison, Caddy is a little bit more usable, but also not quite as battle-tested. Actually, Apache2/httpd might also be a viable alternative, since mod_md adds built-in support for Let's Encrypt (like Caddy, unlike Nginx which needs certbot), if not for the performance.
Then again, the code that I write is the bottleneck more often than the actual web server, except for cases where one might use mod_php instead of PHP-FPM (though if you need to work with PHP and Apache and can afford the little extra work of configuring FPM, that's not too hard to fix either).
The EFF certbot plugin has it covered; assuming a generic Debian server with nginx already configured to host www.domain.com (trivial hello-world setup - nothing fancy), then it's:
All done. Certbot is already running as a systemd service to handle ongoing renewals, and it'll now restart nginx if your cert is updated. This example uses the trivial http-01 ACME method; if you need the more complex DNS-based setup for wildcards, that'll take a bit more elbow grease.
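    # roughly; the standard Debian/Ubuntu recipe, package names may vary by release
    sudo apt install certbot python3-certbot-nginx
    sudo certbot --nginx -d www.domain.com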
FWIW, having the ACME client separate from the server has a bunch of downsides. It's less robust, can't provide OCSP stapling and automatic renewal on revocation, doesn't have issuer fallback, can't offer you On-Demand TLS, etc.
I agree with most points, but OCSP stapling is independent of ACME and thus is perfectly doable with nginx and an externally obtained Let's Encrypt certificate.
That aside, for me the trade-off was different, and I was willing to give up the benefits of built-in ACME support for the benefits of running a very well-supported and well-known web server that at this point hosts most of the internet, and which can run on ports 80/443 without iptables hacks (not sure whether this still applies to Caddy).
What I meant was using OCSP status (from stapling) to trigger reissuance on revocation. I don't think this can be done with nginx and certbot unless nginx makes its OCSP status available for the certbot client to read from, or having an event trigger in nginx somehow to get certbot to run. Either way, it's extra faff that you don't need to worry about with Caddy.
> which can run on port 80/443 without iptables hacks
Not sure what you mean. Do you mean that you need root to bind to those ports? In that case, you can give the process CAP_NET_BIND_SERVICE, which lets it bind to them. Caddy's systemd service does this and runs as a non-root user: https://github.com/caddyserver/dist/blob/2ceb535e076ed9b3083...
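From memory, the relevant bits of that unit file look roughly like this (paraphrased, not copied verbatim):

    [Service]
    User=caddy
    Group=caddy
    # allows binding to 80/443 without running as root
    AmbientCapabilities=CAP_NET_BIND_SERVICE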
Hahaha, I'm a dinosaur, too. I only started considering nginx something other than "that new-fangled crap" a few years ago. Caddy may get there in about a decade.
I don't know. I haven't used Apache HTTP in a while, but Nginx seems to have a lot more foot-guns. I'm planning on giving Caddy a go on my next server upgrade.
Openresty is basically Nginx + Lua. I did some POCs on it back in the day, honestly surprised it's still around. It wasn't bad, just seemed like a product with a very small niche.
If I'm not mistaken, the biggest user of openresty is the Kubernetes community-maintained nginx ingress controller [1].
[1]: https://github.com/kubernetes/ingress-nginx

I took pains to say "community-maintained" because there's also an official nginx ingress controller from F5 (the current corporate owner of nginx).
I never experienced nginx's way of handling certificates as difficult, and I prefer to keep my web server independent from the certificate management system, since I use those certificates for more purposes than just web servers, so I haven't looked into the supposedly superior way Caddy handles this workflow. I handle certificate requests and renewals on a separate system, which then distributes certificates and triggers daemon restarts where and when they're needed, using certbot hooks. All it takes to request and distribute a new certificate is a single command on that certificate management system; the rest follows automagically. Am I missing something by doing things this way?
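For the curious, the hook side of it is roughly this (hosts, paths, and the reloaded service are made up; certbot runs anything executable in its deploy-hooks directory after a successful issuance or renewal):

    #!/bin/sh
    # /etc/letsencrypt/renewal-hooks/deploy/distribute.sh
    # $RENEWED_LINEAGE points at the live dir of the cert that was just issued/renewed
    rsync -a "$RENEWED_LINEAGE/" webhost1:/etc/ssl/"$(basename "$RENEWED_LINEAGE")"/
    ssh webhost1 'systemctl reload nginx'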
Disclaimer: I am one of the authors of the project.
I do wish that NGINX made Let's Encrypt as easy to use as Caddy does. We are all big fans of Let's Encrypt and are quite happy to see NGINX donating to the project.
In this project (MARA), Let's Encrypt support is integrated via [Cert Manager](https://cert-manager.io/) for Kubernetes. This is nice because it supports certs from a variety of issuers like AWS, Google, Vault, Cloudflare, etc., in addition to Let's Encrypt.
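For a sense of what that looks like, a minimal ACME issuer manifest is something like this (names and email are placeholders; this isn't copied from MARA itself):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: ops@example.com              # placeholder
        privateKeySecretRef:
          name: letsencrypt-account-key     # where the ACME account key is stored
        solvers:
          - http01:
              ingress:
                class: nginx                # solve challenges via the nginx ingress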
Source? Any content on memory safety concerns in "C" servers or problems that lead to catastrophic results in production? I'm interested in reading about it.
Uh, google "heartbleed bug"; that's just one example of a massive and catastrophic result of the lack of memory safety in C. It probably cost something on the order of $1B in remediation efforts globally.
Heartbleed was in OpenSSL. I didn't ask about memory safety in C. The Caddy author was pretty accurate with his statement; I asked for proof. Heartbleed is not the only memory-safety bug; there are plenty, given how long C and the software written in it have been around. I'm aware of the shortcomings.
Sorry, I don't understand your response. Most of Caddy's competitors use OpenSSL, and so are vulnerable to bugs in it. A lot of those bugs, like Heartbleed for example, are only possible due to the nature of the C language. Those kinds of bugs, memory-safety bugs, are prevented when using memory-safe languages like Go. These bugs are real and serious, and collectively cost billions of dollars in damages.
There's not a single proof offered except Heartbleed in OpenSSL. I merely asked the author to provide proof, not for you and me to get into Go's memory safety, billions in damages, the inherent nature of C, and so on.
If we were to sum all the "damages" caused by faulty software, we'd arrive at a number that exceeds the total sum of money on planet Earth; let's not use that false metric for this discussion.
Is there an actual problem right now with nginx that Caddy circumvents with its architecture? Yes or no? That's the question.
There are several large corporations using Caddy. Several companies you've heard of and probably used the products of! I have a call with one of them every few weeks.
I also know a huge hospital chain in the US is using Caddy.
Caddy has problems with streaming gRPC (not simple request/response). So does Traefik, to my understanding, but Traefik might work better if reports are to be believed. Nginx has support, I think, but I've not verified it. I like Caddy's simple config when it works.
None of the proxies seem to do well with bidirectional gRPC streams, as they just treat gRPC as an h2 proxy, but I'd love to see that proven wrong.
Caddy is capable of handling bidirectional gRPC streams! I have just tested it, and it works just fine. Caddy will immediately flush writes when the upstream is `h2` or `h2c`[0], instead of having to wait until the read from the socket is complete.
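The config for that is about as small as it gets; a sketch (site and backend addresses are assumptions):

    grpc.example.com {
        # h2c:// tells Caddy to speak cleartext HTTP/2 to the backend,
        # which is what most gRPC servers expect inside a private network
        reverse_proxy h2c://my-grpc-backend:50051
    }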
I recall some licensing fuss with Caddy but it looks like it's actually open source now. But stuff like this does tarnish a brand for a long time, see e.g. people still being uncertain about Qt licensing, not because of the more recent developments, but because of licensing issues that were settled in the 90s.
I used to use Caddy in the v1 days, but after some really unfortunate licensing decisions (which I think were ultimately reverted), I felt burned and stopped using it. It was my go-to for static-site-serving containers for a while, but because some of the time I needed to make what they considered "commercial" ones (work related), I couldn't any longer. And it was handled really poorly in my recollection, with some random sponsor header injection. I decided that nginx on a slim distro container is more than fine most of the time.
Maybe it's just me, but the Caddy config file (either the docs or the format) isn't great. It took me way too long to turn on a simple feature (didn't work, no errors). That plus having to relearn the config going from v1 -> v2 turned me off to it.
Biggest reason is that debian's packaging requirements are way too much of a burden for us to conform to, especially for Go apps. And they update way too slow for us to be comfortable with.
In the context of kubernetes ingress caddy isn't really there yet, at least compared to nginx (which has two major k8s ingress implementations, one maintained by the core k8s devs themselves).
That is true, unfortunately. We, the core maintainers, don't use k8s ourselves, so we need to defer to the community for help. See https://github.com/caddyserver/ingress
Yeah the whole k8s ingress world is a moving target that has changed substantially even in the last year or two. I don't blame you for not spending a ton of time on it. Caddy is still an awesome option to run inside a pod that needs to have a simple static file server, do some reverse proxying to services in the pod, etc.
From my limited and possibly outdated experience (about a year since I last looked), caddy's documentation was lacking a bit when compared to nginx.
Even worse than that, any time I tried to google to better understand how to do something with Caddy, every result explained how to achieve exactly that with nginx, and I would rarely find the way to do it with Caddy.
But it's been about a year since that, unsure if things have changed.
> caddy's documentation was lacking a bit when compared to nginx
Could you be more specific? We keep hearing vague comments like this about our docs, but without feedback pointing to specific issues, we can't improve them.
I suggest that next time you have trouble finding the information you're looking for, reach out to us and let us know. Open a topic on the forums, we'll be glad to help you find what you're looking for. And if you do so and point out a lack in our docs, we'll know where to focus our efforts.
I was at the time trying to set up Jupyter to work with a reverse proxy so that I could access Jupyter through ZeroTier.
Sorry, this is all probably a messy set of words that don't make much sense without more details. I'll commit to this: next time I'm trying this or anything else and I hit a wall, I'll ask.
Sure, but it'll never reach maturity if you don't try it! We're always looking for more people to test things out, give their thoughts, and contribute ideas or fixes.
I think that is the problematic point with Caddy right now - if you are using it, you are actually a beta tester. This is a problem for sites with lots of traffic, because things start to behave differently with servers maxed out - Nginx is battle-hardened, has learned its lessons, and you can be sure that it works under extreme load as expected.
Also you definitely need rate limiting out of the box, not as some beta plugin that needs to be fiddled in manually.
Also streaming is something that needs really good testing with high loads.
K8s is important for many people, too.
Of course it is not easy to find someone willing to test on a high-traffic site when well-established alternatives have existed for many years and no actual problem needs to be solved.
"Easy configuration" is not very important for admins of high-traffic sites. It's not easy to see what the selling point for Caddy could be, but it should deliver one or two special things beyond that.
We may eventually do that, but we need evidence that it's actually useful for people, since it adds to the long-term maintenance burden if we bundle it.
I look through my server logs sometimes and often see connection and request throttling being activated by those who are hitting the servers with their automated exploit tools. I see having some sort of throttling as just as essential as being able to set the max client body size to prevent abuse.
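In nginx terms, the features I'm leaning on are roughly these (zone name and limits are made up):

    # one request-rate bucket per client IP (must live in the http context)
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        # cap upload size to prevent abuse
        client_max_body_size 10m;

        location / {
            limit_req zone=perip burst=20 nodelay;
            proxy_pass http://127.0.0.1:8080;   # hypothetical upstream
        }
    }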
Of course, I could run something else in front of the server, but it's simpler for my use cases not to. The auto-SSL and simplified configuration were very attractive things about Caddy, however.
If your CMS provides it, maybe you could run an exit poll for people who demo Caddy, then look elsewhere, as I'm just speaking from my own experience.
Understood, I'm not saying it's not generally useful; I'm specifically talking about this implementation: is the way it was built useful, does it actually solve users' rate-limiting needs? We need people to try the plugin so we can mould it to users' needs. Then later we can bundle it, once we're sure it's designed properly and we're locked in on the approach taken.
However I wonder if it would work better if the module were brought under the Caddy namespace and marked "beta".
At least then, there would be a show of commitment for the idea, if not the implementation.
Recently I've been looking at the Symfony framework, and this is how they introduce features, with a clear warning that the API etc. is subject to change.
Understood, but we don't want to ship something half-baked in the standard distribution. So we offer it as a plugin first, so it can be worked on and polished without being tied to Caddy's versioning. Please try it out and give specific feedback on the plugin. We need to hear if there's any functionality missing or if the design doesn't fit users' needs before we can bundle it.
Idk, but for most projects that don't go down the Apache Foundation or GNU path, funding via either support (Tidelift, consulting...) or non-OSS extended versions (NGINX has NGINX Plus) is normal. Passion is nice, but money for a sustainable livelihood has got to come from somewhere, and donations are often insufficient.
Seeing as you're someone capable of using a web server, you're presumably more than capable of finding out why Caddy over nginx and vice versa, which brings up the question: why didn't you?
This kind of response is rude and unhelpful. This is how they are finding it out. Just googling "X vs Y" invariably gives you worthless SEO spam these days. A forum like HN, where you can hear about the experiences of other people in the field, is a great way of getting more useful information.
Thank you for the response and the judgement, but we obviously consider the word "rude" to mean something else.
I was not offensive or ill-mannered; I merely wondered why a person who's apparently capable of doing minimal research doesn't do so before asking people on HN for their opinions on software details. Briefly judging GitHub activity is not the way to draw conclusions, and that was the only detail being covered.
It's easy to label someone or something rude; I can't control how you read the text I write or what the imaginary tone of voice is. Great Robin Hood display - I love the fact that we all get to be judge and jury deciding what's good and what's bad, often missing the real intent.
As an IaC tool, can anyone speak of how it fits in the landscape compared to Chef, Puppet, Ansible, SaltStack and, oh, Terraform?
[1] https://www.pulumi.com