Not really. If someone logs in as user A on the machine and Caddy runs as user B, then unless A has sudo access, A cannot modify Caddy. But with this admin HTTP endpoint, user A can now arbitrarily modify Caddy.
That's true, but I think if your production web server runs on a system where you expect other users to log in and do things, relying on Unix permissions to keep them from interfering with the production server, then your whole architecture and process is deeply broken, far beyond what any Caddy design decision can address.
Most people would expect `sudo` and `curl localhost:2019` to be very different levels of permission. Yet all it takes is a `curl` with the payload `-d '{"admin":{"remote":{"listen":"0.0.0.0:2019"}}}'`, and you'd only have to convince an existing process to make the request.
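Spelled out, that request is something like the following sketch. It targets Caddy's `/load` endpoint, which replaces the running config; I haven't verified what this exact payload ends up doing, so treat it as illustrative:

```sh
# Illustrative only: any local process that can reach the default admin
# listener (no sudo, no authentication) can POST a new config to it.
curl localhost:2019/load \
  -H "Content-Type: application/json" \
  -d '{"admin":{"remote":{"listen":"0.0.0.0:2019"}}}'
```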
SSRF in an application is a serious issue to have on its own, that's true, but in combination with a Caddy admin endpoint it can be used to give an attacker full access to your local network.
You could have a blind SSRF vulnerability in an application, and while that's not great, on its own it is difficult for an attacker to exploit successfully.
If the attacker knows or guesses that you're hosting Caddy on the same machine, they know you most likely have an admin interface on localhost:2019. They can use it to make further local network requests, and it also gives them a way to read the results of the requests they were making through the blind SSRF vulnerability hypothesised above.
> Basically, if you're not using it (and you shouldn't be using such functionality on a production machine), then you don't need it and should disable it
Actually, almost everyone wants zero-downtime config reloads, and the API is necessary to perform them.
As others have said, you can use a unix socket for the admin endpoint instead. And see https://news.ycombinator.com/item?id=37482096: we plan to make that the default in certain distributions.
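Roughly, both of those look like this. Paths are illustrative, and the exact environment variable / Caddyfile option should be double-checked against the docs:

```sh
# Zero-downtime reload: `caddy reload` adapts the config and pushes it to the
# running instance through the admin endpoint rather than restarting it.
caddy reload --config /etc/caddy/Caddyfile

# Putting the admin endpoint on a unix socket instead of the default TCP
# listener; the same address can also be set with the `admin` global option
# in the Caddyfile.
CADDY_ADMIN=unix//run/caddy/admin.sock caddy run --config /etc/caddy/Caddyfile
```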
Of course it isn't. It could reload the config from the same path it loaded the config from in the first place. Like practically all other software has done for decades.
The source of a config doesn't necessarily have to be a file; config loading is abstracted. A reload therefore needs input, and signals provide no way to pass arguments, so that approach isn't workable. See https://github.com/caddyserver/caddy/issues/3967
This sounds like a design decision you've made, not an inherent limitation. You can read config from files, like practically all other software has done for decades.
I get the argument about config reloads; I don't think it's worth the security risks of the current default, but I get it.
Happy to hear you're moving to sockets by default on *nix!
However, I'd like to point out that the default should live in the binary, not in the distros' default environment variables. Otherwise it won't reach people who build their own binary, and depending on how you start your Caddy server you may clear the environment variables for that process and end up with the insecure HTTP-based admin endpoint enabled by accident.
The only default that works on all platforms is a TCP socket. We can't write a unix socket file by default because the path to the socket needs to be writable, and there's no single default path that is guaranteed to work. So it needs to be dictated by config in some way or another. It's better for it to actually work by default than to possibly not work because of a bad default.
So detect the OS and choose the more secure default where possible? I know it's less elegant, but having a much more secure model is worth some sacrifices.
Then you throw an error in the log; you have to leave something for the admin to do to set their system up correctly. It's better for Caddy to fail to enable the admin endpoint than to enable it in an insecure manner.
You're overestimating the users; a large percentage of them would not understand how to resolve that on their own and would complain to us that they can't start Caddy without errors. And I fundamentally disagree that the TCP socket is so insecure that it must never be used as a default; it's only insecure if your server is otherwise compromised. It's a sufficient default for 99.99% of users.
Said large percentage of users will be installing through a package manager anyway, where you can make sure that Caddy has a socket path that the user it runs as can write to.
If you're correct that I'm overestimating users, then what are you guys doing? You're expecting users to know how to secure their Caddy configuration, when in reality most users probably have no idea that this API even exists; they'll put their config in a Caddyfile, start the server, and be done with it.
We should expect that they don't know anything about the risks of leaving an unauthenticated HTTP API on localhost, and instead ship a default that doesn't place their system and network at unnecessary risk.
> Said large percentage of users will be installing through a package manager anyway
Exactly, which is why the environment variable approach is perfectly fine. The env var will be set in the systemd config.
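Something along these lines in the packaged unit or a drop-in, for instance (the unit name, variable, and socket path here are illustrative rather than any distro's actual packaging):

```sh
# Sketch: pin the admin endpoint to a unix socket for the packaged systemd
# service via an environment variable, then restart the service.
sudo mkdir -p /etc/systemd/system/caddy.service.d
sudo tee /etc/systemd/system/caddy.service.d/admin-socket.conf <<'EOF'
[Service]
Environment=CADDY_ADMIN=unix//run/caddy/admin.sock
EOF
sudo systemctl daemon-reload
sudo systemctl restart caddy
```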
> You're expecting users to know how to secure their systems
Again, our view is that the TCP socket for admin is secure enough for 99.99% of users, and has been for over 3 years since Caddy v2 was released. We've still not seen any evidence of a practical exploit in the wild.
You should disable it if you don't need it, or at least move it behind authentication if you do need it.
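For reference, disabling it is one global option in the Caddyfile. A rough sketch (the site block is just a placeholder, and with the admin API off, config changes need a process restart instead of an API reload):

```sh
# Sketch: write a Caddyfile with the admin API disabled entirely.
cat <<'EOF' > /etc/caddy/Caddyfile
{
	admin off
}

example.com {
	respond "Hello"
}
EOF
```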
Security follows the Swiss cheese model: each individual measure has known limitations, but by layering them you reduce the overall number of attack vectors.
Getting the server to make arbitrary HTTP requests is bad, yes, but limiting what the attacker can do with that makes it less dangerous if you somehow screw that one thing up.
I'd imagine that if someone already has local access to the server, it's too late anyway.