There should be no need to do this if you have a properly configured public/private key auth setup and disable password based login. And of course keep up to date on openssh patches and security advisories. I worry that something like this will provide a false sense of security for people who might ignore other more common-sense, fundamental precautions first.
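For anyone who hasn't set that up yet, a minimal sshd_config sketch of what I mean (these are standard OpenSSH options, but option names and defaults vary a bit between versions and distros, so treat it as a starting point rather than a drop-in config):

    # /etc/ssh/sshd_config (fragment)
    PubkeyAuthentication yes
    PasswordAuthentication no
    ChallengeResponseAuthentication no   # called KbdInteractiveAuthentication on newer OpenSSH
    PermitRootLogin prohibit-password    # or "no" if you never need direct root logins
    # then: systemctl reload sshd   (the unit is called "ssh" on Debian/Ubuntu)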
Before doing something like this I would worry a lot more about client endpoint security (exactly to what level do you fully trust all the people and workstations/laptops that are authorized to ssh to this thing?), as an overall more likely threat.
There are also lots of less esoteric ways to not have a system listen on any publicly accessible IP address whatsoever. If it's really something critical you should be looking at a combination of making it a purely intranet-only service, listening on an IP in an internal network block that isn't accessible from global routing tables at all. Or one that is completely firewalled off from the world, and only accessible once you've authenticated yourself to your VPN. Or only reachable once you first authenticate (public/private keys, two-factor crypto key auth, etc.) to a bastion host, and then reach the system from the bastion.
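The bastion pattern in particular costs almost nothing on the client side. A rough sketch of an ~/.ssh/config for it (host names and addresses here are made up, and ProxyJump needs a reasonably recent OpenSSH client):

    # ~/.ssh/config
    Host bastion
        HostName bastion.example.com
        User alice
        IdentityFile ~/.ssh/id_ed25519

    Host internal-box
        HostName 10.0.12.34        # not reachable from the public internet
        User alice
        ProxyJump bastion          # hop through the bastion first

After that, "ssh internal-box" transparently authenticates to the bastion and then on to the internal machine.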
You're against this because it gives a false sense of security and might make people relax other measures, but then you suggest going with an intranet, which is widely known to create a culture of "if it's in the intranet it's safe" and which is very detrimental to security as well. I just thought that was curious.
I am 100% in agreement with you on that point. The most common and risky thing about building an intranet-type environment is that it can breed complacency. What is needed is a belt-and-suspenders approach: harden the daemons and security on the individual servers and things that are within the intranet, and also put security measures in place so that only authorized endpoint clients can get into the intranet. Essentially one needs to treat the individual servers and things that are in the private IP space as if they were still facing the public internet, even if they are not.
What you absolutely never want to do is create an environment that is, metaphorically, like an uncooked egg: once you get through the hard outer shell, everything inside is soft and squishy.
There's a principle in security called "defense in depth". Your servers shouldn't be SSHable from the public Internet, but even if that's bypassed somehow, there should still be other layers of security. Each layer of protection adds security.
There's also a principle in security that's called simplicity.
"Defense in depth" is often quoted when people want to add further complexity to a system. There are cases where adding a security mechanism that adds complexity has a benefit that is so large that it's justified (e.g. adding TLS or ASLR). But it always needs to be balanced, because complexity adds attack surface.
The system linked here seems to add a whole lot of complexity, and only a very weak case can be made for what it's actually good for.
Not making your servers SSHable from the public Internet is absolutely worth it though, and has better simplicity than exposing them (which requires setting up firewall/NAT routes).
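And "not SSHable from the public Internet" really is only a line or two; a sketch of the two usual ways to do it (addresses are placeholders):

    # /etc/ssh/sshd_config: only listen on the internal interface
    ListenAddress 10.0.12.34

    # or at the firewall: only accept SSH from the internal block
    iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP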
To be clear, we haven't been talking about the OP port-shuffling scheme for many posts now in this subthread. We're talking about not having your servers be externally SSHable, period.
Is there any reference material about the culture of "if it's in the intranet it's safe"? I have had this problem with some enterprise clients, but I would like to have reference material that I can use as an authoritative source.
An obvious thing that springs to mind is that the default campus network design found in all the standard Cisco reference designs led to basically everything being vulnerable when the last Exchange exploit hit or the last SolarWinds issue occurred. But I would like to have some sources so that I can make a better case to senior management.
90s security was about protecting perimeters and network boundaries. That kind of approach, network segregation/firewalls to keep your data secure, leads to the idea that you are magically protected across impenetrable network boundaries. Which leads people to think insecure protocols are OK on the LAN, or patching policy can be slower etc. These days you would treat the LAN as untrusted and start from there. Assume already compromised. And focus on people, processes, technology, and data. Where is the corporate network boundary these days anyway in the COVID/WFH era? People's homes with all their insecure equipment? Of course you still would have your network segmentation. But as part of defence-in-depth. You just assume it's ineffective or will be circumvented, which it often trivially is: phishing, social engineering etc.
I know all of these. I also don't need a reference to Zero Trust or beyondcorp. What I'm asking for is specifically authority that can be quoted in an enterprise context to make a case for these issues.
I would however also like to hear cases against Zero Trust and BeyondCorp. The most obvious problem I see with the old approach is that engineers in those environments often can't get their work done, and when security punches holes into the old system to accommodate them, the whole thing becomes far more insecure than anyone is actually aware of.
The current reinvention of not trusting an intranet goes by "zero trust" and "beyondcorp" in the consultancy and IT-management whitepaper circles, but they pile on a bunch of dynamically configured tunneling and antivirus/client PC attestation things. The previous iteration was more meme-ish; search for "there is no perimeter".
It is, to say the least, not conventional wisdom among security engineers that simply having network segmentation is detrimental to security. The concepts you're alluding to --- "Beyond Corp" and "Zero Trust" --- are subtle, and heavily cargo culted.
Avoiding something like this because the possibility exists that someone will do something else wrong is just silly.
Also, the amount of CPU taken by brute force ssh is not zero. There are plenty of good reasons for this. Even if this by itself isn't the best implementation, it's an example of how to make things much, much harder on attackers.
The amount of CPU taken by brute force SSH on any modern system is negligible - unless we're talking about traffic levels that would qualify as a DDoS. Maybe 0.02 points on a standard unix load scale. In any case you should have something like fail2ban or its equivalent that blackholes traffic from repeated failed attempts to authenticate, not just to your public facing ssh daemon, but lots of other things. The default debian fail2ban daemon configurations, easily toggled on or off to watch various log files, are quite sensible.
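For reference, enabling the sshd jail is about four lines in a jail.local override; the option names are standard fail2ban, but the thresholds below are just illustrative (older fail2ban wants plain seconds instead of the m/h suffixes):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5      # failed attempts before a ban
    findtime = 10m    # counted within this window
    bantime  = 1h     # how long the offender is blackholed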
> Or is completely firewalled off from the world, and only accessible once you've authenticated yourself to your VPN. Or only reachable once you first authenticate (public/private keys, two factor crypto key auth, etc) to a bastion host, and then reach the system from the bastion.
There's a lot of attack surface in there. Port-knocking is supposed to be a way to reduce attack surface. It's a belt-and-suspenders approach to the reality that even fully patched openssh has exploitable bugs.
Using this tool, a MITM with an openssh 0day can just follow you in. KnockKnock [0] and tools like it do not suffer from this defect. This tool is conceptually similar to KnockKnock, using OTP instead of a monotonic counter. Using OTP opens it up to replay attacks.
> Port-knocking is supposed to be a way to reduce attack surface.
No, it's a bet that your port knocking tool has less (or better tested) attack surface than OpenSSH.
OpenSSH is pretty thoroughly tested by now, and the pre-auth parts run with very few privileges.
The specific port knocking tool linked to above seems to expose very little, but there's still some logging going on that wouldn't happen otherwise and the potential for logic bugs in the python stuff. It's not an obvious bet to take.
Not much of an insight perhaps, just an observation. Risks are notoriously hard to quantify.
But where there's an attack surface there is a risk. There's logging and parsing of logs going on here.
Does that translate to practical risk, in the sense that your system will get owned this way? Personally I wouldn't consider it very likely. A Linux box isn't likely to get popped via a plain open OpenSSH, but probably not via this Python log parser either. It's still not a bet I would take.
There's so much going on in a network stack that I would look for bugs there before pre-auth OpenSSH, but one does not know for certain until after the fact.
> There should be no need to do this if you have a properly configured public/private key auth setup and disable password based login.
Maybe, but I find that simply moving off of default port 22 drops the number of people attacking me by at least two orders of magnitude. That's not nothing.
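For completeness, that's a one-line change on the server plus a client-side alias so nobody has to remember the port (the port number here is an arbitrary example):

    # /etc/ssh/sshd_config
    Port 2299

    # ~/.ssh/config on clients
    Host myserver
        HostName myserver.example.com
        Port 2299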
Yeah I suspect that up to date openssh with a config that passes ssh-audit's checks, with a fail2ban config, along with an ed25519 key unlocked by a yubikey will be entirely adequate security for SSH. Then time would be better spent securing VPNs and reducing internal trust.
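Roughly what that looks like in practice, assuming OpenSSH 8.2+ on both ends and a FIDO2-capable YubiKey (ssh-audit is a separate tool; the hostname is a placeholder):

    # generate a hardware-backed key; the private part never leaves the YubiKey
    ssh-keygen -t ed25519-sk -C "alice@laptop"

    # audit the server's key exchange / cipher / MAC offerings
    pip install ssh-audit
    ssh-audit myserver.example.com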
A bit like having a front door that would withstand a C4 blast, while all your windows are shattered.
I don't think anyone but the most uninformed would argue that fail2ban is an actual security measure; it's more of a log-file annoyance reducer. Obviously, if you leave remote root enabled and the root password is one of the few dozen thousand common words that exist in public-domain password data sets, fail2ban isn't going to help much: consider all the random botnet things out there that try a popular list of dozens of common usernames (admin, root, webmaster, etc.) with common passwords.
With things other than SSH it can also be effective as the most rudimentary first-level filtering of spam: the various things that attempt to relay mail through my server first get themselves banned, and then banned for a longer time after they keep retrying. Again, the goal is primarily to have less cluttered postfix logs.
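That case is covered by the stock filters too; a sketch of the jail.local entry I mean (the thresholds are just examples, and bantime.increment needs fail2ban 0.11+):

    # /etc/fail2ban/jail.local
    [postfix]
    enabled  = true
    maxretry = 3
    bantime  = 24h
    bantime.increment = true   # repeat offenders get progressively longer bans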
fail2ban is the same idea as having a high number of rounds on password hashes to slow down attackers, and it takes about 30 seconds to install and configure. It makes a lot more sense than the title here, but it's only useful as security-in-depth and can't replace other good practices. A high number of rounds on a password hash is equally useless if you use "password123" or something like that.
I've also seen significant reductions in idle CPU by using it and sending offenders to the timeout bin for 24h.
I wasn't calling you "most uninformed"; your statement is totally correct. I was referring to anyone who might argue that fail2ban is purely useless as security theater. What I said was that fail2ban is useful as an annoyance/log-clutter reducer, but not something that's an actual security measure suitable to protect a poorly configured sshd.
I've often used fail2ban not as a security control, but a log hygiene solution. I don't need 10000s of failed login attempts in my logs, it's annoying.
I have full faith it's not stopping a server compromise, but it absolutely keeps the noise level down.
I had inherited a server (Arch Linux) which ran something like fail2ban (I can't remember what it was called). It slowed down the machine tremendously, because the iptables lists became very big and every packet started taking up too much CPU. I had to disable it (switching to whitelisting instead). Did you ever encounter something like this with fail2ban?
At a previous job I cleaned up after such a mess. They had fail2ban adding thousands of rules without ever deleting them automatically. I replaced it with a PAM module that maintained an ipset of addresses with failed login attempts.
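The nice property of ipset is that the kernel does one hash lookup against a single set instead of walking thousands of individual rules, and entries can expire on their own. A rough sketch of the manual equivalent (set name and timeout are arbitrary):

    # one set, one rule, regardless of how many offenders
    ipset create ssh-banned hash:ip timeout 86400     # entries auto-expire after 24h
    iptables -I INPUT -p tcp --dport 22 -m set --match-set ssh-banned src -j DROP

    # banning someone is then just
    ipset add ssh-banned 192.0.2.15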
The fact that there are dozens of similar solutions out there says otherwise. These aren't the kind of tools that people build for fun. They fill a need. Remember, perfect can be the enemy of good.
> And of course keep up to date on openssh patches and security advisories.
What about 0-day security breaches? A fix often takes months to prepare before it is available to end users. All that time your system is vulnerable.
Your point seems valid and reasonable; however, looking a few years into the past, there was a situation where you would have been screwed with that setup: https://www.debian.org/security/2008/dsa-1571
That was the infamous security flaw where SSH keys generated on Debian/Ubuntu were always drawn from a set of only 32,768 possible keys, due to a lack of entropy in key generation. So if your SSH setup had been compromised like this, the approach in the article would have provided an additional layer of security.