People have been misinterpreting "security by obscurity is bad" to mean that any obscurity or obfuscation is bad. It originally meant "if your only security is obscurity, it's bad".
Many serious real-world scenarios do use obscurity as an additional layer, if only because sometimes you know a dedicated attacker will eventually get through. What you are looking for then is to delay them as much as possible, and to make a successful attack take long enough that it's no longer relevant by the time it lands.
In nature, prey animals will sometimes jump when they spot a predator[1]. One of the explanations is that this is the animal communicating to the predator that it is a healthy prey animal that would be hard to catch and therefore the predator should choose to chase someone else.
I think we can kind of view obscurity in the same way. It's a way to signal to a predator that we're a hard target and that they should give up.
Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.
Of course, sprinkling a little bit of obscurity on top of a good security solution might provide an incentive for attackers to go someplace else. And I can't help but think of the guy who was trying to think of ways to perform psychological attacks against reverse engineers [2].
>I think we can kind of view obscurity in the same way. It's a way to signal to a predator that we're a hard target and that they should give up.
This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here (or nothing here worth your time). One of the best examples (it's in the article!) is changing the default SSH port. Just by obscuring your port you can usually filter out the majority of break-in attempts.
The only way security through obscurity signals to "predators" is if they've seen past your defence, and thus defeated the obscurity. Obscurity (once revealed) is not a deterrent. Likewise an authentication method (once exploited) is not a deterrent.
>Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.
This is true of basically any exploit. Look no further than Metasploit. Another example: a worm is an exploit that automates itself.
> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here (or nothing here worth your time).
Most of the usages of "security through obscurity" that I've seen dissected and decried haven't been in the sense that something was being hidden, but rather that something was being confused. For example, using base 64 encoding instead of encrypting something. Or running a code obfuscator on source code instead of making the code actually secure.
Either way the economic costs that I'm talking about are valid. If an attacker sees that your SSH port isn't where it's supposed to be, OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet that's 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.
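(For the curious, a toy sketch of that hypothetical preamble gate in Python; the magic value and port number are made up for illustration:)

    import socket

    MAGIC = b"\xff" * 25  # the hypothetical 25-byte preamble from above

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))  # made-up port standing in for the hidden service
    srv.listen()

    while True:
        conn, _addr = srv.accept()
        conn.settimeout(2.0)
        try:
            # toy simplification: assume the whole preamble arrives in one recv
            knock = conn.recv(len(MAGIC))
        except OSError:
            conn.close()
            continue
        if knock != MAGIC:
            conn.close()  # silently drop anyone who doesn't send the magic bytes
            continue
        conn.sendall(b"ok\n")  # a real setup would hand off to the actual service
        conn.close()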
These are slightly different usages of the same word, but the effect looks the same to me. More investigation or automation can make the obscurity go away, but it does make things a bit harder.
Fair point! Obscurity as confusion is not what I had in mind, but your points on confusion are totally valid. Your analogy with predators works better here.
Using base64 encoding, or encrypting your database, are both examples in the article. While I agree base64 is super trivial, the point about either of these is defence in depth. In the language of the article, it's reducing likelihood of being compromised.
>If an attacker sees that your SSH port isn't where it's supposed to be, OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet that's 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.
This is semantics. Personally I'd say if an attacker cannot sense anything to connect to, there is no "signal" you're sending. Rather, you're not sending a signal that you're a threat; you're not sending a signal at all, because you're functionally invisible. Otherwise, we could say literal nothingness is sending the same signal that your server is. We agree on the substance here, i.e. the obscurity increases the economic cost of hacking and works as a disincentive, so we may just agree to disagree on the semantics.
Most people have firewalls configured to simply drop traffic not destined for open ports, in which case there is no response as the traffic never makes it beyond the firewall.
“Security through obscurity” means something like e.g. “uses a bespoke unpublished crypto algorithm, in the hopes that nobody has put in the effort to exploit it yet.”
Usually this is a poor choice vs. going with the published industry standard, because crypto is hard to get right, and people rolling their own implementations usually screw it up, making life much easier for dedicated attackers than trying to attack something that people have been trying and failing to breach for years or decades.
Software makers for example typically don’t publish the technical details of their anti-piracy code. But this usually doesn’t prevent software that people care about from being “cracked” quickly after release.
Banking software uses all sorts of security through obscurity. In fact, Unisys used to make custom 48-bit CPUs for their ClearPath OS to make targeting the hardware very difficult without inside knowledge of the chip architecture.
You are making the same argument this article is trying to explain to you. Security by obscurity is not bad because on its own it's not enough; it's good because, coupled with other layers, it adds security.
I have been told to remove security by obscurity layers from systems by people that don't grok this.
Security was, in a few cases, reduced to nothing.
Systems that relied on a single industry-standard approach were laid totally open on the Internet by one misconfiguration or one published CVE.
Any other layer would have helped, however "insecure", but they were removed due to the misconception that the layers themselves were "insecure".
I would go so far as to say the first layer should always be security by obscurity for any unique system.
If you fire up a web server with the first security requirement that each HTTP request must carry the header X-Wibble: wobble, I promise you this layer of security will be working hard all day long. Cheap, impossible to get wrong; it's not sufficient, but it works.
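A minimal sketch of that header check, assuming Flask (the X-Wibble/wobble pair comes from the comment above; answering 404 rather than 403 is my choice, so the endpoint looks like it doesn't exist):

    from flask import Flask, abort, request

    app = Flask(__name__)

    @app.before_request
    def require_wibble():
        # Pure obscurity layer: real authentication still happens in the handlers.
        if request.headers.get("X-Wibble") != "wobble":
            abort(404)  # pretend there's nothing here

    @app.route("/")
    def index():
        return "hello\n"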
Using a non-standard SSH port is a bad example because nmap can see through that deception in a few seconds. Any attacker who is looking for more than just the lowest of low-hanging fruit will not be even slightly deterred.
A better example would be a port-knocking arrangement that hides sshd except from systems that probe a sequence of ports in a specific way. This is very much security by obscurity, because it's trivial for anyone who knows the port sequence to defeat, but it's also very effective as anyone who doesn't know the port sequence has no indication of how to start probing for a solution.
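To illustrate how cheap the client side is, a hedged sketch of a knocker (host, sequence, and timing are all invented):

    import socket
    import time

    HOST = "203.0.113.10"          # hypothetical server (TEST-NET address)
    SEQUENCE = [7000, 8000, 9000]  # hypothetical secret knock sequence

    for port in SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((HOST, port))  # the SYN itself is the knock
        except OSError:
            pass  # closed/filtered ports are expected; the firewall saw the attempt
        finally:
            s.close()
        time.sleep(0.3)  # let the firewall register each knock in order
    # if the sequence matched, the firewall briefly opens the real sshd port for this IP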
> Using a non-standard SSH port is a bad example because nmap can see through that deception in a few seconds.
Compared to milliseconds. Do yourself the favor and open one sshd on port 22 vs one on a port >10000, then compare logs after a month. The 22 one will have thousands of attempts; the other one hardly tens, if any.
The 99% level we're defending against here is root:123456 or pi:raspberry on port 22, which is dead easy to scan the whole IPv4 space for. 65K ports per host, though? That takes time and, given the obvious success rate of the former, is not worth it.
Therefore I'd say it's the perfect example: It's hardly any effort, for neither attacker nor defender, and yet works perfectly fine for nearly all cases you'll ever encounter.
I know we've spoken in another thread, but I think it's important for people to understand that this sshd thing is a perfect example of why it isn't this easy. You reduce log spam by moving to a non-privileged port, but you also reduce overall security: a non-privileged user can bind to a port above 10k, but can't bind to 22. If sshd restarts for an upgrade, or your iptables rules remapping a high port to 22 get flushed, that non-privileged user who got access via an RCE on your web application can now set up their own fake sshd and listen in on whatever you are sending, if it manages to bind to that port first and you ignore the host key mismatch error on the client side.
Or you can implement real security, like not allowing SSH access via the public internet at all and not have to make this trade off.
Here's a counter-example (as I said elsewhere in this thread):
Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
I'll also point out that we're generally talking about different threat vectors here, so it's good to lay them out. I don't think obscurity helps against a persistent threat probing your network, it helps against swarms.
> a non-privileged user can bind to a port above 10k, but can't bind to 22. If sshd restarts for an upgrade, or your iptables rules remapping a high port to 22 get flushed, that non-privileged user who got access via an RCE on your web application can now set up their own fake sshd and listen in on whatever you are sending, if it manages to bind to that port first and you ignore the host key mismatch error on the client side.
This is getting closer to APT territory, but I'll bite. If someone has RCE on your SSH server, it honestly doesn't matter what port you're running on; they already have the server. You're completely right that it would work if you have separate Linux users for SSH and the web server. Unfortunately that's all too rare in most web servers I see (<10%), as most just add SSH, secure it, and call it a day (even worse when CI/CD scripts just copy files without chowning them). But let's assume it here. In reality, even if you did have this setup, this is a skilled persistent threat we're talking about (not quite an APT, but definitely a PT). They already own your website. Your compromised web/SSH server is being monitored by a skilled hacker; it's inevitable they'll escalate privileges. If they're smart enough to put in fake SSH daemons, they're smart enough to figure something else out. Is your server perfectly patched? Has anyone in your organization re-used passwords on your website and Gmail?
You're right that these events could happen. But you have to ask yourself which of your actions will have a bigger impact:
* Changing to non-standard SSH port, blocking out ~50% of all automated hacking attempts. Or port-knocking to get >90% (just a guess!).
* Use standard port, but you still have an APT who owns your web server and will find other exploits.
>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
Yep! And I should be clear: I am not saying just don't change the SSH port. I'm saying if you care about security, at a minimum disallow public access to SSH and set up a VPN.
>Unfortunately that's all too rare in most web servers I see (<10%), as most just add SSH, secure it, and call it a day (even worse when CI/CD scripts just copy files without chowning them).
I'm a bit confused here. In every major distro I've worked on (RHEL/Cent, Ubuntu, Debian, SUSE) the default httpd and nginx packages are all configured to use their own user for the running service. I haven't seen a system where httpd or nginx are running as root in over a decade.
I think the bare minimum for anyone that is running a business or keeping customer/end user data should be the following:
1) Only allow public access to the public facing services. All other ports should be firewalled off or not listening at all on the public interface
2) Public facing services should not be running as root (I'm terrified that you've not seen this to be the case in the majority of places!)
3) Access to the secure side should only be available via VPN.
4) SSH is only available via key access and not password (a quick check for this is sketched below the list)
5) 2FA is required
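For point 4, a rough way to sanity-check a box (a sketch only: it ignores Match blocks and Include directives, and assumes the stock config path):

    from pathlib import Path

    # sanity-check point 4: key-only SSH
    WANTED = {"passwordauthentication": "no", "pubkeyauthentication": "yes"}

    opts = {}
    for raw in Path("/etc/ssh/sshd_config").read_text().splitlines():
        line = raw.split("#", 1)[0].strip()  # strip comments and whitespace
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        # keep the first occurrence, matching sshd's "first value wins" rule
        opts.setdefault(parts[0].lower(), parts[1].strip().lower())

    for key, want in WANTED.items():
        got = opts.get(key, "(default)")
        print(key, got, "OK" if got == want else "!= " + want)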
I think the following are also good practices to follow and are not inherently high complexity with the tooling we have available today:
1) SSH access from the VPN is only allowed to jumpboxes
2) These jumpboxes are recycled on a frequent basis from a known good image
3) There is auditing in place for all SSH access to these jumpboxes
4) SSH between production hosts (e.g. webserver 1 to appserver 1 or webserver 2) is disabled and will result in an alarm
With the first set, you take care of the overwhelming majority of both swarms and persistent threats. The second set will take care of basically everyone except an APT. The first set you can roll out in an afternoon.
Protecting sshd behind a VPN just moves your 0day risk from sshd to the VPN server.
Choosing between exposing sshd or a VPN server is just a bet on which of these services is most at risk of a 0day.
If you need to defend against 0days then you need to do things like leveraging AppArmor/Selinux, complex port knocking, and/or restricting VPN/SSH access only to whitelisted IP blocks.
Except you shouldn't assume that just because someone is on the VPN you're secure.
If the VPN server has a 0day, they now have... only as much access as they had before when things were public facing. You still need there to be a simultaneous sshd 0day.
I'll take my chances on there being a 0day for wireguard at the same time there's a 0day for sshd.
(I do also use selinux and think that you should for reasons far beyond just ssh security)
A remote code execution 0day in your VPN server doesn't give an attacker an unauthorized VPN connection, it gives them remote code execution inside the VPN server process, which gives the attacker whatever access rights the VPN server has on the host. At this point, connecting to sshd is irrelevant.
Worse, since Wireguard runs in kernel space, if there's an RCE 0day in Wireguard, an attacker would be able to execute hostile code within the kernel.
One remote code exploit in a public-facing service is all it takes for an attacker to get a foothold.
I do not run my VPNs on the same systems I am running other services on, so an RCE at most compromises the VPN concentrator and does not inherently give them access to other systems. Access to SSH on production systems is only available through a jumphost which has auditing of all logins sent to another system, and requires 2FA. There are some other services accessible via VPN, but those also require auth and 2FA.
If you are running them all on the same system, then yes, that is a risk.
For a non-expert individual who would like to replace commercial cloud storage with a self-hosted server such as a NAS, do all these steps apply equally?
I am limiting the services to simple storage.
Looks like maintaining a secure self-hosted cloud requires knowledge, effort, and continuous monitoring and vigilance.
Most of those are good practices for a substantial cloud of servers that are already expected to have sophisticated configuration management. They're easy to set up in that situation, and a good idea too because large clouds of servers are an attractive target - they may be expected to have lots of private data that an attacker might want to steal and lots of resources to be exploited.
A single server run by an individual and serving minimal traffic would have different requirements. It's a much less attractive target, and much harder to do most of those things. For example, it's always easy and a good idea to run SSH with root login and password authentication disabled, run services on non-root accounts with minimum required permissions, and not allow things to listen on public interfaces that shouldn't be. Setting up VPNs, jumpboxes, 2FA, etc is kind of pointless on that kind of setup.
>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
But how much of a threat is this? Who's going to drop an ssh 0day with a PoC for script kiddies to use? If it's a bad guy, he's going to sell it on the black market for $$$. If it's a good guy, he's going to responsibly disclose.
>You're right that these events could happen. But you have to ask yourself which of your actions will have a bigger impact:
>* Changing to non-standard SSH port, blocking out ~50% of all automated hacking attempts. Or port-knocking to get >90% (just a guess!).
But blocking 50% of the hacking attempts doesn't make you 50% more secure, or even 1% more secure. You're blocking the 50% at the bottom of the barrel when it comes to effort, so having a reasonably secure password (i.e. not on a wordlist) or using public key authentication would already stop them.
It makes the logs less noisy. And with much less noisy logs it is easier to notice if something undesirable is happening. Also, from my experience, this 50% is more like 99%.
> Unfortunately that's all too rare in most web servers I see (<10%), as most just add SSH, secure it, and call it a day (even worse when CI/CD scripts just copy files without chowning them).
If you made a list of things like this which annoy you, I would enjoy reading it.
> Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
And with all those compromised servers they could easily scan for sshd on all ports.
Well, there are basically two stances you can reasonably take:
1) SSH is secure enough just by using key based auth to not worry about it.
2) SSH isn't secure enough just by using key based auth so we need to do more stuff.
If you believe #1, then you don't need to do anything else. If you believe #2, then you should be doing the things that provide the most effective security.
Personally, I believe #1 is probably correct, but when it comes to any system that contains data for users other than myself, or for anything related to a company, I should not make that bet and should instead follow #2 and implement proper security for that eventuality.
I'm willing to risk my own shit when it comes to #1, but not other people's.
The range in the figures is surprising. I leave everything on port 22, except at home where due to NAT one system is on port 21.
On these systems, since 1 September:
lastb | grep Sep\ | wc -l
160,000 requests (academic IP range 1),
120,000 requests (academic IP range 2),
1,500 requests¹ (academic IP range 3),
1,700 requests² (academic IP range 3),
180,000 requests³ (academic IP range 3, just the next IP),
80,000 requests (home broadband),
14,000 requests (home broadband — port 21),
5,000 requests (different home broadband, IPv4 port)
0 requests (different home broadband, IPv6 port)
The contrast between ¹, ², and ³ is odd: all three run webservers, ² also runs a mailserver, yet they have sequential IP addresses.
I don't bother with port knocking or non-standard ports to ensure I have access from everywhere, to avoid additional configuration, and because I don't really see the point when an SSH key is required (password access is disabled).
Good example, but doesn't help his point, which was:
> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here
An attacker scanning the whole IPv4 space won't think "ah, there's no ssh on port 22, there's no ssh to attack". They will think "yep, they did at least the bare minimum to secure their server, let's move on to easier targets".
I have 0 in the last 14 days on port 2xxx. Probably depends a lot on your IP range (I'd assume AWS etc is scanned more thoroughly) and whether you've happened to hit a port used by another service. But even in commercial ranges, I've seen hardly any hits on >10k.
But I have only anecdotal evidence as well, so my guess is as good as yours.
The article addresses this. He did a poll, and just under 50% of people use the default ports. So just by changing your default port, you eliminate half the break-in attempts.
Now you're absolutely right that this only deters less-skilled/inept hackers, a more competent hacker easily gets past this. But it's worth dwelling on the fact that we still stopped a substantial number of requests. Port knocking is definitely an improvement (i.e. more obscure). I'd guess with port-knocking more than 90% (even 99%) of attempts would completely miss it. The goal here isn't to rely completely on obscurity. It's security in depth. Your SSH server should still be secure and locked down.
The other question with this is what's your threat vector. Most people decry security through obscurity because an APT can easily bypass it. They can, but most people trying to hack you are script kiddies. Imagine an SSH exploit was leaked in the wild – all the script kiddies would be hammering everything on port 22 immediately.
The poll is my biggest issue with an otherwise agreeable article, the sample size and representation on Twitter doesn't make for anything close to reliable percentages.
I understand its use as a demonstrative aid but especially in the context of security, hinging your policies on the outcome of a Twitter poll seems like... well, security through obscurity.
Maybe a bit nitpicky but I think port-knocking is in kind of a grey area. You can think of it as a kind of password where you have to know the correct series of ports. Since the number of ports is quite large, there is also a correspondingly large number of possible port sequences so you can't, in principle, brute force it without a lot of effort.
> Maybe a bit nitpicky but I think port-knocking is in kind of a grey area. You can think of it as a kind of password where you have to know the correct series of ports.
Yes.
But you also have to know that port knocking is enabled at all. That's the obscurity part.
Eh, I think it's as others have expressed in this thread.
Security by obscurity is bad not because it is bad to have well-secured countermeasures, but because it encourages poor thinking with regard to the methods you have in place, and additionally because it usually introduces extra, unintended attack vectors.
You suffer from your own 'obscurity' - whether it's because you forgot you had to port knock or use a different port, or because you somehow managed to leave a new exploit in some obscured code due to a bug in the obscured code, or because you managed to open yourself up to an RCE with the port knocking, or some other obscure scenario you did not intend to create from whatever obscurity you created.
I think this is different from defense in depth, which just says to have more than one countermeasure in place, and to keep the countermeasures 'separate' but well defined: port knocking, but on a different box than your VPN box, on a different box than your SSH box.
We aren't told to use 'well-defined' passwords like 1234, obviously. If the point of 'obscurity' is 'secrets', that's all well and good, but that's not security, that's a password. Have a password, by all means, but tunnel it over TLS and use well-defined security paths instead of creating unnecessary risks.
> [port-knocking] is very much security by obscurity, because it's trivial for anyone who knows the port sequence to defeat
In what way is this different from a passphrase you don't know? I can trivially defeat any password which I already know, too :D
While discovering a non-standard ssh port is easy, discovering a port-knock sequence out of a possible ~65k ports per knock is impractically difficult (assuming the server has any kind of minimal rate limiting). A sequence of eight knocks will need up to 65k^8 attempts - and that's assuming you already know which port will be opened, which of course you won't.
Relying on just the port knocking of eight ports already gets you ~128 bits of entropy (65536^8 = 2^128), vastly more than a random 8-char alphanumeric latin-charset password at ~48 bits.
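The arithmetic, for anyone who wants to check it:

    import math

    knocks, ports = 8, 65536
    print(math.log2(ports ** knocks))  # 128.0 bits for eight knocks
    print(math.log2(62 ** 8))          # ~47.6 bits for an 8-char alphanumeric password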
I agree with you that the example is not the best, but obscurity has a lot of benefits. We did an experiment with a few students some years ago on obscuring a WordPress installation to catch people scanning for certain plugins. That gave us the ability to use the regular paths as honeypots. It gives you a way to detect 0-day attacks as well.
I just turn off password authentication on SSH and moved to keys, then moved to IPv6. The automated scans haven't made it to v6 yet. The only better thing I could do is have an external v4 SSH honeypot that moves as slowly as possible to tie up a (tiny) resource.
IPv6 seems to be a good example of security by obscurity, with up to 64 bits of random IP addresses per machine, making scanning impossible in practice?
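Back-of-the-envelope on that, assuming a generous million probes per second against a single /64:

    addresses = 2 ** 64    # possible interface IDs in a single /64
    rate = 1_000_000       # probes per second (generous for one scanner)
    years = addresses / rate / (3600 * 24 * 365)
    print(f"{years:,.0f} years")  # ~584,942 years to sweep the subnet once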
Calling an administrator account “9834753” obscures its purpose and may reduce the likelihood of a compromise attempt as opposed to “databasesuperadmin”. But that doesn't mean that you don't need a good security token.
I find the SSH example slightly odd, when all you need to do is disable password authentication and root access. Moving away from port 22 just seems a little excessive?
In other words, changing the default SSH port number is similar to using camouflage. It just helps hide that something is there, but it does nothing to improve the defense once spotted. However, if the majority of predators don't see you, then the rest of your defenses are needed at that time.
It's also an indication that there are no default passwords in use. So even if you know what port SSH lives on, there's a lower ROI to attacking it than a default port.
I think it is the opposite -- systems with rigorous security tend to be more open, because the designers are confident they understand their system. In contrast, systems that practice security through obscurity are often owned/managed by people afraid of what will go wrong.
We should distinguish obscurity from intentionally hiding the configuration, which makes attackers undertake discovery, and hence can lead to detection. But your internal red team / security review should have all the details available. If loss of obscurity leads directly to compromise then you don't have security. Cf insider threat.
Your example is advertising, which is the opposite of security through obscurity.
Obscurity is another layer of hiding or indirection: like the owl has camouflage and it has a hole in a tree.
Advertising your fitness (your stotting metaphor) is effective when you are part of a herd and the attacker will only attack the weakest in that herd and then be satisfied. Like double-locking your bike next to a similar bike that has a weaker lock.
Computer security is different because usually either:
a) everyone in the herd is being attacked at once (scattergun/IP address range scanning), or
b) you are being spear targeted individually (stotting won’t work against a human hunter with a gun, and advertising yourself won’t help against a directed attack).
An example of advertising your security might be Google project zero, or bug bounties.
>An example of advertising your security might be Google project zero, or bug bounties.
That's more akin to a gecko sacrificing its tail, IMO. You're taking a predator that's capable of a successful attack and rewarding them for not doing it, at some cost to yourself. It provides an easy and less risky way of getting paid.
Using obfuscation is often a signal that you are a weak target, because there are a lot of places that use obfuscation but nothing else. A better indicator that you are a hard target is to enable common mitigations like NX, stack cookies, or ASLR.
There is one giant hole in your argument: both stack cookies and ASLR are mitigations that are nothing more than automated security through obscurity in the first place.
I assume you're equating picking a random SSH port with scanning for an ASLR slide or guessing a stack cookie, but they are different situations: processes that die are generally treated quite seriously, and they leave "loud" core dumps and stack traces as to what is going on–usually this gets noticed and quickly blocked. With SSH you can generally port scan people quickly and efficiently in a fairly small space (and to be fair, 16-bit bruteforces are sometimes done for applications as well, when possible)–and the "solution" here where you ban people that seem to hit you too often is literally what you are supposed to be running in the first place.
And in general, the sentiment was "if you are using those things, you are likely to have invested time into other tools as well such as static analysis or sanitizer use" which are not "security through obscurity" in any sense, whereas the "obscurity" that gets security people riled up is the kind where people say things like "nobody can ever hack us because we changed variable names or used something nonstandard" because it is usually followed with "…and because we had security we didn't hash any passwords".
How so?
Stack cookies and ASLR are a form of encryption, where an attacker has to guess a random number to succeed in an attack.
Obscurity really just boils down to a secret that doesn't have mathematical guarantees. It's doing something that you think the attacker won't guess, just like an encryption key, but without the mathematically certified threat model, so you just hope that the attacker is using a favorable probability distribution for their guesses.
The attacker, who has already compromised the integrity of the system in question, has to guess or probe for a random number with relatively low entropy in order to do something useful and straightforward with that already compromised system.
Yeah, that's what I was trying to get at with my "in the age of automation" comment. If you go to a period in history without automation, then obscurity is going to be a lot more effective. And that's why I think people still want to go back to it. Obscurity is much easier to wrap your mind around than RSA, et al.
However, the psychological warfare video does make me think that there's still a place for obscurity after you've already used actual security measures. If you can find any technique that makes your attacker work harder vs some other target, then it feels like there's an economic value to doing it as long as the cost to you is relatively low.
The only downside I see immediately is that there's a counterweighted risk to obscurity in your security layer: you can confuse your own users (or yourself).
Many security tools I've used are downright user hostile in how little information they provide the end-user (or the admin!) regarding why an auth process failed. It incentivizes people to simplify or bypass the system entirely when they can't understand the system.
Semi-related: any time I have written a protocol with a checksum, I implement a 'magic checksum' that just passes, plus a debug mode that enables it along with diagnostics. The reason is that if something's wrong with a packet of data, the best thing to do is usually to ignore it completely, but that makes development insane. Having two modes gives you the best of both worlds.
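Something like this sketch, say (CRC-32 and the magic value are just stand-ins for whatever the real protocol uses):

    import zlib

    DEBUG = False        # flip on during development only
    MAGIC = 0xDEADBEEF   # hypothetical "always passes" checksum for debugging

    def checksum_ok(payload: bytes, claimed: int) -> bool:
        if DEBUG and claimed == MAGIC:
            print("debug: magic checksum accepted for", payload[:16])
            return True
        # normal mode: callers silently ignore packets that fail this check
        return zlib.crc32(payload) == claimed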
When moving through some scary streets in my travels, I would shout to my companion in the local language, trying to signal to potential thugs that they should choose to chase someone else. I did it once in Russia in a dodgy neighborhood while buying vodka, and back in the 1990s when westerners were under threat in the Middle East.
> In nature, prey animals will sometimes jump when they spot a predator[1]. One of the explanations is that this is the animal communicating to the predator that it is a healthy prey animal that would be hard to catch and therefore the predator should choose to chase someone else.
I think this analogy perfectly explains my hostility to security by obscurity. When I see a system that uses standard ports and demonstrates best practices, I think "oh well, they probably know what they are doing." When I see a system using strange ports and / or has extra extraneous crypto, I think "well, maybe this guy is an idiot" and take a deeper look.
I've heard a better analogy - security by obscurity is like camouflage on a tank. A tank has massive armor and a terrifying gun to defend itself with. But even a half-assed camouflage can delay enemy reaction by a few seconds. Sometimes it's all it takes, because it lets you shoot first. In addition, the cost of camouflage paint or a net is laughably low and can be replaced in the field. It's simply an extra layer of protection and a very inexpensive one.
It's also a terrible thing to say to your pointy-haired boss, which is in part where I think the heresy of talking about it comes from. If you say "security by obscurity", the only thing your boss hears is "obscurity": "oh yeah, great, that's cheap!" And you end up in camouflage in the field, with no tank.
I don't like the article the more I think about it; it speaks of inexperience and perhaps of not understanding the concept well, and it is important to accurately articulate and distinguish concepts. No one is going to argue that camo isn't an advantage to a soldier, but it is not security in any meaningful sense, any more than camouflage is a bunker, or a trench, or a tank.
And camo comes with a real downside too, just like in the field: if you're camouflaged too well, you're apt to take friendly fire or be missed by artillery lobbing a shell.
An argument against obscurity is that it adds additional pain for your "regular" users (as in developers/3rd-party developers/app developers) while being only a small deterrent against unauthorised users (as they will be able to circumvent the "obscurity layer" and replicate their method to other bad actors).
edit: In the first sentence "against" is not what I wanted to say; what I wanted to say is that it "downgrades its effectiveness".
I agree that obscurity can and sometimes should be a layer of security.
Even though someone took the challenge to de-obfuscate most (but not all) of the protections, just look at how much effort is required for anyone else to even follow that work. More importantly, consider how much effort is required relative to other platforms. It's enough of a pain that spammers and abusers are likely to choose other platforms to attack.
If you cannot distinguish a trusted party from a malicious party everything is then potentially malicious. This is why we have certificates, certificate revocation, and trust authorities.
And that works great until a trust authority gets compromised. It's for this reason that the US DoD has its own root certificate authorities, and thus many military websites actually look like they have invalid HTTPS certs. Browsers don't ship with DoD root certs installed as trusted.
Yeah, I am on a DODIN as I write this. In the civilian world, a CA falls back on a decentralized scheme called Web of Trust, which allows CAs to reciprocate certs from other CAs and invalidate other CAs as necessary.
The DoD chose to create their own CA scheme originally for financial reasons, in that over a long enough timeline new infrastructure pays for itself with expanded capabilities while minimizing operational costs dependent on an outside service provider. This was before CACs were in use.
Thanks for the additional info; I didn't know (but probably should have assumed) that finance was the primary motivator. I just had to implement CAC authentication for a webapp, and they still use their own CAs for client-side certs (aka CACs), so it seems like it was a pretty savvy investment at the time that's not going away anytime soon.
Agreed. The maxim warning against "security from obscurity" is often reduced to an irrational comprehensive avoidance of obscurity. It's similar to the irrational avoidance of all performance optimization because Knuth warned of premature optimization.
Both reductions lose practical utility by omitting nuance.
* Avoid wasting your time doing performance optimization until tuning is necessary. But definitely take obvious and easy measures to ensure your software is fast, such as choosing a high-performance language or framework with which you can be productive.
* Don't exclusively rely on obscurity. But definitely take obvious and easy measures that leverage obscurity to add another layer of defense, such as changing default ports, port-knocking, or whatever.
To use the same art of reduction to counter the common interpretation: a complex password is, in a manner of thinking, security from obscurity. Your highly complex password is very obscure, hence it's better than a common (low-obscurity) password from a dictionary.
> But definitely take obvious and easy measures that leverage obscurity to add another layer of defense, such as changing default ports, port-knocking, or whatever.
Except that can lead to operational problems down the road. For example: "oh yes, we're nice and secure; not only do you need a 512-bit private key to get into this device, you also need to connect from a secure network."
Then along comes covid, and you can't get into the building.
"Oh dear, you're not on the secure network, you can't come in"
So you spend 2 hours (while your network isn't working right and you're losing customers) finding and getting in through a back door.
I would call that system secure. It does not just rely on an obscure password but is actually restricted by a list of whitelisted networks.
The failure in that case is only that the admin didn't consider that normal work might be done from home at some point or that the middle or upper manager thinks that he should be able to freely administrate his critical infrastructure from anywhere...
IP whitelists break so often for "unanticipated reasons" that I've lost all sympathy for not anticipating it. Doubly so for using a whitelist to lock yourself out of the whitelist admin.
It's so common the security community should make it a meme to spread awareness: Don't get pwned by DHCP while running from SSH 0-day RCEs.
> Except that can lead to operational problems down the road.
In the example you mention, the 'security' is working by design, but the operational parameters changed, which in turn made that security model unsuitable - so it is the parameter change, rather than the 'security', that led to the problems.
The original system could have been just as 'obscure' but also included an appropriately secured mechanism that allowed for this kind of remote access / disaster scenario.
That isn't obscurity. And in your scenario, there was a security hole if the requirement was that you had to be on the intranet, but someone was able to gain access from the outside.
The number of times I've seen people shitting all over port knocking is truly confusing. Since we added it several years ago, we've not had a single case of hackers trying to break into sshd. Before port knocking, hundreds a day, even though it was on a very unusual port.
I try to tell people this when they poo-poo port knocking, but they just don't get it.
But serious question -- what exactly is the benefit? Before, it's not like they were getting in anyways if you were using keys.
So I confess I still don't "get it". Unless you just want cleaner logs or something. I assume you're still getting the same number of initial connection attempts per day, but just not recording them?
Is it something to do with network or CPU consumption related to failed subsequent attempts by the same actor? (Which, the same as port knocking, should be rate limited anyways?)
There have been bugs found in SSH server implementations that allowed limited remote code execution or even authentication bypasses. Missing an update or two isn't bad when nobody can figure out how to connect to your server.
Of course you have to update at some point. However, if someone drops a zero day on your SSH server while you're asleep you're probably glad that you've got a secret sauce to protect your server, letting the vulnerability bots focus on other servers.
If port knocking existed in a vacuum, sure. It'd be great.
The issue is there are other options that are better - like VPN-only access to SSH - that you can use instead of (or in addition to) it.
If everyone advocating for port knocking was also saying set up VPN-only access, sure. It's an additional authorization factor, where ports are used as a proxy for a PIN. But I haven't seen a single person in here saying they use it in addition to a VPN - people are saying it's their primary form of protection.
You can set up a WireGuard VPN in as much time as it takes to set up port knocking. Now you have all of the benefits port knocking provides, and more. And you could even still set up port knocking in addition to the VPN if you really wanted to, but I would argue there's not much point.
Curious, how does this work? I am not very familiar with VPNs.
Is the VPN connection set up for the SSH session only?
What if someone needs to have multiple SSH session, going to different networks altogether?
I'm thinking it could be pretty impractical to go onto a whole other network to open an SSH session.
It depends on the implementation. For a client <-> server VPN, it creates an interface on your local machine that corresponds to the network address range for the VPN, and tunnels traffic to the remote end.
For a site to site VPN, two appliances create a tunnel between them, and traffic is routed over that tunnel via the same sort of routing rules you normally use.
> Is the VPN connection set up for the SSH session only?
It can be. It can also be configured for all traffic, or some other combination.
> What if someone needs to have multiple SSH session, going to different networks altogether?
You can have multiple VPN connections to multiple networks. It can get complicated if the VPNs are using overlapping IP space.
> I'm thinking it could be pretty impractical to go onto a whole other network to open an SSH session.
I'm not entirely sure why. Millions of people use VPNs every day for a variety of reasons, including SSH. I currently have 8 saved VPN configurations in my WireGuard client, and connecting to one is as simple as clicking on the client and picking the one I need in the dropdown. Then I SSH as normal, except it's to the server's private IP and not its public one.
Why aren't you concerned that bugs will be found in your port knocking implementation?
I think the main concern with port knocking is that it's observable. You're effectively sending your password in clear, so if someone can intercept or overhear your traffic then your secret is lost. Cryptographic authentication schemes like SSH itself or VPNs do not have this problem.
Port knocking is a way to decrease the amount of random 0day/brute force scripts finding their way into your server. It will only stop automated scripts and attackers that don't know who you are. It's obviously no protection against incentivized attackers.
A VPN has upsides and downsides. It obviously protects your server a lot better against directed attacks, but when you lose your laptop or when your computer gets ransomware'd, you can't get access to the server anymore.
Furthermore, code execution vulnerabilities have been found against VPN servers because of their immense complexity and OpenVPN can consume quite a lot of resources for a daemon doing nothing. WireGuard has changed the VPN landscape with its simplicity, but if you fear your server may not be updated all too often (because it's partially managed by a customer, because your colleagues might not care to do so after you leave), leaving a simple solution behind can have its upsides.
I'm not advocating that everyone should enable port knocking on their servers to make them secure or anything, but the "port knocking is always bad" crowd is often very loud despite the fact that there are small ways port knocking can improve security with very little effort or increased attack surface.
From what people have told me, the point is to remove automated attempts from the logs, so that when someone actually works out how to try to connect, it becomes a strong signal that you have a real attack, and you can check the logs to see if they are using real usernames or other info suggesting that they know more than random spam attempts would. Normally dedicated attackers blend in with the random noise of the internet.
It is as simple as reducing the attack surface. If attackers can't talk to sshd, they can't try to hack it. In a world where zero-days are real, why chance it?
Why is that so hard to grasp? It still boggles my mind.
The same with the "GnuPG is bad" mantra on Hacker News. There is nothing better than GPG currently for all its functionality, and the only answer you get when asking for a substitute is "don't use this function" or "use some obscure application". Yeah, right.
I agree that there is nothing better than GPG for the narrow scope of encrypting email. But I think there are very few cases where encrypted email is the most secure way to communicate, in lieu of other forms of encryption.
Encrypted email is almost a marginal usage scenario for GPG compared to other uses. It does everything. It is everywhere. Yes, it is big; nobody has to use all of it. Just like C++... oh wait, that is unpopular in the Hacker News bubble too, despite being a juggernaut of a language. It will still be relevant long after Hacker News is no more.
Informed analysis, like complaining about the lack of forward secrecy in something made for non-ephemeral communication - storing files, sending files, digital signatures, etc. Or complaining about backwards compatibility, which is what lets you access and verify your backups and archives from 10 or more years ago.
Show me an ephemeral encryption scheme for something that needs to be readable in the future like that.
But the examples given won't help and are just bad advice in general.
- Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.
- Randomizing variable names is just a nuisance; it won't stop any competent pen tester or attacker.
- Encrypting the database is an odd one. Your program will also have to decrypt the data to use it. Where do you store the encryption keys? In your code? Don't assume obfuscating your code and/or randomizing variables will protect your encryption keys.
You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless. That's the wrong model. It's about costs. Obfuscating otherwise-open code doesn't mean that nobody can ever figure out what it does, but it raises their costs. Randomizing variables raises costs. Encrypting the DB raises costs on an amortized basis (some cracks may get the key and then it may not raise the cost much, but other cracks may only get the data in which case cost is raised a lot). Things are "secure" not when it's impossible for any actor in the world you don't want to get access to get access, but when the costs to those actors exceed the loss you may experience. (Preferably by a solid margin, for various reasons.)
As to whether this is good or bad advice, that depends on how expensive these things are (e.g., encrypting database fields may be very expensive if you write raw SQL calls as your primary DB interface but may be dirt cheap if you're using an ORM that has it as a built-in feature) and your local threat model (e.g., "dedicated, personalized attackers reading your source" is very different from "does it defeat automated scanners?"). You can't know whether these are good or bad ideas without that additional context.
> You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless. That's the wrong model. It's about costs.
This is something that bothered me quite a bit in Bruce Schneier's various comments on airline security. He repeatedly wrote that profiling young Arab men as likely terrorists was pointless, because if it became harder for young Arab men to get through security, terrorist organizations would simply start sending Japanese grandmothers.
But of course where it's relatively easy to find young men willing to die for a cause, it's much more difficult to find grandmothers who will do the same. And where it's relatively easy for an Islamic group based in the Middle East to connect to Arabic social networks, it's much harder for that group to connect to Japanese networks.
No, obviously it's harder to find two old Korean people than one old Japanese person. Everything you've listed is hundreds, thousands, or millions of times more difficult than the young-Arab-man case.
Suppose you take down a plane with a young Arab man, and then you want to take down a second plane. There is a neverending stream of similar men willing to do the job. If your strategy requires you to use elderly Korean couples, you're done after the first plane -- you'll never find a second one.
This is also absent in the analysis of "security theater." I've often felt the "theater" does in fact have a material impact on target selection. One doesn't need a methodology that actually results in better capture of terrorists to divert them to other targets: one just needs a methodology that has plausibility of increasing the risk of failure. The unfalsifiability of "security theater" is actually a feature, not a bug: it means there's always a non-zero weight on its potential risk impact for a terrorist considering air travel as a target.
All other things being equal, the opportunity cost will shift towards targets that have fewer elements akin to "security theater", since it's basically 'money on the table' to de-risk the attack.
So, the real question to ask about "security theater" is not if it has a material impact on human safety with flying, but if its deterrent effect pushes risk to places we'd rather it not go or if the costs of performing it do not outweigh this deterrence benefit. Given the potentially paralyzing effect it would have on the global economy if air travel were covered in a blanket of fear of flying, it's hard to argue that "decentralizing" this risk to other targets is a bad idea.
The problem with heavily focusing on Arabs while paying less attention to other threats is that Arab Islamist terrorists aren't the only problem aviation security needs to deal with.
Focusing most of the security effort on Arabs is a good way to fight the war of 19 years ago, but it leaves the air travel system vulnerable to upstart terrorist movements that see the lack of universal security as an exploitable vulnerability.
For example, there's nothing to say that America's right wing terrorist groups won't decide to switch from shootings and vehicle ramming attacks to attacks on air travel. The TSA ought to be prepared for this, or any other, emerging threat.
You’re only considering one side of the costs. Obfuscation mechanisms also impose a cost on your legitimate users. There’s lots of reasons why you want your users to actually buy in to using your security controls, and annoying controls with highly questionable effectiveness is the best way to kill that buy in. Users will only tolerate so much burden from user facing controls, so you want to make sure all of the controls you impose upon them are actually useful.
The other thing that’s harmful is relying on something to provide security, when it actually can’t. That’s actually going to have a negative impact on your threat model. People will say (they’re even saying it in this thread) that their port knocking or non-standard port usage has cut out the port scanning noise in their logs. But who cares? A properly secured ssh port isn’t going to be cracked by an automated scanning tool. But a poorly secured hidden one will be easily found and cracked by any motivated attacker. You have to implement the proper control anyway, and the obfuscation one ends up providing no benefit while simply annoying your users.
Security by obscurity is dumb; it doesn't provide any benefit. Security in depth doesn't mean multiple layers of controls that don't work add up to one that does. Obscurity is just a way of spending your scarce resources, and your scarce command of your users' attention, on controls that don't work. So in reality, they also always come at the opportunity cost of controls that actually do.
Having to perform source audits on code with obfuscated variable names added almost no time to the task.
Again, these methods work against not-so-determined attackers. If you as a defender have limited resources, where would you choose to spend them: on defending against unskilled attackers, or against the attackers that are more likely to cause you damage?
>but when the costs to those actors exceed the loss you may experience.
There are several problems with this logic. First, it kind of presumes that there is a symmetry in the costs for the attacker and defender. Wise defenders will use methods that have high leverage. Also, the attacker doesn't care at all about your costs. They care about what they can get from you--whether it is access to something that you aren't thinking of, or your crown jewels.
Encrypting databases is sometimes required by compliance, but is no defense against a good attack.
Sure, it increases costs for a certain subset of attackers. Instead of sending easily found and trained young Arab men, they have to put more effort into recruitment. However, in return for that, they get far reduced scrutiny.
Therein lies the problem. It is the real-world equivalent of dropping all packets from a country instead of properly analyzing the packets. You'll stop the low-cost automated garbage attacks, but you won't stop a dedicated attacker, even if the attacker is in that country.
> Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.
There is still some information lost in the process:
- "let eigenvector_coefficient = 23" => "let x = 23"
A de-obfuscator isn't going to be able to recover the valuable information contained in the original name. Will it stop a determined attacker? Maybe not, but it will surely slow them down, as they now need to spend an order of magnitude longer trying to understand what the code is doing.
> Randomizing variable names is just a nuisance; it won't stop any competent pen tester or attacker.
Believe it or not, nuisances are enough to stop some people. A lot of would be attackers are just cruising for low hanging fruit.
Remember, the goal is to "reduce risk" and not "stop any highly skilled targeted/tailored attack". Because let's face it, even if you are the greatest crypto wizard in the world, you will fall victim to a highly sophisticated attack tailored specifically to you.
>Believe it or not, nuisances are enough to stop some people. A lot of would be attackers are just cruising for low hanging fruit.
It is not "some people" that I worry about. I worry about attackers with a level of skill.
As I noted elsewhere in the thread, I have audited obfuscated code, and the obfuscation is only a speed bump. I can only presume that attackers are smarter than I am, so obfuscation is effectively a non-issue. And it is not an order of magnitude. This is another example of a developer thinking that this form of obscurity is of any real value. Reviewing the code will tell you if eigenvector_coefficient is really what it claims to be, or something that morphed into something the developer didn't originally intend.
Also keep in mind that code reviews approach code from a totally different angle than a developer would either developing or during a code walkthrough.
It might make sense in some contexts, but code obfuscation is a great example of where software engineers think it provides security where it provides none.
Developers often have some idealized notion that an attacker is going to need to piece their program logic back together and try to decode the purpose of each obfuscated variable in order to find a hardcoded password/value.
In reality an attacker is just going to dump strings and try them all, or simply set a breakpoint just before the important syscall and let your program do the work. Code obfuscation provides little to no value against these common methods, yet we cannot resist the urge to list it as a bullet point in security meetings, leading to a false sense of security.
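For a sense of how low the bar is, the "dump strings" step is roughly this (a crude Python stand-in for the Unix strings tool):

    import re
    import sys

    data = open(sys.argv[1], "rb").read()
    # print every run of 6 or more printable ASCII characters in the binary
    for m in re.finditer(rb"[ -~]{6,}", data):
        print(m.group().decode("ascii"))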
Exactly. If you're running crypto and think getting rid of variable names is going to stop people, it won't. Any off-the-shelf algorithm is usually easy for an accomplished reverse engineer to recognize, given a basic background in the kinds of things to look for.
I knew nothing about this topic in general, but elsewhere in this thread there was a link to a blog post about obfuscation methods used in a piece of commercial software. One item was a function that detects a breakpoint, obfuscates its boolean return value so you can't tell if it did, and makes the program hang when it does. Pretty neat.
I think your (and my) ignorance of such methods is evidence that they probably are reasonably effective, even though when explained, they're not quantum physics.
Let me give you an example. At a previous job as a devops engineer, one of my predecessors frequently used these techniques, minus the encrypted database, though I'm sure he would have done that too if he knew how. There was some buggy internal app that needed new features added, and the person who wrote it thought he was clever and obfuscated the code. It took me a whopping 30 minutes to churn through his 'clever' obfuscation scheme, and the randomized variable names were just a nuisance. Honestly, his best obfuscation technique was his horrible code that made no sense.
Even OP's advice about running services on non-standard ports isn't sound. Who doesn't run a service scan? Even sites like Shodan do service discovery for you. I'm going to find whatever port you're running ssh on if you're running it.
> I'm going to find whatever port you're running ssh on if you're running it.
I still think it's a good idea. With SSH on port 22, ten thousand bots plus an attacker try to hammer it (so says fail2ban). With SSH on port 9278, zero bots plus an attacker try to hammer it. By throwing away the 99.99% of traffic that is chaff, you can see the remaining wheat you care about.
Changing SSH ports isn't about saying "yep, we fixed it!" and calling it a day. It's about decreasing the amount of stuff you have to deal with, which is quite useful. It's something you can do in addition to everything else that gives a decent bang for its buck. No, it doesn't keep you out, but it does keep out those thousands of bots crawling around looking for an open 22 to pester.
But new people in the industry shouldn't think that the things recommended in the article should be used as a primary defense and are accepted industry practices. Moving SSH to a new port to reduce false security alerts is one thing, having people read that article and walk away thinking this is how we do things is another. We don't.
I didn't take that away from the article at all. It said:
> So let’s talk about security by obscurity. It’s a bad idea to use it as a single layer of defense. If the attacker passes it, there is nothing else to protect you. But it would actually be good to use it as an “additional” layer of defense, because it has a low implementation cost and it usually works well.
I think it's good to do those things in addition to the other stuff. Obscurity isn't sufficient by itself, but is another layer of defense.
In addition to the stuff you should really be doing? That stuff is hard enough for beginners without confusing them with speculation like this that goes against best practices and common sense, especially without clearly explaining the pitfalls and real dangers of each of these hypothetical scenarios. Besides, if you're already using industry-accepted solutions to security problems and someone manages to gain unauthorized access anyway, don't expect any of this amateur crap to offer any real protection at that point.
Huh? How would that work? You have no idea what my port knocking scheme is.
For all you know, you have to knock ports 22, 46, 1776, and 8998 to the timing of "shave and a haircut", switching between UDP and ICMP along the way... Good luck; the entropy you have to overcome is astronomical.
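For context, a plain TCP version of such a scheme in knockd's config looks something like this (the port values are just the ones from the example above; the UDP/ICMP timing tricks would need custom tooling):

    [options]
        logfile = /var/log/knockd.log

    [openSSH]
        sequence    = 22,46,1776,8998
        seq_timeout = 15
        tcpflags    = syn
        command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT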
>Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.
Sure it will. Imagine that your old, unpatched Wordpress admin is at /random-gobbledygook instead of /wp-admin. An attacker would have to hit random alphanumeric directories of your webserver over and over again, hoping to stumble across a specific thing they can attack. This is completely impractical, unless they're somehow clued in that the URL exists.
It's really about making life difficult for an attacker, so much so that they will simply give up, or find an easier target. That can be achieved by throwing up a series of difficult/obscure barriers, each of which makes it less likely you'll be trivially penetrated.
I ran a world-writable off-the-shelf wiki for years. I trivially tweaked the edit URL, visible on every page. But that was enough to break automated spam tooling defaults, so the human spammer might get to see a note pointing out that robots.txt was blocking indexing, and that there was really no reason to waste both our time. The dominant threat wasn't the spammer, but their dumb automation.
> Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.
Then it filters out people who are not using a deobfuscator or are less clever than I.
> Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.
Then it will stop incompetent pen testers.
I don't see how your comment refutes the point made. The point is not that it makes your likelihood of attack zero, it just reduces the likelihood via adding more roadblocks.
> - Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.
But it will stop incompetent attackers - of which there are many. In fact, they are the vast majority.
None of those 'obscurity' techniques will stop a targeted attack. That's not their function. But each of them raises the bar. The more hoops, the better.
Isn't there some model where the keys you use to decrypt the database act something like one-time codes, and you have to use particular credentials to access the key server? The attacker would then need to stay in the network to actually access the data; they couldn't just download the entire database and crack it offline. I don't know how that is actually implemented, but I'm curious. I also wonder how many people put obvious attacker trip-mines in their various systems: say, a fake button labeled "copy image of database to disk" that actual internal employees are told never to click, or even a fake Confluence page about "how to download the database", meant to trip up an attacker who has gained access to your wiki as well as your database. They click that button and the admins get alerted...
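What's being described sounds like envelope encryption with a key server. A rough Python sketch of the idea (using the `cryptography` package; the key server is faked with a local key, and every name here is illustrative):

    from cryptography.fernet import Fernet

    kms = Fernet(Fernet.generate_key())   # stand-in for the key server
    data_key = Fernet.generate_key()      # per-database data key
    wrapped = kms.encrypt(data_key)       # only this wrapped form is stored

    ciphertext = Fernet(data_key).encrypt(b"customer row")

    # Offline, an attacker holding `wrapped` + `ciphertext` still needs the
    # key server (and valid credentials) to unwrap the data key:
    plaintext = Fernet(kms.decrypt(wrapped)).decrypt(ciphertext)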
Overloading names is a good code obfuscation strategy, but it's tricky and, for obvious reasons, best done by an automated tool rather than by hand (unless you like your regular code to present the challenges of BrainF). For instance, depending on your language, you may be able to have a variable, a function, an object, a pointer, a data structure, an index variable, etc. all called just "a".
Making sense out of code obfuscated this way is really hard for humans, but it will compile or interpret just fine so long as your obfuscator obeys the rules of your language. (We started on this at one of my early startups nearly 20 years ago, but didn't get funded soon enough for protecting the IP in our unique JS to matter. It was unique enough that we actually applied for a patent on part of it - drawing a 16-trace live strip chart of data from network sources at better than 4-10 Hz per channel was really hard with the browsers and computers of 2002!)
The database key should be generated per encrypted database and then stored using something like the OSX keychain. The OS enforces that only a given application can retrieve that key (via application code signing).
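With the command-line `security` tool, that flow looks roughly like this (service and account names hypothetical; an application would use the Keychain APIs instead):

    # Store a freshly generated per-database key in the keychain:
    security add-generic-password -s myapp-db-key -a myapp -w "$(openssl rand -hex 32)"

    # Retrieve it later (the OS can gate this on the calling app's signature):
    security find-generic-password -s myapp-db-key -a myapp -w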
Yes, most people can grab some bolt cutters, snip, and bike off. Yet so many bikes remain unstolen with extremely weak locks.
The vast majority of attacks are crimes of opportunity. Hackers aren't generally trying to target a single company or computer for a botnet; they are looking to get as many as possible. Almost any amount of effort above and beyond the typical will cause them to jump past you as a target.
Back to the bike lock analogy. Again, most locks can be bypassed, but getting one that requires an angle grinder will almost certainly ensure that your bike won't be stolen (why steal that bike when there are 20 with simple wire locks?). Add 2 locks and you've got a bike that will almost never be nicked.
Honeypots are fun, but be VERY careful how you deploy them. Ideally they are on a completely separate network on the WAN side of a second firewall. The last thing you want is for someone to find an exploit in your honeypot and use that to gain access to your network.
Security by obscurity is bad. Obscurity alone does not provide much security, especially in a cryptographic setting. It cannot be relied on as your sole protection.
Security and obscurity: if you make something secure and then obscure information about that system from an attacker, that can increase the security. However, obscurity is often organizationally expensive and very fragile. A key can be rotated, but "how something functions" is very hard to rotate.
Maybe the test should be: "is my system considered to be secure even without any obscurity?" If the answer is yes, then add obscurity.
For instance, the port 22 example. Suppose you have a bastion host. SSHD running on port 22, root password disabled, passwords disabled (only SSH keys), no other services running, all other ports filtered/closed. It should be fairly secure, even if exposed to the internet, right?
Now you can change the port. Change the SSH banner and hide the version. Add some port knocking. And so on. None of these measures would work by itself, but they will discourage non-targeted attackers.
For a very specific example, look at the classified ciphers used by the US Gov't TLAs. Why are they classified? Because if they are harder to get info about -- literally obscured -- then it's an additional layer of defense.
Or troop movements during war... Sure, the locations can be figured out, but by not broadcasting locations that's more work for the enemy and thus a bit more secure.
Obscurity is absolutely a key piece of security, because it adds the complexity of discovery.
This is true, but I think it's not really a binary classification; there is a spectrum from useless and trivial obscurity (base64 encoding some "secret") to actually useful obscurity. After all, you could call password authentication "security through obscurity", since you only need to know the correct sequence of characters and your security relies on that sequence remaining obscure.
> Many serious real-world scenarios do use obscurity as an additional layer
It works for the military, for spy agencies, and governments.
If obscurity didn't have any benefit, then the military's latest weapons wouldn't be tested in the Nevada desert, or some remote island; they'd be tested in Illinois, or off the coast of Long Island.
Most programming and IT sayings are grossly misinterpreted. My personal favorite is "premature optimization is the root of all evil," which originally came with a ton of context but today is often misinterpreted as "never worry about performance" resulting in a lot of slow bloated software.
Changing a port adds one bit of entropy. Not being forced to use "admin" as a username adds a whole bunch, but at least one bit. Not being forced to use https://url/admin also adds another bunch, but at least 1 bit.
Of course, if any of these things are known the entropy drops to zero... Just like a private ssh key that gets pwnd.
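Back-of-the-envelope upper bounds, assuming uniformly random choices (real-world choices are far from uniform, hence the "at least one bit" framing):

    import math

    print(math.log2(65536))      # ~16 bits: a uniformly random TCP port
    print(math.log2(26 ** 8))    # ~37.6 bits: a random 8-letter username
    print(math.log2(62 ** 12))   # ~71.5 bits: a random 12-char admin URL path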
All too often I see tickets on open source projects asking for changes to allow better obfuscation, which are then denied using the mantra "obscurity is not security".
They all add bits of entropy to a security and/or threat model that maintainers ignore.
All encryption is "security through obscurity". The parameter space is very large. The key is somewhere in it. You have access to the whole space, but no clue as to where the key is. Good luck finding the key.
> Instead it was originally meant as "if your only security is obscurity, it's bad".
Since all security is essentially "through obscurity" somehow, I would simply reframe that into the onion model. Good security is like an onion, it has many layers. When you only have one layer, that's bad security.
I agree with the principle, but I disagree with the article's example of changing the SSH port as an example of obscurity. Lots of people set up SSH servers on multiple ports, especially in the case of relay servers that provide access to multiple machines through one IPv4 address.
A better example of security by obscurity would be to, for example:
* Flip all the SSH bits or XOR it with some long key.
* Encapsulate SSH inside another protocol, such as websockets over HTTP port 80, or embedded inside what looks to an outsider like cat pictures being sent over HTTP.
* SSH over TCP over Skype video.
Incidentally, any of these methods work well for confusing China's firewall and keeping the SSH connection alive, and would probably confuse hackers as well for a little while. They could all be implemented in a router box that doesn't affect your actual deployment.
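A toy sketch of the first idea (Python 3.8+; the key, ports, and addresses are placeholders, and XOR is pure obscurity, not encryption):

    import socket, threading

    KEY = b"not-a-real-secret"   # placeholder "long key"

    def pump(src, dst):
        # Relay one direction of the stream, XORing with a repeating key.
        # Tracking the absolute stream offset keeps both endpoints aligned
        # even when TCP re-chunks the data; XOR is its own inverse, so the
        # identical relay on the client side undoes what this side does.
        offset = 0
        while chunk := src.recv(4096):
            dst.sendall(bytes(b ^ KEY[(offset + i) % len(KEY)] for i, b in enumerate(chunk)))
            offset += len(chunk)

    # Accept "scrambled" connections on a public port, relay to local sshd.
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))
    listener.listen()
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(("127.0.0.1", 22))
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

The client runs the same relay in reverse (listen locally, connect to port 8080 here) and points ssh at its local end.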
This last year, I found out about knockd, and if that isn't some awesome shit, I dunno what is. Yet, there are plenty of articles saying, incorrectly, how it's awful. It is simply another layer of security on top of everything else you have. Like you said, security by obscurity is more about making it fucking slow, irritating, tedious, and without any sense of reward. "Aha! After only a week, I've figured out you're port knocking! Oh shit... wait, you still totally have the server properly locked down. FML." Because after each "obscure" layer there is a "real" layer of security, and hopefully all those real layers buy you the time to detect and prevent the threat.
Also don't forget that relative effort matters too. Consider "The Club" protection for cars - in a lot, the one with The Club is chosen last to break into just due to its relative difficulty. (Weighted against the potential upside, obviously.)
The port knocking itself may actually be the strongest link in the chain, despite being a measure of obscurity, if the population of targets in your "value pool" is large enough that there are always plenty of others without knocking enabled, since attackers will bounce to those un-knocked targets as soon as they run into your knocking.
> Instead it was originally meant as "if your only security is obscurity, it's bad".
no, not really. what it means is: every important system has attackers trying to exploit it. finding an exploit is a series of hunches while probing the system as a blackbox, and you need just one; meanwhile a defender has to be methodical enough to find them all.
given the differences, obscurity removes the defender's ability to systematically analyze the system, while on the other hand for an attacker it remains as much of a blackbox as it was before.
It is obviously a misinterpretation of the original idea behind "security by obscurity is bad". Same goes for "goto considered harmful", which is not always true.
Although Kerckhoffs's principle is a good way of describing how a secure cryptosystem should behave. This is what people should have in mind.
Obscuring will just add some delay, as you state, but that delay might make the attack irrelevant in many situations.
A simple example would be separating usernames and passwords, having an outer and inner password (think Truecrypt/Veracrypt) or even personal quirks. Again, it depends how much the attacker knows, but even today you can still do the classic "hash my master key with site name" for a password that you wouldn't store anywhere.
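A minimal sketch of that classic trick (details illustrative; a real tool would want a slow KDF such as scrypt instead of bare SHA-256):

    import base64, hashlib

    def site_password(master_key: str, site: str) -> str:
        # Derive a per-site password you never have to store anywhere.
        digest = hashlib.sha256(f"{master_key}:{site}".encode()).digest()
        return base64.urlsafe_b64encode(digest)[:20].decode()

    print(site_password("correct horse battery staple", "example.com"))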
That's not the only problem with obscurity. It not only obscures flaws from attackers, it also obscures them from you and makes a system hard to maintain. In any complex system, ultimately there will develop chinks in your armor that owe their existence to obscurity hacks that were thought clever at the time.
I know someone who would rather store passwords/API keys in the database encoded in a way that is not clear text but is neither encrypted nor hashed, arguing that it's overkill to encrypt.
A lock that keeps an experienced lock-picker out for a few minutes will keep the layperson out indefinitely... Until they grab the bolt cutters. Everything is relative to context.
The only thing I would add is that it also needs to be maintainable - the obscurity should not impede the maintainer's understanding of the implementation.
Sure, security by obscurity slows down bad actors, but in reality not by a significant amount. Often the obscurity you add isn't even where they're looking. You have to go through a certain level of effort to add the obscurity, and that effort is not enough to warrant the insignificant slowdown of the bad actor. You're better off spending that effort improving your real security in other areas. In addition, you're adding complexity that you have to maintain.
It's fine as an additional layer only when the primary layers do not rely on obscurity.
I've seen too many instances where obscurity is used to justify weak primary layers (i.e., "it's fine we're using this single-word shared password since we have all these other layers"). It can often provide a false sense of security, since it looks like a security layer when in reality it often turns out to be a minor inconvenience to an experienced attacker.
There's something to the idea of rehabilitating "obscurity", or at least recognizing that "cost" is part of threat models, and you can raise costs for particular attack vectors by degrees instead of "to infinity".
But SSH is a terrible example, because the cost to the defender of simply not having SSH vulnerabilities is the same, or even less, than the cost of obfuscating it with nonstandard ports, "port knocking", or fail2ban, which are all silly ideas.
Just use SSH keys, and disable passwords.
I think maybe it comes down to this: dialing attacker costs up incrementally can make sense if it's the most cost-effective way for a fully-informed defender to improve security. But incremental cost-increasing countermeasures aren't a substitute for sound engineering; you don't get to count "having to learn stuff" as a valid defender cost.
"But SSH is a terrible example, because the cost to the defender of simply not having SSH vulnerabilities is the same, or even less, than the cost of obfuscating it with nonstandard ports, "port knocking", or fail2ban, which are all silly ideas."
I know who I am arguing with here but port knocking is not silly. It's fantastic.
When I say fantastic, I don't mean it solves all of our problems and obviates any other protections ... what I mean is, for almost zero cost[1] it adds a non-zero level of actual protection.
As a lifelong UNIX sysadmin, it is one of the few totally unalloyed security improvements that I have been able to add to my systems. I believe there are sshd vulns extant that you and I don't know about and port knocking allows me to worry less about them.
I also recommend SMS alerts on successful knocks - alerts that you should never see unexpectedly. This is trivial, by the way, as you can put semicolons in the knock command:
/sbin/ipfw add 01021 allow tcp from %IP% to 10.0.0.10 22 setup ; /usr/local/sbin/timestamped_sms 4155551212 "knock from %IP% - "
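# (knockd expands %IP% to the knocking client's address before running this)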
[1] knockd on FreeBSD, 10+ years, not one hang or crash.
It solves none of your problems and adds complexity and cost to your defense without corresponding increases to attacker costs.
If you believe there are unknown OpenSSH attacks, you can't coherently believe that port knocking is a real defense, since port knocking doesn't do anything to protect the SSH channel that attacks will be carried out in.
Instead, if you're actually worried about OpenSSH vulnerabilities, you shouldn't be exposing SSH to the public Internet at all. I'm not super worried about OpenSSH server vulnerabilities, but I would never recommend that teams leave SSH exposed; they should just hide that stuff behind WireGuard.
Almost zero complexity and cost. Maybe if you're bad at sysadmin work it adds cost and complexity.
>defense without corresponding increases to attacker costs.
It adds a _huge_, almost incalculable cost increase to attackers.
>If you believe there are unknown OpenSSH attacks, you can't coherently believe that port knocking is a real defense, since port knocking doesn't do anything to protect the SSH channel that attacks will be carried out in.
Looks like you don't understand the concept of 0-days. Several CVEs were listed elsewhere. I suggest researching 0-day exploits so you understand how port knocking mitigates them.
Port knocking mitigates 0-days.
>Instead, if you're actually worried about OpenSSH vulnerabilities, you shouldn't be exposing SSH to the public Internet at all.
I don't disagree here; a VPN is a great solution. Nonetheless, for some shops, simple port-knocking on a bastion host solves a lot of these issues and removes the complexity that VPNs add.
>I'm not super worried about OpenSSH server vulnerabilities, but I would never recommend that teams leave SSH exposed; they should just hide that stuff behind WireGuard.
No one is super worried about things like Shellshock, Heartbleed, etc. until they happen.
Port knocking solves a lot of problems, protects you from zero-days, and makes SSH noise a non-issue (huge signal-to-noise gains).
Why not just block SSH access from the public internet and use a VPN? Trivially easy to setup and more secure than knocking.
All it takes is me somehow being able to listen in on your traffic - not even decrypt it - and now I know the knock sequence. I know that you have SSH listening on that server. I know you are actively doing something on it.
vs. a VPN where... all I know is you are communicating over a VPN. With DPI I might be able to determine what type of traffic you're sending, but not where it is ultimately going.
It doesn't add enough to compensate for its costs, which are commensurate with those of VPNs, which provide drastically more return on the investment. But VPNs don't have a cheering section, because they're so obviously useful that nobody has any incentive to make that banal observation. "Port knocking" is idiosyncratic and widely looked down on by security engineering teams, so there's a contrarian impulse that makes them seem worth discussing.
I'm struggling to walk away with a crystallized view of why port-knocking is bad, though.
I do agree, nobody should be going to sleep at night, relying solely on obscurity as their source of protection. But these commenters are offering it as an additional layer of indirection. They're not touting it as _the_ solution, full stop.
At the most basic level, would you refute the claim that port knocking or alternate ports are adding additional friction for an attacker, or no?
Myself, I would prefer to run a simple, (hopefully) set-and-forget daemon on my server if it really did add an extra layer of obscurity to my secured SSH service.
I guess I just fail to see why it's one against the other.
Foremost, there is an opportunity cost to setting it up. The time you spend setting up port knocking could be spent setting up another form of security. I believe it is a sound argument to say that a VPN provides more security at a similar level of effort. With no public SSH, an attacker cannot learn that SSH is running on the server from a port scan, because it simply isn't listening. It also lets you reduce the attack surface: you can add more and more servers that you need to SSH into, but you are only allowing public access via your VPN, so you have fewer potential ingress points and can ratchet up your security and auditing commensurately. And if your VPN concentrator is owned, you should have been setting things up so that nothing implicitly trusts someone just because they were on the VPN, so you still have all of your usual measures of security in place.
In that case, there's just not much point. You could also enable port knocking, but I don't think it provides much benefit.
That brings us to the next part. Port knocking is a "weird" thing. It's idiosyncratic and not standardly used. Documenting it and understanding it is additional overhead, and it's something you have to manage and worry about on every server that's using it. Additionally, both standard and SPA implementations are vulnerable to man-in-the-middle attacks, though most SPA-based implementations will require an active MITM that blocks the initial packet rather than just replaying a knock sequence. So: extra complexity, less security, and an oddity on the network that you have to document and explain to new team members, etc.
If you're a single person managing a single server, well, honestly you're probably fine just turning off password auth. And you can feel free to do port knocking and whatever else. It probably doesn't matter.
It sounds like port knocking and VPNs, while starkly different in design, have some overlap in their approach to threat mitigation.
Wireguard et al are much better equipped to handle the needs of an organization, while port knocking's value trends to smaller teams, or even individuals.
I wouldn't want to manage knock rotations for 600 employees, for example.
You're asking for my opinion, and that's all I can relate, but here's my ranked ordering of things likely to have RCE vulnerabilities, from least to most secure:
* A Java, Python, or Ruby app server
* OpenVPN
* Stock nginx
----- starts to get really unlikely right here ----
One crude first-order comparison is to look at the relative size of the code: as a back-of-the-envelope metric, more code is more likely to contain more vulnerabilities.
I guess my point is largely: I can set up a VPN in a roughly similar timeframe to setting up port knocking, and it has roughly similar overhead for the end user, but the VPN gives me significantly more security while also solving the same issue port knocking does. In that case, why not just set up a VPN instead of port knocking?
I will again agree with you that the VPN is a more robust and more complete protection. You are correct.
I think the reason I continue to prefer (and evangelize) port knocking is that the intersection of (modest) security gain and simplicity/robustness hits a sweet spot for me.
Again, 10+ years in production on many hosts, worldwide, and never so much as a blip. If knockd were to fail, it would fail in a very boring way. VPNs, on the other hand, are far more complex, and they fail in fascinating ways.
I am a sysop turned sysadmin - this is my life's work. I prefer simple, unixy tools that fail in boring ways :)
Simple systems tend to fail in boring ways. Complex systems tend to fail in interesting ways. Learning more about a complex system, while rewarding in many ways, will not change that.
It's been my experience that whether a system reads as simple or complex is less an intrinsic trait and more a matter of subjective perception, familiarity, and experience. Your experience is clearly different.
My daily bread and butter is VPNs, but I must admit that I think there may be a truth here.
While I fully agree that port knocking doesn't provide the same layer of protection or flexibility a VPN does, with the original article in mind: if your reason for deploying a VPN is that you fear exposing unknown bugs in sshd to the Internet, the same could be said about every VPN solution.
Therefore port knocking is (or would be) indeed more elegant, because:
- it makes no promises to be secure (as in: as secure as a VPN)
- one could argue that if you use it, you know port knocking is just an additional security layer, and maybe you don't get lazy the way you might with a VPN
- a misconfiguration, a bug, or an attacker might expose sshd on your hosts, whereas a misconfigured VPN, at least in a somewhat sizeable deployment, can lead to countless attack surfaces
Having said that, this only works if the rest of the sshd security is in check and your password isn't hunter2.
I think I get where you're coming from here, but I don't fully agree.
>if your reason for deploying a VPN is that you fear exposing unknown bugs in sshd to the Internet, the same could be said about every VPN solution.
Yeah. You might have a VPN zero day - but then you still have to get into the other SSH servers. Two zero days simultaneously active for openssh and your VPN solution? Pretty unlikely, especially public ones. Someone burning two private zero days on you means you're an incredibly high value target and neither of these would suffice as your sole defense to begin with.
The rest of your argument, if I'm understanding it correctly, is that you think people will get more lax with securing SSH on a box only reachable via VPN than if it was reachable by port knocking? It's possible, but I don't know that the evidence really shows that - lots of comments on this article are along the lines of "i set up port knocking and I've never even seen a malicious ssh connection attempt since then!" - no details of the rest of the security measures they've got in place.
And yeah, going from 'I set up a VPN to connect to my web servers via ssh' to 'I have VPN access to a whole network with all sorts of things running on it' is a big step up, but I don't think it's really in the boundaries of this discussion. Port knocking was never going to be a replacement for a larger VPN deployment, and when you're opening up network access to a wider range of things then how you approach things definitely needs to change.
First of all, I agree with you that we should compare solutions in a comparable manner, and I went overboard.
So yes, if we want to be fair, we have to compare an in-host defense system like port knocking (which has one job: secure sshd) to an in-host VPN setup more like the often-mentioned WireGuard.
And in this "configuration" I completely agree. I still think a VPN may be more likely to expose security-critical bugs than knockd is, but as you said, that should only grant access to your next layer of defense (namely sshd). And if you're a really valuable target, a three-letter agency might throw all their resources and every weaponized exploit they have at you, in which case you're even more correct, because they would have a far easier time intercepting your port knock sequence and then throwing all their quantum computation power against your sshd keys.
> The rest of your argument, if I'm understanding it correctly, is that you think people will get more lax with securing SSH on a box only reachable via VPN
The argument I was trying to make is that while a VPN is in every way a really good idea (the way we described it here, as an in-host security layer), I have yet to see it being rolled out that way.
I come from a more traditional sysadmin setting, and most sysadmins I worked with would find implementing this "correctly" too tedious and would either
a) terminate the VPN connection at the rack or co-location "border" and shove a bunch of servers down a single VPN connection, or
b) terminate every server's VPN connection at a single VPN concentration point.
Regardless of which, in virtually all cases that I know of, no thought was ever given to intra-VPN firewall rules or to allowing only certain ports on the VPN. Most of the time you take the servers that are somewhat related, shove them in a subnet, expose that subnet via VPN, and you're golden.
And so, from my practical experience, I would think that a compromised VPN in my reality would be worse than an exploited knockd, but only because it isn't scoped to the same level.
On a side note: I'd guess that modern orchestration tools make it pretty easy to roll out knockd and/or WireGuard in the discussed fashion - it's just that I don't get to play with those.
That was a lot of text, just to say I agree with you - but hey, I guess agreeing on something on the internet is somewhat nice so have a great day.
No, a VPN isn't magic in and of itself. And yes, I would suggest wireguard for the simplicity and performance these days.
I do agree with the author of the original article that security should come in layers.
Once something is secured with SSH and a VPN, you've got that many more actual layers: you now need a CVE that allows access, or a credential leak, for both the VPN and SSH. (And many of those CVEs don't necessarily allow a random attacker to arbitrarily gain access.)
https://news.ycombinator.com/item?id=24446919 has my list of what the bare minimum SSH protections should be for anything where you are storing customer/user data in my opinion, as well as additional best practices that I have employed.
Can a theoretical attacker intercept a port knocking sequence? Maybe. Would a script kiddie running a new ssh 0day against the entire internet be able to do this? No.
If it's your private pet server - sure. In larger networks you have to document the access, manage the allowed ports on the network, configure security groups or equivalent on instances, provide alternative steps for people with unusual clients (for example database UI app proxying over SSH), etc. The cost suddenly becomes very non-trivial.
> I believe there are sshd vulns extant that you and I don't know about and port knocking allows me to worry less about them.
That's interesting, that's the first time I've heard a justification for port knocking that actually makes sense to me.
I'm curious for others' thoughts here -- are non-public vulnerabilities something you consciously try to mitigate? So that, for example, using 2 different 8-character passwords that are implemented with different technologies, is therefore fundamentally more secure than a single 16-character password? Precisely so that a vulnerability in one is still protected by the other?
To me this feels like it's really only applicable if you need to protect your data from hostile governments targeting you specifically, who might actually have zero-days they have weaponized.
However, if you're just trying to protect yourself from everyday hackers or even targeted corporate espionage, is unknown vulnerabilities really something that's realistically worth protecting oneself from? (Assuming you're always installing all security patches.)
I agree. I think this comes down to the Mickens Security Threat Model. Your adversaries come in basically two forms: Mossad and Not-Mossad. If your adversary is Mossad, you've already lost; if a governmental actor wants your data badly enough, they'll get it. If your adversary is not-Mossad, they almost certainly don't have access to any secret zero-day exploits; stay up to date on patches and use good passwords and you'll be fine. Port knocking will almost certainly protect you from not-Mossad, assuming your adversary doesn't know that you're using it.
Sure, a small percentage of adversaries are in neither category, and a random hacker dedicated to hitting your specific server may suspect port knocking and could try to circumvent it, but most companies don't have an adversary like that, and even if they do, you've made it harder for them for a small cost.
I love that article, but this comment beautifully illustrates the problem with it, because unless you believe "19 year old with better-than-normal tooling" counts as "Mossad", it has totally screwed up your perception of the threat model.
CVE-2001-0144 - SSH1 CRC-32 compensation attack detector allows remote attackers to execute arbitrary commands on an SSH server or client via an integer overflow
CVE-2008-0166 - OpenSSL 0.9.8c-1 up to versions before 0.9.8g-9 on Debian-based operating systems uses a random number generator that generates predictable numbers, which makes it easier for remote attackers to conduct brute force guessing attacks against cryptographic keys.
I had a machine almost get compromised by the first vulnerability (noexec on /tmp broke their script).
When the second came out, I was using non-standard ports and/or port knocking. Despite having vulnerable keys, I was safe until I could upgrade.
If an SSH RCE 0day were released:
* Every "Just use SSH keys, and disable passwords" box sitting on the internet with ssh on port 22 would get compromised within hours.
* The boxes using fail2ban would get compromised within hours.
* The majority of boxes on non-standard ports would likely be OK, at least for some time.
I think the fact that you had to list the 20-year-old SSH CRC compensator vulnerability to establish the untrustworthiness of SSH is telling; very few pieces of software have OpenSSH's current track record. I would cite the same 2 vulnerabilities to suggest that SSH is as trustworthy as almost any other piece of software you can run.
Having said that: I don't like exposing SSH services either! Which is why I try to keep them behind WireGuard, at least on prod networks that I care about.
In contrast to an actual VPN, port-knocking and (heh) nonstandard SSH ports shield you only from casual attackers; both give a middlebox attacker all the access they need to launch the attack.
Only the 4th point is really true: if you run SSH on a non-standard port but it's otherwise accessible, you'll still see scans on a regular basis.
Port knocking isn't a terrible idea but I generally prefer locking down the networks (or, these days, using AWS SSM / GCP IAP to avoid listening publicly at all) since having something on the internet means you're just one mistake away from problems and need to staff monitoring accordingly.
The other thing to remember here is that we're talking about one general CVE in two decades. Almost any other running service has been much worse so while SSH is important to protect I don't know that I'd make the argument that further pushing that one service is really the best bang for your buck.
> Only the 4th point is really true: if you run SSH on a non-standard port but it's otherwise accessible, you'll still see scans on a regular basis.
Possibly... It does depend on the port. 222 and 2222 are often scanned along with 22. 2200-2299 is probably common now. I was using 2221 for a while, but after a few years it started seeing some auth attempts.
I mostly watched entire /16s, not single hosts. The scan patterns for a large netblock are very interesting. It takes as much effort to scan the entire internet on port 22 as it does to scan all ports on a /16 (about 2^32 probes either way), and attackers simply do not do that.
The benefit of some of the port knocking systems is that the attack surface is almost nothing and they are easy to audit. I used it a few jobs ago on my management system/bastion host. I couldn't rely on the VPN since I was the one that managed the VPN, so I needed a way to securely login remotely that did not go through the VPN, and did not end up having sshd exposed to the world.
Not in my experience; I would even say that full-range port scanning is extremely rare. Botnets (again, in my experience) seem to be interested only in vanilla installations and will test standard ports exclusively. But of course, if you are in charge of some very tempting target (e.g. a cryptocurrency exchange), your experience will be totally different from mine.
No, safer. It is very well possible to brute-force port knocking, or to eavesdrop on the knock ports, since that information is not encrypted. Is it harder? Of course, a lot. But if you think scanning 65k ports on each host on the internet is reasonable, then defeating a port knock is very much so, too.
> It is very well possible to brute-force port knocking
It's incredibly unlikely - there is probably more chance of the sun imploding tomorrow. And if you're the type to install port knocking, you've almost certainly also installed something like LFD, which will temporarily block IPs for port scanning.
Also, without inside information, how would you even know that a server was using port knocking?
And there are roughly 1267650600228229401496703205376 port 22's in the IPv6 space - I've subtracted a few for reserved and unassigned spaces, but at this scale a few orders of magnitude hardly matter.
Here, for comparison:
281474976710656 - total ports in IPv4 space
18446744073709551616 - 4 port combinations
1267650600228229401496703205376 - my estimation for 22 in IPv6
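For anyone who wants to sanity-check the arithmetic, those figures are just powers of two:

    print((2 ** 32) * (2 ** 16))  # 281474976710656: every port on every IPv4 address
    print((2 ** 16) ** 4)         # 18446744073709551616: 4-port knock combinations
    print(2 ** 100)               # 1267650600228229401496703205376: the IPv6 estimate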
And if you don't block the knocking when receiving traffic on another port, brute-forcing gets quite a bit easier. I mean, it's still unreasonable. But my point is, when we accept a 0.001% chance as possible, I don't think we can say that 0.000001% is impossible - just a lot less possible ;)
18446744073709551616 is 2^64, i.e. 64 bits of entropy. Let's simplify: you're trying to guess a number in a 2^64 space, and you can't guess in parallel. Reasonable constraints on the server side (i.e. limiting tries on the combination per hour before suspending ssh for a while) may also have been implemented.
I’d say cracking that is… infeasible.
That also assumes you even know of the existence of a server on which ssh sits behind an unknown port-knocking combination of length 4.
The actual chances for guessing the 4 port combination are closer to 0.000000000000000000001%, about as likely as winning the lottery three times in a row. If you're trying to brute-force me with those odds, I'll take my chances.
There is a vastly higher chance of there being exploitable bugs in port knocking tooling than there is of there being exploitable bugs in SSH. You are adding extra exposure and gaining nothing.
I use SSH keys, and disabled passwords. However when I was running SSH on port 22, the number of attempts was slowing my machine to a crawl at times.
Moving the port to some obscure random one cut the number of attempts from several thousand per hour to a few per day. Definitely an improvement by any measure: suddenly you can analyze the attacks if necessary.
I run fail2ban on top of it, because why not? In case someone would attempt to really target my system, any obstacle is good to take. And who knows what ssh vulnerabilities exist; any protection is good to take.
I gotta wonder - how in the world can you ever get enough failed SSH login attempts to noticeably affect system performance?
I usually have several cloud servers running with a normally secured SSHD running. There's some failed login attempts yeah. I've never seen even 1% CPU usage from them. I doubt even posting my server address on every hacking forum I could find and daring them to try and hack me would result in getting enough failed SSH login attempts to blip my CPU usage. I have no idea how that could even happen, aside from somebody intentionally targeting your server with a really weird attack for whatever reason.
Actually, fail2ban takes care of that for me. Anyway, the important part was keeping my home PC from crawling and its disk from filling with failed-connection logs because of the deluge of bot requests. In short: avoiding being DDoS'ed.
It's not really a direct security advantage, so this is mostly off-topic, but changing the default port does greatly reduce log noise, and theoretically could be a bit less taxing for your network connection or CPU if it's a cheap server not intended for publicly hosting services. (If it is then the traffic would be a drop in the bucket compared to regular production traffic, though. And it's admittedly probably a drop in the bucket either way.)
Reducing log clutter alone probably does confer some small indirect benefit, since it's less likely a more sophisticated attempt or successful breach would go unnoticed when inspecting logs. (Assuming there's some SIEM log forwarding or that it's not a situation where an attacker was able to or wise enough to wipe logs.)
I think a lot of the people in this comment thread are missing the point when using the `sshd` example. There is no single infallible way to secure ssh, but there are a lot of things that can be done together to make it pretty darn hard to hack, and most of those countermeasures have some degree of 'obscurity' to them.
Example:
* Use RSA keys instead of passwords -> This will eliminate most risk, except for exploits in sshd itself,
* Change the default port from 22 to something in the 40k+ range, which will keep you from being scanned, and
* Whitelist IP addresses that can connect to port xx on your server -> This will eliminate 95% of remaining risk
* Using a 'clean' bastion server to access other systems via agent-forwarding, preventing malware on admin workstations from being able to propagate over SSH.
So no, you're never going to be 100% secure; that's just unreasonable. But like you said, the cost can be increased to the point that all but the most determined state-sponsored APT groups will give up.
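To make the list above concrete, a minimal sshd_config sketch (all values illustrative):

    # /etc/ssh/sshd_config
    Port 48222
    PasswordAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no

    # The IP whitelist usually lives in the firewall instead, e.g.:
    #   iptables -A INPUT -p tcp --dport 48222 -s 203.0.113.0/24 -j ACCEPT
    #   iptables -A INPUT -p tcp --dport 48222 -j DROP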
>* Change the default port from 22 to something in the 40k+ range, which will keep you from being scanned, and
I'm replying to these suggestions all over this item because I think it's important, so I apologize if you've since seen this comment elsewhere, but:
This introduces new security risks. Non-privileged users can bind on ports in the 40k+ range and cannot bind on 22. If you restart sshd for a software upgrade or some other reason, or the iptables rules you're using to remap the ports get flushed, the malicious non-privileged user can now bind to the port people were communicating with your sshd on, and if they ignore the host key mismatch, everything they send can be captured by the malicious user.
Older openssh clients have default configurations that can result in the leak of the whole private key, if you use password auth or 2FA they can outright steal those, perhaps their fake sshd will do more than just steal credentials and will actually mimic a shell and let them gain more understanding of how the system ticks, etc.
Is this level of attack something most people are going to run into? No. But neither is an attack more sophisticated than brute force password attempts. It's definitely information people should be keeping in mind when making these sorts of decisions, too.
This is significantly better than just changing the port the daemon listens on, for sure.
There's still public access to SSH, so you're still at risk from a zero day, weak credentials, etc., so I don't think it's quite to ideal levels where you are employing a VPN, disallowing all public access, etc., but at least you're not introducing new potential attack vectors :)
The level of security is cumulative. You do not trust a connection just because it's connected to the VPN. So if your VPN concentrator is compromised via 0day, the only access they get is the same as if things were listening on the public internet.
To gain access to the server via SSH they now need both a way in to the VPN and a way in to SSH, vs. just needing a way in via SSH.
It doesn't do much if someone just gives up the keys for the VPN and SSH, but it would mean that you would need two simultaneous exploits for the VPN and SSH to gain access.
I'm not sure I necessarily understand your argument, so my apologies if I'm off here.
In scenario 1, you do not gate access via VPN. Things are accessible via the public internet.
In scenario 2, you do gate access via VPN. Things are not accessible via the public internet. Someone compromises the VPN. They now have as much access as if there was no VPN and things were accessible to the public internet.
In scenario 2, you are more secure than in scenario 1 until the VPN is compromised. You are then just as secure as you were in scenario 1.
If you are not restricting access to a VPN in the first place, how would compromising a theoretical VPN result in greater access?
In the setups I've seen, once you've connected through VPN you're essentially on the LAN. If you compromise the SSH server, then you're also essentially on the LAN. Yes with the VPN you still have to compromise the server running the SSH service if that's the machine you want access to, but inside the LAN you now have a much greater attack surface.
Of course if the setup is VPN -> firewall -> SSH to make sure only the SSH is exposed through VPN, then I agree you'd be more secure with VPN+SSH.
But without the VPN, you're already the equivalent of on the LAN because all of these services are exposed to the public internet.
In the discussion we're having, we're going from a setup where there is no equivalent to a private network because everything is public, to having a private network that only allows you access to the things that were previously public.
No because I have a firewall in front of the SSH, as mentioned. I would assume a firewall is in front of the VPN as well of course.
So either only SSH is exposed to the public, or only VPN is exposed. Without an additional firewall after the VPN, how is my LAN more protected with the VPN vs SSH?
Your goal is to protect SSH, not the VPN network. The VPN network is just a tool for protecting SSH.
With your configuration, all that needs to exist is an SSH 0 day to gain access to the server. With a VPN, they need that AND a 0 day for the VPN software to gain access to the server.
You can have a more complex setup with a VPN, but that isn't the discussion here - the discussion is securing SSH. If you want to provide VPN access to an array of other services, or as access to a corporate LAN or similar, then that's another conversation that has to involve the specifics of those services and that configuration. It's not what is being recommended here.
Fair enough, guess I was restricting my view to my bubble. For a single server sure defense in depth should work, assuming you're not running the VPN on the same box.
I'd suggest using jump hosts (-J or ProxyJump) rather than agent forwarding to a bastion host. IIRC the latter gives the bastion host access to your keys.
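In ~/.ssh/config that looks something like this (hostnames hypothetical):

    Host internal-db
        HostName 10.0.12.5
        ProxyJump bastion.example.com
        # Unlike ForwardAgent, the bastion only relays the encrypted
        # connection; it never gets to use your keys.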
Each security measure has a value and a cost. Keys over passwords provide by far the best value/cost ratio. Using obscure ports or port knocking or whitelisted IPs are relatively clunky mechanisms that are more expensive and obscure your security posture as much to yourself as to adversaries.
This is absolutely true, but in some ways this is more about reducing the number of 'attempted connections' in the sshd log. Meaning, any failed connection that is recorded (and ideally shipped off to a centralized log system) is in some way actionable. Opening up port 22 (with keys) will still create tonnes of alerts from any SIEM.
The other thing to consider is that there could be exploits in OpenSSH itself. There hasn't been a truly critical vulnerability in a very long time, but low severity or non RCE vulnerabilities aren't exactly rare: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=openssh
Neither changing the SSH port nor using IP source address filters constitute serious countermeasures; they complicate systems and offer little return on the investment. Don't bother. If you're worried enough to change the SSH configuration, set up WireGuard.
Heartbleed caused data leakage in the handshake phase of the protocol. I don’t know if SSH was affected, but there’s no reason why a similar exploit won’t be found for SSH in the future, and trivially obscuring your SSH port protects you from 99.9+% of automated attacks, possibly buying you time to patch or mitigate.
I could be wrong, but if you are using public/private keys to authenticate to ssh, then even attacks that can listen in on the connection would be limited, because the private key is never transmitted, unlike a password.
With heartbleed, a bug in the implementation of the protocol led to the server randomly leaking contents of the server’s memory, which could be anything from private keys to user or system passwords to other confidential information. No passwords or MitM was required. You can read more at heartbleed.com
And it still doesn't matter, because sshd literally never has the private key that allows access. If a server only allows access via SSH key, you could literally have a complete RAM dump of the whole system and not be able to access it.
Though it would be a tragicomic shame if you got caught by a nasty 0-day while the clown up at port 34015 narrowly escaped and earned enough time to patch before pre-mapped host scans began.
One advantage of putting ssh on a non-standard port is that your logs, which are otherwise filled with automated ssh break-in attempts, now become almost empty. It's much easier to look for other problems when the signal to noise is increased.
I’m not sure there is any such good example though. Every obscurity control I’ve ever seen has imposed costs upon the users, administrators, engineers... but I’ve never seen one that I would rely on to improve security posture in any meaningful way.
I’ve certainly never seen an obscurity control that was worth its opportunity cost. I can think of dozens of actually useful controls where even a marginal improvement in operational performance would be worth more than every conceivable obscurity control combined.
> There's something to the idea of rehabilitating "obscurity", or at least recognizing that "cost" is part of threat models, and you can raise costs for particular attack vectors by degrees instead of "to infinity".
Exactly! Especially when you can create a large cost asymmetry: low cost for you to add, high cost for the attacker to bypass.
Agree that the SSH examples aren't the best. I would have picked DRM.
I agree that changing the SSH port may not be the best example of a low cost measure, since bypassing is also low cost.
I would like to see a list of suggestions of "low cost" ways to obscure systems that are (relatively) harder to counteract. But I guess as soon as anyone publishes such a list then hackers will start checking for them.
>This just shows how ignorant you (and most) are on the topic of port knocking.
You, uh, do know who you're replying to, right? https://sockpuppet.org/me/ if not - I don't mention this to go "lol he must be right because of who he is", but calling a well respected security researcher with plenty of real world street cred ignorant is a bit much.
>SPA port knocking is cryptographically secure and does not suffer from replay attacks.
SPA port knocking doesn't suffer from passive replay attacks, but it does suffer from block and replay attacks. An active MITM can still get you.
His suggestion hasn't been "if you care about security just don't do port knocking", his suggestion has been "if you care about security just throw up a VPN it'll be more secure and just as much work"
>Wrong. SPA does not suffer from any MITM attacks.
Care to elaborate? Not even fwknop documentation claims to be secure from all mitm attacks:
>Automatic resolution of external IP address via cipherdyne.org/cgi-bin/myip (this is useful when the fwknop client is run from behind a NAT device). Because the external IP address is encrypted within each SPA packet in this mode, Man-in-the-Middle (MITM) attacks where an inline device intercepts an SPA packet and only forwards it from a different IP in an effort to gain access are thwarted.
If I'm MITM'ing you from the same Starbucks or am otherwise behind the same NAT as you, I don't care if you've got the IP encrypted in the packet when I forward it on.
>Not the same amount of work, so no, wrong. If I had a dollar for every billion-dollar unicorn that didn't have a corporate VPN, I'd have a lot of dollars.
There's not enough billion dollar unicorns out there to actually have a lot of dollars, even if 100% of them lacked corporate VPNs :D
Regardless, you don't even need a full on corporate VPN. You can throw up a tiny VM for your VPN in the same private subnet as your servers, only listen on 22 on the private IPs for the servers. You can do this in less than an hour with Wireguard. Super easy.
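A minimal sketch of that setup, assuming wg-quick (keys and subnets are placeholders):

    # /etc/wireguard/wg0.conf on the VPN VM
    [Interface]
    PrivateKey = <vpn-vm-private-key>
    Address = 10.99.0.1/24
    ListenPort = 51820

    [Peer]
    # your laptop
    PublicKey = <laptop-public-key>
    AllowedIPs = 10.99.0.2/32

Then have sshd on the servers listen only on their private addresses.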
>Care to elaborate? Not even fwknop documentation claims to be secure from all mitm attacks:
You made the claim. You prove it with documentation.
>If I'm MITM'ing you from the same Starbucks or am otherwise behind the same NAT as you, I don't care if you've got the IP encrypted in the packet when I forward it on.
That is by definition NOT a MITM attack.
>There's not enough billion dollar unicorns out there to actually have a lot of dollars, even if 100% of them lacked corporate VPNs :D
The example is only billion-dollar ones. If I include $10m+ ones, I'd have enough dollars to buy a new laptop ;D!
>Regardless, you don't even need a full on corporate VPN. You can throw up a tiny VM for your VPN in the same private subnet as your servers, only listen on 22 on the private IPs for the servers. You can do this in less than an hour with Wireguard. Super easy.
You just described a bastion host, and port knocking makes sense on those as well LOL. Wireguard currently only supports UDP, which can be and has been a limitation in the past.
>You made the claim. You prove it with documentation.
I... er, did?
>That is by definition NOT a MITM attack.
You're intercepting the packet and blocking it by being in the path.
>You just described a bastion host, and port knocking makes sense on those as well LOL. Wireguard currently only supports UDP, which can be and has been a limitation in the past.
Bastion hosts are generally SSH/RDP/VNC type affairs. SSH in to the bastion and then you have access to the other servers. This is actually how I set things up in production environments - the VPN concentrator only allows access to the jumphosts, and then there's extensive logging and auditing there.
I'm not sure why Wireguard only supporting UDP would be a problem - you can pass whatever type of traffic inside of the tunnel.
You... Ugh... Didn't? You claimed that it suffers from a MITM attack. You are not able to prove that it suffers from any MITM attack (the docs specifically outline a way to mitigate one specific MITM attack, but do not outline any others). Unless you have a source that states otherwise, you're wrong.
>You're intercepting the packet and blocking it by being in the path.
Wrong, that is by definition not a MITM attack.
>Bastion hosts are generally SSH/RDP/VNC type affairs. SSH in to the bastion and then you have access to the other servers.
Correct, and you set up port knocking for these. Thanks for proving my point.
>This is actually how I set things up in production environments - the VPN concentrator only allows access to the jumphosts, and then there's extensive logging and auditing there.
There should be extensive logging and auditing on the bastion host. Port knocking reduces the noise to effectively 0.
>I'm not sure why Wireguard only supporting UDP would be a problem - you can pass whatever type of traffic inside of the tunnel.
There have been multiple instances where UDP has been blocked at sites in the past. Looks like you're ignorant of this. Look up why OpenVPN supports TCP.
It seems to me that the article is missing a few points about what "security by obscurity" means.
From Wikipedia: "reliance [...] on design or implementation secrecy as _the main method_ of providing security [...]"
So, to use the model mentioned in the article, a single slice of cheese. It's not "an additional layer of defense", it's the main one (so you have other... weaker layers? ¯\_(ツ)_/¯)
Second, "reliance on secrecy of design and implementation" is different from "reliance on secrecy of _whatever-else_", because design and implementation are most often either easily discoverable (sure, occasional skids might not scan port 64323 but what about someone who can observe your traffic?) or pretty much guaranteed to be discovered by adveraries with (not even as much as one might think) time and motivation.
Third, some of the examples mentioned (e.g., the decoy cars) are not even security by obscurity, that's called deception.
So, sure, you can do non-standard stuff to make it harder for _some_ to discover your vulnerabilities (an ssh non-standard port is actually a good thing given the massive amounts of bots around), but that should never be your only (or your main) layer of defense.
Security by obscurity is not underrated, by definition it's just bad.
This is actually the misunderstanding that the author is talking about:
People commonly misunderstand the concept and assume that obscurity is a bad practice in general, even when used as a secondary layer. It's not uncommon for junior engineers to object to any level of obfuscation or obscurity because they can imagine a scenario where a sufficiently skilled attacker can defeat it, but that's missing the point. Slowing down your adversaries and weeding out the low-effort attacks is still valuable.
> So, sure, you can do non-standard stuff to make it harder for _some_ not discover your vulnerabilities (ssh non-standard port is actually a good thing given the massive amounts of bots around), but that should never be your only (or your main) layer of defense
That's exactly what the author says in the article. I don't see where the author is disagreeing with what you said.
I would go one step further and say that engineers commonly underestimate the volume of low-effort attacks that will pour in at scale. Some of these, such as brute-forcing or DDoS, can be disruptive to users unless you have perfect rate limiting (which you won't at first). Adding layers of obscurity before attackers can authenticate with and interact with core services can dramatically reduce the volume of these low-effort attacks. The skilled attackers tend to be more surgical.
> This is actually the misunderstanding that the author is talking about
I don't think so. It's like when someone confuses encoding and encryption because both "hide the plain text", they're just not the same thing no matter how you choose to view it.
> People commonly misunderstand the concept and assume that obscurity is a bad practice in general, even when used as a secondary layer. It's not uncommon for junior engineers to object to any level of obfuscation or security because they can imagine a scenario where a sufficiently skilled attacker can defeat it, but that's missing the point.
So how about teaching these people instead of accommodating the misunderstanding and its unforeseen consequences? I say this because the point of spreading "security by obscurity is bad" relates directly to people, who tend to think of security as a binary thing (it's either secure or not secure), with the consequent misconception that if there is something in place then it's secure.
> Slowing down your adversaries and weeding out the low-effort attacks is still valuable.
Definitely yes. But that is not security by obscurity. I always liked the safe analogy: a safe is good if you still need the key to open it even when you have its blueprints, a huge number of models to play with (opening and closing them with their own keys), and the time to take all of them apart. Among security professionals and engineers, I think those who have dealt with math and cryptography get this the most.
> That's exactly what the author says in the article. I don't see where the author is disagreeing with what you said.
The part where he bases the article on ignoring the definition of the subject. If one reads the article with the definition in mind, then the whole article disagrees with this.
I don't mean to be pedantic on definitions but I think there are good reasons for this one that are just being ignored.
I agree with your understanding of security by obscurity. But I think the popular understanding has started to be more along the lines that the author counters. That if any security measure is "obscurity" then don't do it, it's bad.
This is the problem with "sound bite" bits of conventional wisdom. The more it's used and misused, the less it's actually understood. People hear "Security through obscurity is bad" and just interpret those words however they want without listening to the rest of the actual advice being given.
It's not to never use obscurity to your advantage. It's that you need to be aware (and far too many weren't at one time) that you cannot rely on obscurity as a form of defense.
If I have an old, buggy unpatched version of an admin page sitting on an obscure random URL, but I decide I don't need to bother patching it because eh, it works and it's too much effort to patch and what are the odds someone will guess my super secret random URL, then I need to think about why "security through obscurity" is bad.
And that false sense of security is why they say security through obscurity can be worse than no security at all. The key is to promote vigilance and actual hardening of systems, and not to expend precious time devising ever more elaborate obscurity hoops that cost you more time and effort than they cost an attacker to defeat, and are in the end ineffective.
That's not to say you should leave ssh listening on port 22, you really shouldn't. It's like the difference between leaving your front door unlocked and leaving it unlocked and putting up a big neon "Open 24 Hours!" sign in the window.
>ssh non-standard port is actually a good thing given the massive amounts of bots around
Except that if you use a port above 1024 (like the author does) you no longer have any assurance that a privileged user launched the process. Any non-privileged user on a Linux system can bind to ports 1024 and up. So all it takes is sshd restarting after an update (if it's directly listening on a high port), or iptables rules getting reloaded (if they're being used to forward traffic from a high port to 22), and an attacker can have their own credential-collecting service running where you think sshd is. Then all it takes is someone ignoring the host key mismatch error to give up your good creds to the attacker, and now they have more access into your infrastructure.
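You can demo the privileged-port half of this yourself as an ordinary user (a sketch; the -p flag is traditional/GNU netcat, BSD netcat omits it):

# binding a high port as non-root just works
nc -l -p 2222
# binding a port below 1024 as non-root fails with a permission error
nc -l -p 222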
But that's putting the second line of defence (against an on-system attacker) above the first, which is to not get someone on the system in the first place. It definitely makes sense in some setups, but for your general purpose web server a high port is probably the right trade-off.
How many remote code execution exploits have we seen on web applications? Many thousands.
But you don't need to make this trade off. SSH does not need to be open to the internet to begin with - make it only accessible via VPN. Now you don't have log spam from botnets in your logs, and it won't show up on a port scan that has fingerprinting enabled and scans the whole range, and you won't be as vulnerable if some sort of sshd exploit comes out that allows you to bypass auth.
Oh, don't get me wrong, I fully agree with you. My point was only that if you see it as an either-or-scenario[0], the higher port is probably the better choice.
[0] Might be true for something like a bastion host, which must expose ssh or VPN (which would be the same scenario).
Never thought about this before, but is this a tunable thing in the kernel config? Some way to signal to the OS "only use port ranges above 16382 for unpriv" and move the boundary up?
On newer kernels there actually is a sysctl for this (net.ipv4.ip_unprivileged_port_start); on older systems you'd have to modify the kernel source. Either way there are probably lots of unintended side effects to this - your setting is alright for most Linux distros, but what if someone picked one that overlapped with ephemeral port ranges? Or if you're running software whose commonly used port is under 16382 but above 1024 - now you have to reconfigure it, or set up temporary privilege escalation so it can bind to it before going back down, a la httpd.
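A sketch of what that looks like in practice (assuming a kernel recent enough, roughly 4.11+, to have this sysctl):

# check where the privileged range currently ends (default 1024)
sysctl net.ipv4.ip_unprivileged_port_start
# move the boundary up so everything below 16384 requires privileges
sysctl -w net.ipv4.ip_unprivileged_port_start=16384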
It's also a bit of a contract with the client - on Linux something below 1024 is ALWAYS privileged, so you know you're not connecting to a fake sshd and giving away your password or current 2FA token unless that system has been totally owned. If you move that boundary you're modifying that contract - are you sure this system has the modified setting?
That's really my issue with this whole idea (and one of my top level comments goes into this in more depth) - but there's a lot of unintended side effects from these obscurity changes that people don't know about or think about.
Meanwhile there's lots of well understood practices that provide real security that solve both the issue of noisy low effort attacks while also providing real security against determined attackers. VPNs and jumphosts - why should SSH be internet accessible in the first place? Use key based auth, as well as 2FA.
Port knocking is... interesting... in theory, but it increases complexity for the users at a similar level as requiring them to use a VPN or jumphost (or both), and it has additional flaws that they don't have - if attackers have access to sniff your traffic via some means, they can figure out the sequence, and now they have completely removed it as a layer of security. They don't even need to be able to decrypt the traffic - just see the destination ports.
Is this level of attack something most of us have to worry about? No. But if for similar levels of effort we can get better security, why would we go for the weaker form, even if it's unlikely we need more? If for some reason you do have a dedicated attacker, you're better prepared. You can also extend it to provide even more security - lock down SSH on all of your production hosts to only the jumpbox(es). Alert on any production-host-to-production-host SSH attempts. Easily audit ingress SSH access on a subset of hosts. etc.
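The alerting part can be sketched with the same iptables machinery that shows up elsewhere in this thread (10.0.0.5 is a hypothetical jumpbox address):

# log, then drop, any inbound SSH that did not come from the jumpbox
iptables -A INPUT -p tcp --dport 22 ! -s 10.0.0.5 -j LOG --log-prefix "SSH-BYPASS: "
iptables -A INPUT -p tcp --dport 22 ! -s 10.0.0.5 -j DROP

Anything tagged SSH-BYPASS in the kernel log is then worth an alarm.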
Mentioned in my other comment ( https://news.ycombinator.com/item?id=24447846 ) but at most portreserve makes it a race. It cannot guarantee that an unprivileged user cannot bind to that port.
Kind of. It doesn't actually solve this. At most, it makes it a race.
If you are using portreserve for the port, the portreserve daemon just holds it until your actual application calls portrelease and then binds to it. If all reserved ports are released, the daemon exits. For a service restart, you could add starting up the daemon as part of the process; however, it is still a race to try to bind to that port. If sshd crashes, there is no reason that portreserve would try to bind to that port. If you create a secondary service or script to monitor for sshd running and restart portreserve if it isn't, then it's once again a race.
This is one of the reasons why I've started to call a good layer of concealment and deception on top of a hard system OPSEC rather than obscurity. It's a less ambiguous term and probably more accurate.
For example, I have no trouble revealing that all our SSHDs at work are ssh-keys only with a strong key policy and periodic reviews, whitelisted accounts, most are firewalled to be VPN only, configs are hardened periodically, some are 2FA secured. All in all, those are good or best practices to follow in order to harden an sshd, so I'm losing little info there.
A pentester under NDA would get more info to be effective, like the whitelists, and might get IP whitelisted even. Security audits during pre-sales might surface some other additional info.
However some details and measures don't need to be known outside our operations team. But they can and have been a royal pain to visitors.
Even if it was technically a password, it's really obscurity because the password was literally available in billions of devices worldwide - it's just hard to read. Then someone figured it out (specifically in WinDVD).
So, when someone says obscurity is fine, it's fun to remind them that all DVDs are cracked because someone thought so. (Not that I think that's a bad thing)
Indeed, the "real-world" examples suffer from two problems:
1) The former is not security by obscurity: which car the president is in is a secret. Any attack on the president is rendered much harder because of the process, even if you know what the process is. Just like regular cryptography.
2) The latter isn't security through obscurity either, because the attacker learning your system wouldn't help them. The attacker is, literally, a bird-brain.
The way I think about this, there are some aphorisms that work as actual design principles, and some that are just used to defend a decision you have already made to someone who doesn't need to understand it.
"There's no such thing as security through obscurity" is an example of the second; you can use it to mean "shut up and stop asking questions about the secure system I designed," but you can't use it to design a secure system or explain why your choices are correct.
The useful design principles behind "no security through obscurity" are just a little more complicated -- they're more like "every secure system must have defined entropy sources (such as keys) that provide a lower bound on the security budget against a hypothetical attacker with knowledge of everything except for the entropy sources." And, "because obscurity does not measurably improve the lower bound on the security budget, it is only a good idea if it also does not raise the chance of implementation errors, does not make it harder to obtain third party reviews of the system, and does not make the security of the system harder to prove." An argument about whether something is security-through-obscurity-in-a-bad-way probably actually wants to be an argument about an underlying design principle along those lines.
I don't exactly begrudge people using "shut up and trust me" phrases in situations where that's needed, but I think they're almost always unhelpful in forums like this.
I was going to make approximately this point. However, I think it's also important to have some of those "shut up and trust me" phrases codified and have them available for the layman via Google. Because sometimes those people demand "proof" or they'll go searching for it themselves and if it's right there to be found and most major sources agree... well, the discussion can then be "Is this just obscurity where security is needed?" AS IT SHOULD BE.
If you get right down to it, passwords are just obscurity. Usernames are just obscurity. In this very thread people are dismissing port knocking while it's functionally equivalent to a password.
I will personally stand by "security through obscurity is not security" forever because that way we can get to the actually interesting question -- what level is needed for this service?
Let's take a simple example from the public Internet -- you want to share something. So you put it on a server with Apache. You add TLS and PFS. You hide it in a folder structure somewhere. You add a single-use token or just htaccess.
Any of those individually would be obscurity, but put together they are most likely more than enough for... well, anyone. So is it still obscurity or actual security? That's a debate for the ages, but I think most people would agree all of those put together are fine-ish, but pick just one method and it's just obscurity.
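For the htaccess layer specifically, the classic minimal setup looks like this (a sketch assuming Apache with mod_auth_basic; the paths are made up and the .htpasswd file comes from the htpasswd utility):

# .htaccess inside the hidden directory
AuthType Basic
AuthName "Restricted"
AuthUserFile /var/www/.htpasswd
Require valid-user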
This whole thread is basically just a philosophical debate where half the people haven't read the article, the other half disagrees with minutiae in the article, the third half disagrees with major points of the article, the 4th half is sharing anecdotes and the 5th half just wants to participate.
All software security comes down to obscurity: it depends on the selection of specific numbers that are known to the authorized parties, but are extremely difficult to guess (i.e. very obscure) to the unauthorized parties.
The extreme of this strategy is to make a successful guess cost more than anyone can possibly pay, for example by using numbers so obscure that all known algorithms for guessing them will take longer than the heat death of the universe to succeed, or something like that.
But while there seems to be solid math to calculate such numbers, implementations are rarely perfect and can easily leave open the possibility of other guesses that cost a lot less than breaking the theoretical limit. (AKA side channels, vulnerabilities, bugs, etc)
Where "security by obscurity" comes in for criticism, it's usually because the implementers have misjudged the amount of obscurity they are actually creating, or misjudged the amount they need (their threat model), or both.
It's easy to make these mistakes because it is hard to create perfect implementations, and hard to know exactly the current and future capabilities of your attackers.
Isn't this the same as saying something like all astronomy is a matter of looking at the right place? Or all math is solving an equation? Or all surgery is cutting the right thing?
If you reduce anything to its simplest part, it will sound simple.
Security can also be achieved by physical separation or identity based on physical human traits. Now we're getting philosophical, but if the security crew of the data center knows my face, and does not allow other people to enter, would you reduce my face to a value that is "extremely difficult to guess"?
Right. There are three types of authentication factors: something you know, something you have, something you are.
Obscurity and passwords fall under "something you know". Biometrics like what you describe would fall under "something you are". A physical key such as a yubikey or the sim card matching an SMS challenge would be "something you have". Multi-factor authentication is more secure but it doesn't negate the discussion here about hardening the "something you know" factor.
What's interesting is that (mostly impractical at the moment) attacks on biometric authentication mechanisms end up reducing that category to "something you have" rather than "something you are" -- not that this negates its particular utility.
I like the article. Security also needs to be sensitive to usability trade-offs. Make things hard for adversaries, easy for intended users.
For some things, like VPNs, the adversaries are going to be more familiar with the details than the intended users. I often joke that an effective way to crack a VPN would be offer to configure it properly for a user in exchange for ten minutes of unfettered access to the target company; enough users are sufficiently frustrated they would take this bad deal fully knowing what it meant.
This is the whole "shadow IT" that actually results in a lot of security breaches. Look at the recent twitter hack for a great example. Staff were storing login credentials in a slack pinned message because using the right tools were a headache.
One thing I despise is internal systems with self signed certs because setting it up properly is a faff or no one can agree on the latest and greatest way to do it.
Oh cool that’s fine I’ll just click away all the big scary warnings in my browser to access this page. I’m an engineer and know what I’m doing! It’s a super strong key anyway. Oh wait I’ll just send this link to Bob in accounting and tell him to do the very thing we’ve been telling users not to do under pain of ridicule for ages and then he’s now doing that 10 times a day and now all of https is pointless because he knows that ‘it’s probably fine to ignore it because I have to do that at work’...
Part of the blame here should go to web browser developers. Self signed certificates pop up a huge warning while plaintext http connections do not, even though the former is more secure than the latter.
Article misuses/misunderstands the term "security by obscurity", and is attacking a strawman position based on its own definition of what it means.
Security by obscurity refers to a situation where security is dependent on the secrecy of an algorithm (the algorithm not being widely known or peer reviewed) rather than (or in addition to) a secret datum used with that algorithm.
The opposite practice is to use a well-known algorithm and depend only on the secrecy of the inputs to that algorithm.
The layers of security presented in the article do not meet the definition of "security by obscurity".
Even a port number like 64235 is a secret datum, not a secret algorithm. It's not a hard-to-discover secret datum; it is poorly guarded. But that's not what "security through obscurity" means. Using a funny port number is a widely-known system, with an objective benefit: it requires an attacker to take certain steps that are not required with a known port number. The assumption is that the attacker knows that alternative port numbers are being used.
To me this seems like a bit of a strawman argument.
The claim was never that using obscurity is bad and should be avoided. As I first heard it, "Security through obscurity is not security" is saying that if you are relying on obscurity to keep your stuff secure then you aren't doing enough.
I think this is still true, and the conclusion of the article agrees:
> Security by obscurity is not enough by itself. You should always enforce the best practices.
> The claim was never that using obscurity is bad and should be avoided.
Yet. All of these are from HN.
> 3. Since when is obscurity a valid security measure?
> Security through obscurity, not a valid security plan.
> The problem with these "obscurity as a valid security layer" arguments is that there's already obscurity built into these protocols.
> Especially since most people believe "Obscurity" to still be a valid security technique.
> You're just reciting the same tired old rhetoric that security through obscurity is a valid defense mechanism. It's just not.
> I thought the general consensus here is that security by obscurity is bad.
> Obscurity is bad because it makes you _think_ it adds security.
> To maybe give some perspective _why_ security people say that security by obscurity is bad - and especially serving ssh via port 64323: [...]
> I dismissed it as security through (bad) obscurity but is there a valid security reason to do this?
> Compression is not encryption and security by obscurity is bad practice.
> it's understood that security by obscurity is bad.
> Security by obscurity is bad, of course, but in that model it's such a minor factor.
And countless many more. Some of these reference "security by obscurity", which, if you're kind, you can interpret as "security only through obscurity" (though reading in context this mostly doesn't seem to be what is meant), while others dismiss obscurity entirely. You will also regularly find commenters lament this point of view as the "mainstream idiocy".
It helps to remember that nuance is lost over time as recommendations of best practices become memes. It's useful to reiterate the valuable nuance on occasion to be sure people aren't just taking the memes at face value, as you can be sure some amount of people are.
> Obscurity is bad because it makes you _think_ it adds security.
I agree with the OP, but I also fully agree with this point. I've seen people download the fishiest stuff or open anything because "I have an antivirus installed". Now, I don't claim that no AV would be better in all cases, but it is very much a factor.
From Apache docs:
> Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of "security through obscurity" is a myth and leads to a false sense of safety.
It's not a strategy. It's a tactic. And as many others in this discussion have pointed out, people who recommend to use obscurity never tell others to let their whole security strategy hinge on it. It's just a useful tool; a tactic.
Raising the cost of attacks is a good thing, particularly if the cost of doing so is not too great.
However, beware that obscurity is in the eye of the beholder, or more relevantly, in the eye of the attacker. For example, script kiddie attackers may be the ones who, in the Twitter example, only scan the default ports. This is an important element to defend against.
But a seriously skilled attacker isn't going to use script kiddie methods. They will use more complete, likely stealthy attack patterns.
Bear in mind that what you think of as obscure may be breakfast for a skilled attacker. If you are serious about defense, then you will be compelled to follow the ninja threat model, which, in part, says: "The attacker is going to sit on the same network segment as the application. There's no firewall or filters. There's a special place in hell reserved for products that require firewalls or filtering to protect themselves against attack."
Focus too much on obscurity and you will fall victim to the fallacy of "defense by presumed motive."
Almost 2 decades ago, I maintained our company's self-hosted web server on FreeBSD/alpha. It ran a simple (thttpd) web server. I remember looking through the logs and seeing script-kiddie attack after attack fail due to running thttpd instead of apache, FreeBSD rather than Linux, and alpha rather than x86.
I obviously kept the machine patched and up-to-date, but I think I probably could have left it unpatched, and it still would have been fine.
>However, if you can reduce the risk with zero cost, you should do that.
Zero cost is rarely, rarely true with regards to operations. If you use non-standard ports, you'll have to document that somewhere, or else it becomes tribal knowledge.
If you don't document it, and someone leaves, how do you know how to access your servers? At the very moment you don't know how to SSH in, you've just paid the price. It's no longer zero cost.
If you do document it, you must now take the time to manage the permissions to that document, figure out who needs to know, and then change access as people come and go. All of that requires time, which also has a cost.
Plus all of this also has training costs when you onboard new people.
Zero cost is a real thing with computer science but not operations.
Zero net cost is certainly almost never a thing; but zero incremental cost is often a thing.
To further your alternate-port example: let's say you have some instances running on Google Cloud. GCP already has a big CLI codebase that they get everybody to use, which has a command `gcloud compute ssh` for connecting to instances, which already has tons of magic built into it. It would therefore be pretty easy for GCP to add additional magic — e.g. randomizing the SSH ports of newly-deployed instances, and then publishing those ports as project secrets in a way that the gcloud CLI tool can discover and use in the `gcloud compute ssh` subcommand.
The incremental cost of an approach like this is effectively zero: the DevOps folks didn’t have to build anything new to get this advantage, because they already built all the infrastructure required (i.e. spent the labor-cost you’d be spending) in the process of getting some other, earlier advantages.
In a sense, setting up a platform or infrastructure that's more complex/flexible than what you require at the time, is the opposite of "technical debt." Rather than saving labor now but needing to be paid down with later labor, it requires more labor now, but potentially saves labor later. It’s a bit like paying a retainer fee: you get less than you pay for (or nothing) up front; but in return, you get things "for free" later on. "Tech equity" might be a good term for this — it's what you get when you invest labor into your tech stack.
I don't disagree, but this kind of goal post moving is how zero cost will turn into a buzzword down the line because it'll really mean zero incremental costs when referring to operations.
Plus, zero incremental costs may only apply if you match the situation being presented. If not, you may have real costs associated with implementing obscurity. This evaluation of whether you will get zero incremental costs or not is a cost in itself.
It's just a misappropriation of the term from zero cost abstractions and it bugs me, especially since it's being ported from compiler theory/engineering to operations, two things which rarely have anything to do with one another.
They'd be way better off coming up with "cost-effective obscurity" ideas, instead of calling this zero cost.
There is a reason the military doesn't paint their tanks bright pink... Armor is important, but if you don't get shot at in the first place, even better.
Security by obscurity is not painting tanks in camo. Security by obscurity is assuming your enemy won't find your tanks because you didn't broadcast on public radio where your tanks are.
That is another appropriate analogy (and it's why the military invests in SIGINT).
To the point though, no one should "assume the tanks won't be found", but it's still worthwhile to do things to make it less likely they will be found.
In military parlance, this is the distinction between cover and concealment. In practice, good cover often provides some degree of concealment, but on a purely theoretical level, the two are orthogonal.
Didn't Telegram challenge this rule as well?
> * Never roll your own crypto
Afaik, discovered practical vulnerabilities like [1], [2] were patched, and the rest are theoretical, like [3].
> Using Symmetric Encryption in the Database: When you write data to the database, use a function like encryption_algorithm(data,key). Likewise, when you read data, use a function like decryption_algorithm(data,key). If the attacker can read your backend code, obviously he/she can decrypt your database.
I think the author misclassified this method. Actual encryption is not obscurity. It would be, sort of, if the key were stored in code. But when proper key management is in place, it's a solid approach.
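As a sketch of that separation, using openssl with the key injected through the environment (say, fetched from a secrets manager at startup; DB_ENC_KEY is a made-up name):

# write path: encrypt a record with a key that never appears in the code
openssl enc -aes-256-cbc -pbkdf2 -pass env:DB_ENC_KEY -in record.json -out record.enc
# read path: decrypt with the same externally supplied key
openssl enc -d -aes-256-cbc -pbkdf2 -pass env:DB_ENC_KEY -in record.enc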
Changing SSH port is far more efficacious at reducing nonsense than the Twitter poll in the article suggests:
> "I ran an experiment with a virtual machine exposed to the internet which had sshd listening on port 22. The server stayed online for one week and then I changed the ssh port to 222. The number of attacks dropped by 98%. Even though this is solely empirical evidence, it’s clear that moving off the standard ssh port reduces your server’s profile."[0]
> "In the time that I gathered 7,025 connection attempts to my SSH daemon on port 22 I received 3 on port 24."[1]
Also, great top comment by 16s[2] in this HN thread, "Why putting SSH on another port than 22 is bad idea".[3]
It's also way less effective than they mentioned because they didn't get hacked either way. So there was 0% difference in effectiveness between port 24 and 22 since the ssh was properly configured.
Security by obscurity only matters if you aren't secure in the first place. It can be a good extra layer of protection, but the worst examples of security mishaps I've seen are because people find the security unnecessarily burdensome and so they bypass it entirely. So in that way obscurity in that case has a real cost.
Even unsuccessful SSH attempts can have operational costs, though. The machine still accepts the TCP connection and does the SSH handshake. If I’m running a server and I’m billed by data usage or by vCPU minutes, I don’t want to waste my allotted resources by making it easy for every half-baked crawler around the world to make connections to my machine. Using a non-default port cuts down those numbers significantly. Sure, a targeted attack won’t be thwarted, but at least the server is not being DDoS’ed anymore.
Risk is not just a formula. Risk is also "formulaic": when you get people used to an idea, they become blind to things outside of that idea, and therein lies the danger.
If your corporate IT group regularly asks users to send in their passwords via e-mail in order to perform some remote maintenance, then the users will be habituated to sending their password to a familiar e-mail address. If someone from outside their company asked them for their password, they would immediately say no. But an e-mail with the right "From: " address, they would quickly fall for. So it becomes easy to trick the users into sending their password to an attacker in some circumstances, because of the assumptions they make.
Security by obscurity is just another form of this: a practice which isn't really secure, but people may think is secure, because it seems to avoid the simplest, most stupid attacks. But literally any action you take could prevent the simplest, most stupid attacks. That doesn't mean that any action you take makes you "more secure".
Hiding a key under a door mat or in a sun visor isn't "more secure" than leaving it in plain view. Anyone who's not a total moron will find it, and if that's your whole security posture, you're screwed.
One downside of security by obscurity is that it makes it harder for whitehat people to spot problems in your code. It is like charging everyone $1000 to look at your source code. That is relatively more likely to deter whitehats, since their upside is lower.
While the article does have a point that obscurity can improve defences, I think security is not about defence. Security is about managing risk in a consistent and rational manner. This means that any defense needs to be appropriate to the threat.
Having a network share on the home network that only your household can access, and using 2FA for it? Maybe a bit too much. Do you know your organization will be individually targeted by smart and tenacious actors? Changing the SSH port isn't gonna stop them.
I agree with the article that more discussion about what makes something secure is valuable security work. Disregarding defenses at first sight because they "sound obscure" isn't a good argument. But it also doesn't mean that "small things that might stop someone" is a good layer of defense.
And then there is also the cost of adding security layers...
> Yes, I scan them all: 53.2%
> No, I use default scan: 46.8%
> 186 votes
Well, I doubt that. My private VMs have had ssh listening on some random port for the last decade, after I was annoyed at how the auth.log became an unreadable spam-fest. Now it's been well over a year since the last probe.
Maybe the author's pentesting pals do that on a few public /24s or some intranet (I would, if I were a pentester). But your average bad guy scanning /8 blocks looking for an easy catch? Maybe with a botnet...
(TBH, I focused on security/crypto during university, but ended up in another field - so my practical knowledge is limited.)
I think the article misses a more important attack vector by focusing on brute force instead of human weaknesses.
Obscurity is naturally fragile, vulnerable to social engineering. Social engineering is the real problem. We can filter out brute force easily. It can be fail2ban, or even the simplest iptables rules, like:
# reject a source IP that has opened 15+ new connections to port 22 within 10 minutes
iptables -A INPUT -j REJECT -p tcp --dport 22 -m state --state NEW -m recent --name TCP_SSH --update --rttl --seconds 600 --hitcount 15 --reject-with icmp-port-unreachable
# reject a source IP that has opened 5+ new connections within the last minute
iptables -A INPUT -j REJECT -p tcp --dport 22 -m state --state NEW -m recent --name TCP_SSH --update --rttl --seconds 60 --hitcount 5 --reject-with icmp-port-unreachable
# otherwise accept, and record the source IP in the TCP_SSH list used by the rules above
iptables -A INPUT -j ACCEPT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -m recent --name TCP_SSH --set
If you move from passwords to SSH keys, it increases security not really because the number of possible keys is larger than the number of possible passwords. More important is that you eliminate bad practice. One cannot share an SSH key over a phone conversation, or write it down on a piece of paper and stick it to a monitor. The change is nothing like upgrading from 1024-bit SSH keys to 3072-bit SSH keys. If you store the SSH key on an HSM, like a YubiKey, even better: no one can copy the key, only steal it.
You cannot really hide an IP address or port number. You'll send this information to your colleagues and partners over SMS, Facebook Messenger, Whatsapp, Viber, Telegram, E-mail, Skype, Zoom, many times, over multiple channels. Or you will write it down on a wiki, like Confluence, which is public to the entire organization, and that knowledge is not a secret anymore.
My greatest fear is not a script kiddo with a botnet, but an employee with an addiction and debts.
Unless you are Goldman Sachs, the NSA or someone else who is being specifically targeted this is always true.
Otherwise, switching ports or making your systems a bit different to the others on the internet means the majority of hackers - 'bots', 'scanners' and 'script kiddies' will move on to easier targets.
On the internet you don't have to outrun the bear, you just need to run faster than some other guy.
I was thinking maybe this goes back to people in security looking at things in a different way than others? Like ITSec folks spend all day everyday reading about every possible way a bad actor can make bad things happen. They look at something like changing the port SSH listens on and think about all the ways the best & brightest bad actors will get around that in no time at all. Everything ends up looking pretty useless at some point, because you end up seeing that it's possible to get around nearly everything.
Another example might be folks in the security community saying that SMS 2fa is no good because all it takes is someone taking over your phone account to get around. Sure, that happens, but not all that often, and usually happens to people with something that's worth time & focus by talented bad actors.
"Security by obscurity is not enough by itself. You should always enforce the best practices. However, if you can reduce the risk with zero cost, you should do that. Obscurity is a good layer of security."
Security by obscurity is bad regardless of other controls because it does little to reduce probability of attack and nothing for severity. It is only barely helpful at reducing the probability of attack because it is ineffective against various forms of automated footprinting. That is just the attacker.
Security controls impact everybody, though. Not only does obscurity make the problem obscure to an attacker, it also makes the problem obscure to non-attackers. This dramatically increases risk because it impacts the application and distribution of other security controls.
Since it's barely helpful where intended and harmful where unintended, security by obscurity only increases risk.
The analogy to software is the belief that hiding source code makes it safer. Hidden source code is not any safer but the vulnerabilities are a bit harder to find. The benefit of open source is that the vulnerabilities are exposed to anybody who reads the code which allows more vulnerabilities to be exposed and patched.
Ehh, the majority of companies practice security by obscurity as an extra layer.
There's the idea in security that an attacker knowing your algorithm/practices shouldn't mean anything, yet you rarely see companies detail the security measures they take on internal systems, because we know keeping this secret has no downside.
Security policies at most companies are often generic and not secret. Reporting chains for emergency remediation and asset identification are secret because those identities are potential attack vectors. Information sensitivity of that nature means it must be protected from disclosure and not that it should otherwise be hidden. The key phrase for sensitivity management is: need to know.
To maybe give some perspective _why_ security people say that security by obscurity is bad - and especially serving ssh via port 64323:
Typically you want to know who is connecting to what server via what service and log these connections. If something is off, an alert can be generated. If ssh isn't served on a standardized port, logging and alerting becomes more complicated - albeit not impossible.
There is more housekeeping to do. In case of a handoff, things like this need to be documented. If all services work on their default port, there is no need for documenting them.
In the case of compromise, it becomes very hard to identify how a machine got compromised.
Yes, a lot of people do not do a full port scan. But those are not the people exploiting risky vulnerabilities. Security by obscurity reduces your risk, but only to a certain extent. Having a proper patch management or firewall management in place reduces your risk a lot more.
A lot of owls do get killed by humans, despite their camouflage.
> Typically you want to know who is connecting to what server via what service and log these connections. If something is off, an alert can be generated. If ssh isn't served on a standardized port, logging and alerting becomes more complicated - albeit not impossible.
Could you elaborate on that? I serve ssh on a non-standard port in part precisely because it drastically cut down on the noise of failed log-ins, to the point where when I check the logs I'm almost the only one who actually bothers to try to log in via that port. That seems like a win to me.
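For anyone who wants to quantify the noise on their own box (a sketch assuming a Debian-style /var/log/auth.log; journalctl setups differ):

# count failed SSH logins in the current log
grep -c 'Failed password' /var/log/auth.log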
Not the OP but I assume what they mean is that if you have network-wide monitoring across a network with lots of servers then it won't be able to easily make sense of what is happening if servers are all using non-standard ports for things.
I bought a new virtual machine and I waited about a day until I logged in. The SSH logs showed over 100 failed login attempts. I hadn't even logged in!
I changed the default SSH port to a random high number.
I had zero failed login attempts in 2 months.
Of course use strong security methods but I suggest changing the default port numbers just to clean up the log files.
Please do not use random variable names in source code; uglify/minify instead. It's a bit unclear, because right above that "tip" is one about obfuscating code. Did I miss something?
For JS, does it not make sense to use random variable names in production? Obviously you don't want to while developing, but it seems like an efficient method to help obfuscate.
Obscurity is double sided. While the attacker can be hindered by it, so is someone who can audit the defense and find its deficiencies. I always thought that was the main argument for avoiding security by obscurity - the benefit of better audit and improving defense overall outweighs the benefit of obscuring it.
No mention of port knocking for SSH. I used to be scanned constantly for SSH logins. So I changed the port. The login attempts stopped for awhile, but eventually they found the port. Now with port knocking, I haven't seen a single attempt.
Security by obscurity alone is bad, but as another layer, it can be great.
If you don't need to support access from arbitrary IPs, IP whitelisting is another good additional layer over SSH. Keeps the SSH scanners out, and also significantly raises the bar for a determined attacker.
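For reference, a minimal port-knocking setup with knockd looks roughly like this (a sketch; the knock sequence and port numbers are made up):

# /etc/knockd.conf
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

The knock itself can be sent with the bundled client: knock myserver 7000 8000 9000.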
> Security by obscurity alone is bad, but as another layer, it can be great.
I beg to differ in your case.
Had you left SSH on its default port, what would your expected time-to-compromise be? Presumably you weren't using a root:password credential, or else your system would not have remained up long enough for you to implement any obscurity.
But if an attacker, with full ability to try logins, could not reasonably guess your login credential in the lifetime of the universe (i.e. public key SSH or a strong password), then you've not improved security by moving to a port-knocking model.
You have reduced nuisance, but nuisance isn't part of the standard threat model for SSH security.
To put it another way: you've not seen another unauthorized login attempt, but would you be comfortable relying on that and use root:password as your access credential?
I disagree. Suppose the latest SSH has a 0-day, now I am vulnerable, even though I only use PK-auth. Obscurity is just another layer, and the purpose of layers is to help make rare vulnerabilities (like 0-days) not compromise the system. By hiding the door, they cannot even touch the 0-day without another rare vulnerability.
Good, thought-provoking article, but use with caution.
I've seen security through obscurity misused too often as the only line of defence, or as a "temporary" stop-gap that outlives its usefulness. It can lead to a false sense of security.
Such measures also do not tend to keep up with changes as attacks become more sophisticated or cheaper to carry out.
You also need to make sure that there are no unintended consequences - does your non-standard configuration make it harder to apply upgrades? Does your own penetration testing also scan all ports, or is it only going to discover weak servers running on port 22 on your network?
That said, I would do the type of things mentioned in the article as an "added bonus", but try to exclude them from my overall security evaluation (either rough mental model or formal threat model).
> I've seen security through obscurity misused too often as the only line of defence, or as a "temporary" stop-gap that outlives its usefulness. It can lead to a false sense of security.
Exactly.
> we need for some reason to make this development-stage service publicly available on the internet, and it's connected to a lot of our services internally, but we can implement AAA only later and we need it now. No issue publishing it on an obscure route/IP/domain name?
So the "developer" who created an "educational" ransomware project that was abused for half a decade by criminal groups now has a controversial and low-level view of security practices and is broadcasting it to the world. I'm shocked, I tell you. Shocked!
I've used some very tight-arsed VPS providers at the low range (128MB/1 IPv4/$12 a year), and some of them mention high load, mainly due to brute force on port 22.
It makes sense to change port purely to avoid the low-barrier noise but of course it isn't much better security. Port knocking is on the same lines.
I'm by no means a security expert, but these measures would surely help: fewer opportunists = fewer opportunities.
That said, I'm public-key auth only and disable any public-facing service I'm not using.
The "security through obscurity" thing seems like a warning to avoid shortcuts rather than some implementations that help reduce noise. As long as you understand the fundamental problem of security, the obscurity thing is just a sidebar.
(1) Don't use this as your only method of defense. The further from your LOCK the better the key needs to be.
(2) Use your security in the layers that offer the most benefit.
(3) Be proactive in your defense.
-Changing your SSH port will stop the largest number of attempts on your service.
-A non-default port PLUS port-knocking PLUS key-only PLUS whitelisted IPs PLUS whitelisted login names is better than only one of those.
-Apply firewall rules liberally to block unauthorized access; only allow the IPs that should have access to your service. Country-code level blocklists are a thing.
Being obscure is about NOT being where your opponent expects to find you.
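A sketch of what a few of those layers look like together in sshd_config (the port and user names are made up):

# non-default port + key-only auth + whitelisted login names
Port 64235
PasswordAuthentication no
PermitRootLogin no
AllowUsers alice bob

The IP whitelist and the port knocking would then live in the firewall in front of it, as in the iptables and knockd examples elsewhere in this thread.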
Obscurity has been used for detection of bots engaged in profitable ad fraud, by having web clients execute a JS payload whose behavior can be profiled. Temporary payload obscurity enables a silent alarm, which can be used to stop financial payouts.
Unlike many approaches to cybercrime defense, obscurity-enhanced bot detection has led to both prosecution and extradition of accused attackers, https://www.cyberscoop.com/tag/methbot/
Security by obscurity was the perfect way to send the largest diamond across the world:
"Due to its immense value, detectives were assigned to a steamboat that was rumoured to be carrying the stone, and a parcel was ceremoniously locked in the captain's safe and guarded on the entire journey. It was a diversionary tactic – the stone on that ship was fake, meant to attract those who would be interested in stealing it. Cullinan was sent to the United Kingdom in a plain box via registered post." - https://en.wikipedia.org/wiki/Cullinan_Diamond
Technically a strong password is a diversionary tactic, where incorrect passwords could be considered diversions. The difference between what is generally regarded as "obscurity" and "security" is orders of magnitude in the number of options one must explore in order to break in.
I think the article confuses two concepts. "Security by obscurity is bad" usually applies to things like "our own proprietary hash function", or "our own proprietary remote control protocol", or sometimes even just "well, noone has the code, right?". I call this obscurity proper. This is often little more than an omission.
All the positive examples in the article mean actively obscuring the view of an attacker. This is an addition of things like camouflage or distraction.
Proper obscurity is obviously bad, because there simply is no security concept behind it. Additive obscurity is obviously smart because it adds to an existing concept.
It has its place; the key thing to remember about it is it's not sustainable.
Security by, say, mathematically-hard problems stays secure even when the problem's design is understood. Security by obscurity breaks any time the secret gets out.
(There is an overlap point where a math problem is too simple to solve and, meanwhile, an obscure secret is "The sixteen digit number the President memorized to launch the nukes" where the security-by-obscurity can even beat out mathematically-secure, but the middle points of those two sets are separate and the reliability heavily tilted in favor of the mathematical cryptography).
I run a cyber software company with a product basically structured on this principle (deception).
The Swiss cheese picture is excellent, since that is how stuff happens in the network internals, with lateral movement and other internal activities.
Say an adversary has to jump 5-10 hops from the initial point to the target system, and you can, with very lightweight obfuscation and ”obscurification”, increase the attacker's mistake rate dramatically - it makes a ton of sense from a risk and economic perspective.
Consider the alternative (successful internal hardening and monitoring), which is way out of scope resource-wise for most.
My beard is not grey enough to know how the Security by Obscurity meme/mantra has evolved in the tech community.
But one thought I have is that this more nuanced picture is much more complicated to tell beginners. Beginners and less security-conscious developers often wrongly assume that obfuscation is much more powerful than it is.
The safest digest of "apply ≈Kerckhoffs's principle, but some obscurity on top of that is not a bad idea if it's cheap to implement" is probably "security by obscurity is bad!1!".
Certificate pinning or sandboxing in mobile apps won't stop people from reverse engineering your APIs. But if your personal belief is that seeing your API routes or modifying requests takes almost a state-actor-level attack, it will undoubtedly influence how you implement them.
I've seen some serious problems where companies do really bad things (like sending the ID of the currently logged-in user and not checking it server side, allowing for execution as an arbitrary user), which I guess at least partly arose from thinking along the lines of "it's only our signed code that will ever make these requests". Even bad developers wouldn't make the same mistakes in ≈2020 on the web, where the understanding that the client is untrustworthy has fully saturated the common understanding.
Do we feel that honeypots might actually be one form of this? I.e., something that isn't deterministically going to prevent an attack like an OTP cryptographic scheme would, but that may trick a large majority of attackers into thinking they are actually in a secure production system for a long time.
I know they are used primarily for detection, but why not go the extra step and make a honeypot that is a truly believable facsimile of a real corporate environment, so the attacker wants to stay around even longer? There are lots of clever ways you can switch network traffic to make it look like you are talking to one host when in reality you are talking to a VM jail under a security administrator's desk. Load these environments up with fake but believable data. How would an attacker know if they are in production actual, or fake prod? Once you "acquire" an attacker, you could even monitor their approach and string them along with hopes of getting into SVRSQLPROD (which is obviously going to be loaded with fake bullshit, but they won't know until they find the symmetric encryption key, which you will probably never give them).
Again, I think we are all clear that the above is not deterministic security and that certain experienced attackers (or insiders) may be able to smell such a honeypot from a mile away.
Preface: May be biased, as I run a honeypot company.
To your question: absolutely. They are also an economically effective and (done right) easy-to-implement solution with very low to no risk of jeopardizing any legitimate traffic.
To your "why not do this" - we are working on this right now. There's a lot of interest in what you described among above-average-maturity security teams; we have a few customers in this niche helping us design the "attacker playpen." You are right that it is a challenge to make it believable enough without introducing risk into the environment.
I want to tackle a misunderstanding I have seen from some posters in this thread about passwords/secrets/keys. Using a password should not be considered a form of "obscure defense".
If you are using a password, there is a mathematical definition of how hard it is to crack: the number of bits of entropy contained in the password. If you use a password manager like KeePass, it will tell you the number of bits in your password.
If it takes me 2^100 guesses for a 50% chance to discover your password then that is not obscurity, that is a valid defense mechanism. That the password itself is obscure is not a reason to call the strategy obscure.
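To put a number on that (a sketch with bc; assumes a random 16-character password drawn from the ~94 printable ASCII characters):

# bits of entropy = length * log2(alphabet size)
echo '16 * l(94) / l(2)' | bc -l    # roughly 104.9 bits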
Passwords and keys are used to create an artifact that will unlock access to a whole bunch of information. Instead of protecting each piece of information individually, we can now focus our efforts on protecting the password instead.
With a password we have managed to make the process of protecting information simpler, less obscure.
Sorry to discuss something a bit off-topic from the article, but I figured I had seen the "passwords are obscure" argument so many times here that this could be a valuable opportunity to teach something about security.
I have a super secret password to my bank account! It's super hard to guess, and there's 12 factor authentication. You have to get my cat's paw print to sign in.
When the truth is: There is no bank account, password, or cat. And you are actually a homeless, broke, dog lover.
If you want to keep something secure, don't brag about how secure it is. Don't talk about it at all.
In Applied Cryptography, Schneier says obscurity is "take a letter, lock it in a safe, hide the safe somewhere in New York".
Somehow in my mind, cryptography, e.g. RSA, is also obscurity then. But instead of obscuring physical coordinates within the set of coordinates of New York, we obscure the location of the private key within the set of prime numbers.
I'd argue that all security is security by obscurity, it is just a question of how many attacker-seconds it takes to break.
Obscurity means keeping something private that if the attacker knew they could access your service. Traditionally security by obscurity is something like putting your ssh login port on port 61329 rather than port 22.
I'd argue that the above is 16 bits of obscurity, whereas your ssh key you log in with is 1024 bits of obscurity. The attacker needs that 16 bits of port number obscurity and the 1024 bits of ssh key obscurity to log in.
However, the attacker-seconds to break the 16-bit port number are a rounding error compared to the attacker-seconds to break the 1024-bit ssh key.
Which is where, I guess, the idea that "security through obscurity" is bad came from.
I'd argue that the attacker-seconds is still higher with your ssh on port 61329 though, so why not use that too.
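To put rough numbers on the attacker-seconds framing (a back-of-the-envelope sketch that takes the 16-bit and 1024-bit figures above at face value; the guess rates are invented orders of magnitude, not measurements):

    # Scanning 2^16 ports at ~10,000 probes/sec vs. brute-forcing a
    # 1024-bit secret at a billion guesses/sec.
    port_seconds = 2**16 / 10_000            # ~6.6 seconds
    key_seconds = 2**1024 / 1_000_000_000    # ~1.8e299 seconds

    print(f"port: {port_seconds:.1f} s")
    print(f"key:  {key_seconds:.2e} s")      # dwarfs the age of the universe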
I like this take on it. But the math isn't complete. You need to account for each attack path taken.
For example: a generic SSH vulnerability means somebody is going to make a botnet to check every port 22, leaving your 1024 bits useless and making the 16 bits worth more.
This has always been obviously true. The only sense in which the (original) phrase has any meaning is with respect to cryptographic primitives. And even then, all a cipher is really doing is "obscuring" data. I've never really heard anyone other than Steve Gibson subscribe to the silly phrase.
I think most initially looked down on the security-through-obscurity layer as flawed only because many people, a few years ago, thought it was the only layer they needed. Somehow over time many people began to think that this layer is not useful at all. A little thought about the issue would dissuade one from completely excluding that layer.
That it is so easy and cheap to implement is another reason people discard it: in many organizations today, people think that security can only be had by spending boatloads of money on people and software. Surely a simple, fast and cheap solution can't really reduce risk now, can it?
And on top of that, people do not think about, nor fully understand, risk; hence the whole Covid-19 scare and the many other things in the world people are afraid of.
This article is so odd in its conclusion. The real problem lies in obscurity generating a mess that's hard for designers to reason about, hiding obvious weak points, and making it hard to find the most valuable area to work on. Changing from a default port should not really count as this, since it's easily configurable and impacts neither the design nor usability (unless it does). Port scanning avoidance also fills an actual function in terms of load.
Rather, the issue seems to be what security by obscurity actually means and what it can be misinterpreted as.
Layering on lots of obscurity, spending time on it that could instead go toward securing the actual weak points, and/or hiding those weak points from developers/maintainers is problematic, to say the least.
I like the article overall and agree with the author. The only thing that stuck out to me was the Twitter poll showing that a majority of people do scan the entire port range; that undercuts the claim that most people stick to default scans.
Yes, let's say you have random people over whom you don't know (to fix things, for example). One of them is a criminal who wants to case your house to see if it's worth breaking into when you're out of town. If you're with them the whole time they are working, they won't be able to just look behind the painting, and your house doesn't become a target. That's assuming, of course, that the painting isn't too valuable.
But that scenario now takes us firmly outside the realms of what we’re discussing here. You’re actively monitoring the actions of people that you know are going to be there.
For an Internet facing server, it would be more like an art gallery where you have hidden the safe behind one painting, and the gallery is open 24/7 with no security to stop people from looking behind paintings, where you know a good portion of visitors are going to do so. You can see an identifier for each visitor that looks behind paintings, but many visitors are doing so and the person that comes in to crack the safe might not have been the person who found it.
Why would you think it's in milliseconds? Are you generalising on port scanning?
You could say that UUIDs as URLs are obscurity, but they're not that fast to break through (and behind them you hopefully have some kind of protection layer)
There's also an additional monetary cost for the attacker, it adds up
>Why would you think it's in milliseconds? Are you generalising on port scanning?
I am, since that seems to be the primary example used in the original article and the main example being discussed in the comments.
I think that's fair, since obscurity does the most to help against non-persistent threats. Persistent threats have targeted you for some reason, and that targeting means they are willing to employ more resources to get through your security. In that situation you should focus on adding as many layers as you can that have protections beyond just 'not being known', especially when there are tools out there specifically aimed at making them known.
The higher level of abstraction behind an article like this is that security is a mitigation activity within a broader risk management plan. Most of the time, the best practices in the security field are best practices for good reasons: their costs are reasonable relative to the business risk of not having security, so we do them.
But there are times when you just need to discourage people, not truly secure a site. Not many, but they do exist. Pseudo-security in those cases is cheap and meets the business needs. Likewise, there are times when best practices aren't good enough, and you need to go beyond the norm.
Either extreme is driven by thinking through the acceptable risks, evaluating costs, and making a decision.
Obscurity takes many forms; some are useful, particularly against common hackers, and others are completely useless.
There are a number of code walking tools and disassemblers that will rename all variables and function linkages and provide annotated source. Someone still has to crawl through that source, but all the obfuscation is removed. I know professionals that use these tools, not to hack, but to perform security audits on products that they use within highly secure settings.
I attended a security conference a few years ago (I cannot think of the speaker's name right now), and he said you should assume that professional hackers have better tools and larger budgets than you will ever have access to.
Just as a side note, moving your ssh server port from 22 to whatever may make your server unreachable from behind strict firewalls. If outbound connections are only allowed to some whitelisted ports, it's highly unlikely that port 64323 will be among them.
Security by obscurity is harder to reason about. Heavily obfuscated code is always more insecure once the obfuscation is reversed. The other big difference is that attackers can break it incrementally, and you are not going to be privy to whether someone is selling your deobfuscated code online. Lastly, the example of randomising the presidential car I'd likely not call security by obscurity. Security by obscurity does not mean that there are secrets inherent to the protocol (or private keys would be obscurity!). If it were the same car every time, and they relied on nobody telling anybody, that'd be security by obscurity for me; otherwise it's just a random value with a low keyspace.
I can't tell you how many times I've seen someone say doing something for obscurity shouldn't even be done because of the adage. Some of these adages are really harmful.
InfoSec has very much gotten a bunch of these statements that can't be argued with, and people won't even take you seriously if you point out the flaws with password managers or PKI.
I've definitely used methods in my own code which "aren't trustworthy" from a security doctrine standpoint, but have proven near 100% effective on their own. Just because a state actor won't be fazed by it doesn't mean it isn't a strategy that'll prevent 99% of automated attacks.
The problem of "security by obscurity" is that it assumes that the whole security is obtained via obscurity, whereas "more security obtained by obscurity" is good as it assumes that obscurity is used as defense-in-depth.
I had a Mac SE with an ethernet card in 2001, around when the Code Red worm was loose. We put an HTTPd server on it and watched as each Code Red attempt came in. This ancient, super-slow machine stayed completely safe despite being on Mac OS 7.5 or something around there, while Windows machines of then-current vintage were being taken apart around the world. Added bonus: the slow speed of the SE meant it took about 10 times longer for each Code Red attempt to give up, so we at least monopolized some tiny portion of those infected systems for a bit, keeping them from infesting others for a few more seconds...
I am a strong believer in the Swiss cheese model[1] of risk mitigation. I first learnt about this through pilot training and apply it throughout my professional life now.
A lot of comments here are saying "don't do x, do y." or "x and y are useless, you should just do z".
The Swiss cheese model helps you visualise that having many layers of defense carries value, and that each layer should be recognised as such.
One thing I didn't see discussed in the article was the balance between the benefits of security by obscurity, and the benefits of having your code open source (or at least making your security methods known) so more people can audit it. Personally I don't actually think there is that much security benefit to having open source code since most people don't audit random codebases for fun, but that is one of the arguments I've heard against obscurity. Of course some methods of obscurity can still be done with open source code as well.
Open sourcing has to be done with the audience in mind. It generally doesn’t make sense to (publicly) open source a system that is idiosyncratic to a single organization. The only likely interested audience is hostile attackers. A useful general purpose dev tool though? Sure, and the people using it might be able to help.
Sure, it adds a little bit of "security", like the camouflaged tank example, or the prey jumping. Those examples are poor however, because (unlike changing default ports, adding knocking, or obfuscating code) they are extremely low-maintenance/cheap and come with essentially no downside.
What’s missing here is the discussion of tradeoffs. I fail to see how e.g., requiring port knocking adds enough security to justify the annoyance. Changing the default port, maybe, but given how easily it’s detected anyway, the cons still outweigh the pros IMO.
Ultimately if governments (such as Australia, US) continue to prevent citizens from using encryption, we will have no choice but to employ security-by-obscurity atop secure-by-design principles to have privacy.
For example, making ciphertext look like cleartext[1], or hiding text in images[2].
In DC there is this concept of "the blob" which is basically shorthand for "The Washington consensus that isn't verifiable, but that most people parrot since to hold an opposing view doesn't really get you anything because even if you're right, nobody will remember. All that they'll remember is that you're that weird guy that looks at stuff with a strange perspective and that you may be too dense to social signal that you're in the blob."
I've noticed the same thing with software developers.
- "Client side encryption in JavaScript is useless!" Until CloudBleed came out and the only company that was safe was a password manager that used it. To thwart client side encryption you need to actually modify the contents of the JS payload, which is detectable. But no matter how much evidence I give that this tactic works and is actually used in production and that it actually stops attacks, programmers just don't care.
- "Don't do security by obscurity!" But then we all implement passwords (which is just security by obscurity) and the best people in intelligence don't have LinkedIn accounts. Anyone can join an OSINT forum and see the actual tools that get used. Security by obscurity works for many, many actors.
There are many, many little bits of stuff like this. Think to yourself: how many times has code that you've written led to an RCE vuln that was exploited? Personally, I can only give a lower bound, and that lower bound is zero because I'm extremely careful, but I don't pretend that it's never happened. Anyone that is familiar with data science or economics or political economy understands that when a signal is dampened the response is dampened.
That is not the definition at all. If I kept track of my passwords by making them the first letter of each page of various books on my bookshelves, that would be security through obscurity: someone who broke into my home and knew about my system could quite easily extract all my passwords, but someone who didn't know about it would be looking forever for a notebook with cryptic letters in it, or sticky notes on the monitor, etc.
I feel like it's a pretty weak point to say that passwords and keys would be security by obscurity if we didn't carve out a special exception for them. Why do they get a special exception? Because they're really really hard to guess, not because they're fundamentally different.
Let me give a real-life example of a good non-password, non-key piece of secret information that's used for authentication. If you need to recover a WoW account that you've lost access to, the customer service reps will ask you to tell them the names of the characters on the account. Your account name isn't secret, and your character names aren't secret. But the relationship is, because they aren't ever publicly connected. The odds of someone other than the account owner having this information are low, and guessing it by chance is practically impossible.
But does that make them different or are they just things that are easy to verify? If you could calculate the entropy of another authentication scheme would it be included?
The danger of security by obscurity is that your system might not have as much entropy as you initially estimate and can be easily defeated. Sounds a lot like the vulnerabilities in normal crypto applications, right?
> "Don't do security by obscurity!" But then we all implement passwords (which is just security by obscurity)
Thank you for being someone else who recognizes this. If having a secret piece of knowledge grants you access to the system then it's obscurity. Once we admit that then we can start talking about the difficulty of guessing that knowledge as the actual important factor.
> If having a [password] grants you access to the system then it's obscurity.
Perhaps if you're hiding it under the doormat in front of the door and expecting it won't be found. Or writing your PIN on your ATM card with a sharpie and leaving it on the dash of your car with the doors unlocked and your windows down.
In most cases, however, it's part of an overall system that involves authentication, authorization, and (hopefully!) accounting to protect access to resources.
I don't buy the poll, at all. When I move SSH off port 22, I don't get 50 percent traffic. I get 0 percent. It's the first thing I do to harden any server.
I think there are two different things we're talking about here: one is hiding things, and the other is using obscure ways of doing things.
Changing your SSH port is the former.
Using NETRJS instead of SSH is the latter.
Hiding things is just a good security practice in general (just understand hackers have access to port scanners) so that kind of "security through obscurity" isn't a bad idea. Using obscure protocols is something quite different, and really shouldn't be in the same category.
Moving sshd to a non-standard port is certainly a good example, if only because it turns down the log noise so that an "unauthorized connection" stands out.
The bad rep is probably due to too many people having used obscurity as an excuse to hide the fact that actual security was, at best, an afterthought for their product.
"if you can reduce risk probabiloty with zero cost..." since when is obscurity zero cost? You've seen how much denuvo and vmprotect cost? Can you name any free free code obfuscator besides proguard that actually works? Can you name one that supports golang or rust?
Security trough obscurity is considered bad because it's not zero-cost, and the investment you put into it might rather go into actual security
Theoretically, yes. But if it gets you off the radar of some malicious attacker who is capable of exploiting you, then the mission is accomplished.
This came up on the Stack Overflow podcast where the Reddit founders were the guests. They mentioned that they stored plaintext passwords initially, which is fundamentally a bad design, but at the same time it helped to block spam. If a user creates a lot of accounts programmatically, they generally use the same password, and are thus much easier to filter. Security via obscurity, if you can do it, can be very, very effective.
It's possible to track similar passwords in a safe way over a short amount of time. Keeping any user's password or any security token in plaintext is just disgusting. Regular people have a hard enough time navigating the digital world without needing to worry about whether their credentials are safe. Anyone who stores passwords in plaintext should not be managing digital services. Period. This is how breaches happen, and it should be a fineable offense.
You could do that without a plain-text password, though with a salt it would be harder (though you could still do it proactively by checking the password when the account is made).
To check the new account's password at creation time against existing salted hashes, you'd have to hash the new password with each existing password's salt. If you are using something like bcrypt or scrypt which is designed to be slow at this, that might take a while if you have a lot of existing accounts.
Maybe a Bloom filter approach? Besides the salted slow hash you store of each password, also put the password in a Bloom filter. Check new passwords against the Bloom filter. You'll get some false positives that way, but maybe that is acceptable.
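A minimal sketch of that Bloom filter idea (the size and hash count are hypothetical; a real deployment would tune them to the expected number of accounts and an acceptable false-positive rate):

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: str):
            # Derive k independent bit positions from salted SHA-256 digests.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: str):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def probably_contains(self, item: str) -> bool:
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    seen = BloomFilter()
    seen.add("hunter2")
    print(seen.probably_contains("hunter2"))    # True
    print(seen.probably_contains("different"))  # False (with high probability)

Even then, a Bloom filter of raw passwords leaks a little information to anyone who obtains it, so you might feed it a slow hash of the password rather than the password itself.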
I'm seriously tempted, if I ever have to implement a password system again, to allow up to 256 characters, just store an unsalted SHA256 hash, and tell people on signup that they should be using a password manager with a long random password if they care about security of their account.
I practice security through obscurity every day. For example, I don't flash large amounts of money when out in public. The notion that security through obscurity isn't security is, and always has been, monumentally stupid. In some sense, sure, cover is better than concealment, but in the real world 100% concealment is better than 100% cover, since in the former case one won't be taking fire at all.
The problem with that (and security through obscurity in general) is that if your opponent attacks at random, they still have a chance of getting you. Having lived in a bad neighborhood, I can tell you that you don't have to flash money, or even look like you have any money, to get into trouble (someone chooses you at random for a gang initiation, a crackhead wants your shoes, or someone is just straight up crazy). It's always better to have as much cover as is reasonable, and beyond that concealment doesn't hurt.
In my opinion, the article misses the point: security by obscurity might be beneficial, but it is by no means as strong as real security. So the problem is that people who take the obscurity road might not care so much about the rest.
Security by obscurity is simply a completely different class, comparable with dollars and cents. So if you care about security would you rather focus on the dollars or on the cents?
A field where obfuscation is very common is commercial video games where they are now up to the point of using a virtual machine that generates an instruction set randomly at compile time to obfuscate some part of the code. These games are still cracked almost on release day.
It makes the barrier to entry MUCH, MUCH higher. First you have to unpack a binary, THEN you have to fixup any custom VM call that it makes. Basically only incredibly specialized people/groups will be able to do this.
Compared to games of the 2000s, the barrier has been raised significantly for hackers.
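As a toy illustration of the randomized-instruction-set idea (a sketch of the core concept only; real game protectors are vastly more involved): generate a fresh opcode numbering at "compile" time, so a disassembler keyed to any fixed encoding learns nothing.

    import random

    OPS = ["PUSH", "ADD", "MUL", "PRINT"]

    def compile_program(source_ops):
        # Fresh, random opcode numbering per build: the "instruction set"
        # differs in every compiled artifact.
        numbering = random.sample(range(256), len(OPS))
        opcode_of = dict(zip(OPS, numbering))
        bytecode = []
        for op, arg in source_ops:
            bytecode += [opcode_of[op], arg if arg is not None else 0]
        return bytecode, {v: k for k, v in opcode_of.items()}

    def run(bytecode, op_name):
        stack, pc = [], 0
        while pc < len(bytecode):
            op, arg = op_name[bytecode[pc]], bytecode[pc + 1]
            pc += 2
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                stack.append(stack.pop() + stack.pop())
            elif op == "MUL":
                stack.append(stack.pop() * stack.pop())
            elif op == "PRINT":
                print(stack.pop())

    program = [("PUSH", 6), ("PUSH", 7), ("MUL", None), ("PRINT", None)]
    bytecode, table = compile_program(program)
    run(bytecode, table)  # prints 42; the opcode bytes differ on every build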
Most PC game sales occur in the first 30 days after release. Protecting even a few days adds significant value to the publisher.
Some protection methods do last a long time. The StarForce 3.0-protected Splinter Cell took 422 days to crack. Not long ago, cracking Denuvo-protected games took around 75 days [1,2]. I'm not aware of what the current best protections are or how long games protected by them take to crack.
Security by obscurity has a bad reputation because it should never be used in place of a proper secure solution where one is possible.
Most security experts will argue for adding layers of defence where the proper solution is not possible.
There are other considerations for obfuscation as well. A risk assessment might consider the skill of the attacker and the resources required (e.g. computational power) in order to break in.
These days you have to assume hackers have read the engineering new hire guide that you wrote up. The SSH port will probably be in there.
A big tech company will have tens of thousands of current and former employees. Those employees may try to break in, and all the easily accessible internal wikis or other common resources are going to end up on some hacking forum somewhere eventually.
I agree. If I run my SSH and my VPN on non-standard ports, the number of probes I get a day goes from hundreds to one or two.
If I change the admin page of a Wordpress corporate site URL to something other than the standard wp-login.php the number of scripts that try to crack it each day goes from a thousand to zero.
It's _very_ effective, along with other precautions for locking these things down.
One point I haven't seen brought up yet is that anywhere from 25% to 35% of data breaches are related to an internal actor. Your obscurity will do nothing in those cases, because the internal actor will actually know about the obscurity. That being said there is a place for obscurity in security, it just has to be traded off with the usability issues.
In many cases it is much better to fake (or spoof) information rather than try to hide it. Browsers come to mind. Instead of trying to hide information about yourself, which would make you unique, just give them false information that is common; that way you blend in with the rest of the people, everyone is content, and nothing true has been revealed about you.
Agree with most people here that security by obscurity is bad "by itself".
For example, changing your public server ssh port from 22, to say, 2942, is a great way to limit the amount of bot autoattempts from trying to log into your server. Having a password-less ssh port 2942 open is clearly bad, but not when combined with all the standard good practice ssh security.
My understanding is that it's important to keep SSH (and other services) on a privileged port. (I think the default is <1024.) Otherwise, unprivileged malware could crash the SSH server and take over the (unprivileged) port. No idea if this actually happens in practice though.
Security by obscurity naturally transforms security into a probability. When you use given hard, opaque rules (e.g. TLS + X, Y, Z) you stop thinking in depth.
Instead when you think about layers of obscurity, you go much deeper affecting the probability at each layer (host, port, etc.)
In reality, at a different conceptual level, things like TLS are also bundles of obscurities.
"Roll your own crypto" may not be entirely bad either.
Suppose that you encrypt your message using "my own crypto". The result is ciphertext that looks like a random bit string. Then encrypt the ciphertext using a standard algorithm such as AES.
An attacker will have difficulty since a successful decryption of AES is hard to recognize as such.
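A sketch of that layering (the inner "own crypto" here is a keyed XOR keystream, purely illustrative and worthless on its own; the outer layer is AES-GCM from the pyca/cryptography package):

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def xor_stream(key: bytes, data: bytes) -> bytes:
        # Toy inner layer: XOR with a SHA-256-derived keystream. Its only
        # job here is to make the plaintext look like random bits.
        stream = bytearray()
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(b ^ k for b, k in zip(data, stream))

    inner_key = os.urandom(32)
    outer_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)

    message = b"attack at dawn"
    outer_ct = AESGCM(outer_key).encrypt(nonce, xor_stream(inner_key, message), None)

    # Decrypt: peel AES-GCM first, then the XOR layer (XOR is its own inverse).
    recovered = xor_stream(inner_key, AESGCM(outer_key).decrypt(nonce, outer_ct, None))
    assert recovered == message

One caveat of my own: with an authenticated mode like GCM, a wrong key fails tag verification, so an attacker can recognize a successful AES decryption regardless of what the plaintext looks like. The "hard to recognize" argument only really applies to unauthenticated modes such as AES-CTR.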
Or you could spend that extra compute on just using a longer AES key.
(Or if you distrust AES itself, make your second layer some other well thought through encryption scheme.)
Security by obscurity is dumb to count as part of your main toolbox of tactics. However, it can be icing on the cake, like moving ports around and turning off ssh root login. Lowering your footprint never hurt. Sometimes you just have to run a little bit faster than the other guy if you're being chased by a bear.
Yes, making security breaches harder for attackers at zero cost is obviously good. But obscurity does not have zero cost if it makes the system less efficient to operate. Having multiple cars in a presidential convoy is inefficient; using non-standard ports adds complexity; obfuscating data makes debugging harder; etc.
One problem with "obscurity" is that, by design, as few people as possible are aware of the detail (otherwise it's not obscure). That's both its strength and its weakness. With far fewer eyeballs on it, it's easy to think you've gotten it right, when in all likelihood you've probably gotten it wrong (somewhere).
Security by Obscurity is great. This is a takeaway from the port knocking conversation here last week. If there's a zero day exploit in sshd, I'd rather it be behind some layer you would have to "know" to get in, rather than sitting open to the world. Why make your target bigger than it needs to be?
By definition, cryptography is security through obscurity, as you map something from a small space into a large space (e.g. multiplying two numbers you know is easy, while finding the two prime factors is hard).
The aphorism is unfortunately too short to be enlightening except to someone who already understands it.
For production use cases in a business environment it's just not as good as the alternatives. Setting up VPNs and blocking SSH access to the internet is just as easy, less overhead, and more secure.
Let's say you're a bank. You implement port knocking as your security measure of choice to keep SSH secure.
I know from your job postings that your developers and IT department work at X location in Y city. I know the IP range of your public facing servers. I go to the closest coffee shops to that office, and drop off some wifi sniffers that record and send me all the communication to that IP range. All it takes is one of your employees doing some work on one of those servers from the coffee shop, and now I know your port knock sequence. Layer defeated.
Or, you use a VPN. I do the same thing. All I see is VPN traffic. Maybe I can identify what type of traffic with DPI (unlikely to have the horsepower on the type of device you'd be leaving unattended at/near coffee shops for extended periods of time), but no real details on where that traffic is ultimately headed, and I have no ingress point given up to me. The VPN can also be used to secure other types of traffic beyond SSH.
There's just not really any reason to use port knocking over a VPN when the difference in overhead and complexity is minimal and the benefits you get with a VPN vs. port knocking are so massive.
> There's just not really any reason to use port knocking over a VPN when the difference in overhead and complexity is minimal and the benefits you get with a VPN vs. port knocking are so massive.
But I would say in addition to a VPN, not instead of.
Imagine the vpn client had it built in to do the knock, before connecting. So the VPN ports are not opened until the knock is performed and only for the source IP that did them.
I mean, you could - it's not going to make things less secure to add port knocking - but I don't know that it makes things significantly more secure. The chances of there being public 0day exploits for a VPN and SSHD at the same time are pretty much zero, especially if you add in additional layers such as 2FA (which I would recommend doing)
If someone is burning multiple private 0day exploits to target you then they are attackers at the level where port knocking is not likely to foil them either.
(And to be clear, I just think that port knocking is a bit silly - not that it's totally ineffective, and it's not what I would call security through obscurity, since it is effectively another auth factor with a simple PIN)
I'm guessing because VPNs and SSH became so ubiquitous? Having a single point of entry, especially when combined with 2FA, is probably good enough for most situations.
It's generally more convoluted than the equivalent alternatives, with fewer features. You need a shared secret (you could use TOTP), or you could use VPN/SSH with key-based auth; both offer authentication of both sides and encryption/confidentiality.
Blocking all ICMP traffic breaks TCP (path MTU discovery, for one), so true "invisibility" isn't that great either.
Port knocking is just another way of doing a password, right? It's pretty much just a PIN code with a slightly obscure method of inputting the digits.
I would imagine that this is open to a man-in-the-middle attack: if this traffic were intercepted, you'd be able to see the port numbers, right?
> if this traffic were intercepted, you'd be able to see the port numbers, right?
Sure but I hope the service you're opening up with the knock is actually secure like ssh.
The idea is just that you can't portscan to find something to attack. It's basically the same reasoning behind using non-standard ports, but takes it a bit further.
The non-standard port is of trivial value, but it's practically zero cost; that's why it's used.
Port knocking doesn't have that benefit: you're establishing another secret that has to be maintained and accessed, but unlike key-based auth or passwords, that secret is insecure in transit and unwieldy to use.
If you want to add another layer and manage another secret, why not just add another layer of the lower-friction and more secure methods we already use to establish secret-based auth?
I'm just saying that port knocking results in another secret you have to manage. It's just adding another locked door. Why use a locked door that's a pain in the ass and insecure in transit?
If you want two layers of secret-based auth, why would you make your second layer one that's objectively less secure and more unwieldy than the other layers?
I seem to recall a story about a remote server that would open a port only after a few unsuccessful connection attempts, in sequence, to a pre-determined set of ports right before.
I've always wanted to set up something like this, but it seems like a pain to remember every time I want to connect...
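The server side is typically a daemon like knockd watching for the pattern; the client side can be as small as this sketch (host, ports, and timing are placeholders):

    import socket
    import time

    def knock(host: str, ports: list, delay: float = 0.3):
        # Fire one (expected-to-fail) TCP connection attempt at each port
        # in sequence; the server watches for the pattern and then opens
        # the real service port for our source IP.
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                s.connect((host, port))  # usually refused or timed out
            except OSError:
                pass
            finally:
                s.close()
            time.sleep(delay)

    knock("example.com", [7000, 8000, 9000])  # then ssh to the now-open port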
Security is not just about preventing attacks, but also about detection and response.
In detection, honeypots are very useful: say, a machine named gitlab.yourcompany.com in your internal network which does nothing but alert the SOC about login attempts. That's pretty much obscurity.
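The tripwire version takes remarkably little code. A sketch (the port and banner are placeholders; real honeypot products do far more, but the principle is just "nothing legitimate should ever talk to this"):

    import socket
    from datetime import datetime

    def honeypot(port: int):
        # Listen on a port no legitimate client should touch; any
        # connection at all is worth an alert to the SOC.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (ip, _) = srv.accept()
            print(f"[{datetime.now()}] ALERT: connection from {ip}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.2\r\n")  # look just real enough
            conn.close()

    honeypot(2222)  # e.g. on the fake internal "gitlab" box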
I mean, crypto is based on security by obscurity if you think about it. It's just REALLY obscure.
You can technically compute the private key for someone's Bitcoin wallet, for example. It's just that you'd hit the heat death of the universe by then.
Totally! Security by obscurity is awesome! Had software running for over 10 years for many users. Never a hack, because the backend is some very obscure framework on the JVM. Just upgrade the JDK and everything still runs fine after over 10 years.
Security by obscurity is dangerous. You can eliminate any obscurity with reverse engineering, spying, etc. Obscurity makes your system prone to catastrophic collapse and loss of security. Security by open design is necessary but even insufficient.
Yes, I agree with this wholeheartedly; I have been using security by obscurity on my servers, home and office, for 30+ years, probably since I learned about it in the Linux Bible or something like that.
I don't think anyone assumes that when people say "security by obscurity" they mean using "only" obscurity; it is a great layer to add in addition to others.
In my home, for instance, the entire back of my house is hidden by 10-foot trees, so when people drive by on the road, they don't see my house. Now, I've got a deadbolt, alarm, cameras, a dog, and a gun to add to my layers, but having those trees there is a nice feature.
Just a tip, since I notice the author included it in their poll: "nmap -p0-65535" can be (almost) abbreviated to "nmap -p-". That excludes port 0 but is otherwise identical.
The article is correct but they miss the true value add of security through obscurity: signaling lower ROI to attackers. Security through obscurity generally forces attackers to perform more actions and do more recon. Every additional action taken increases the risk of detection by defenders, costs the attackers valuable time (meaning lower ROI), and makes the target less appealing relative to other targets. Security through obscurity tactics are absolutely useful tools in a defender's toolbox (in conjunction with other security countermeasures).
I think it depends on what system you are talking about and where it is. If you have an internet-facing server running sshd on port 22, then you are going to get hammered with low-effort automated scans, and changing to a non-standard port can cut down on noise and at least "hide" you from low-effort attackers. But if your server is in a hardened, private subnet, then any attacker that is even in a position to connect to port 22 has already bypassed multiple layers of security and is already invested, so they likely won't be in the least bit deterred by a non-standard port.
>The article is correct but they miss the true value add of security through obscurity: signaling lower ROI to attackers. Security through obscurity generally forces attackers to perform more actions and do more recon.
Exactly. I came here to say much the same thing, except with a non-digital analogy.
I live in a (relatively) small apartment building with five floors and four apartments on each floor.
I live on a floor that isn't the top or the bottom floor. That reduces the likelihood that someone who opportunistically gains access to the building entrance or the roof will attempt to access my apartment.
What's more, unless I'm being specifically targeted (which obviates any sort of obscurity argument, since specific focus is then given to one target rather than to a service exposed by many), it's pretty unlikely that my apartment will be robbed, since there are other, much more accessible apartments than mine.
That's the "security through obscurity" bit, which has a measurable, positive impact on the security of my home and belongings.
However, that doesn't mean locking my door is inappropriate or overkill.
In fact, if I am being specifically targeted, locking my door is likely not sufficient either, as an intruder could bash down my door or drill out the locks to gain entry.
I suppose I could install surveillance cameras focused on my front door (as well as inside my apartment), allowing me to identify intruders after the fact. And I could install safes to hold my valuables as well.
Each of those security precautions has some positive value, and absolutely contributes to the idea of "defense-in-depth".
That said, there is a real trade-off between increased security, cost and usability.
While the relative "obscurity" of my apartment confers some security value, it's not nearly enough to stop someone from trying all the doors in the building, so I lock my door. I don't, however, have safes in my home or surveillance cameras outside the door and in every room, as that (unless I'm being specifically targeted) doesn't add enough value to justify the cost of such measures.
Which brings us to the point of security -- which is to protect assets. However, if the cost expended (in resources and usability) is greater than the value of the asset(s) being protected, it doesn't make sense to do so.
Security through obscurity can (but doesn't always) provide a modicum of value, but isn't a complete solution itself. Used in conjunction with other, reasonable (in the context of cost vs. value being protected) measures, it can be a valuable tool.
Indeed, overly broad denunciations of 'security by obscurity' come up as a point of confusion often on HN, and this post provides a good, coherent summary of a proper response.
A defense mechanism that only partially mitigates an attack vector could be considered 'security through obscurity' if deployed on its own, but that same mechanism could be considered 'defense in depth' if deployed alongside other defense layers as part of a more comprehensive security model.
Passwords and keys are not "obscurity", they are mathematically difficult to break if done correctly. There is no mathematical guarantee for security by obscurity.
In fact, there may be obscurity in the encryption method. You won't find a decryption method for my encryption algorithm on the internet. You have to do cryptanalysis.
My algorithm is a reorganized Caesar cipher with block encryption.
This article could benefit from a definition of "security by obscurity."
Every crypto system is based on obscurity of one kind or another. That private key, password, or token is just an obscure form of information that may yield, eventually, to a brute force attack. Or not. It's really hard to know for sure.
Fun article! I'd add that malware packers are a good example of an obscurity layer that's typically effective in practice.
But food for thought: in the general case you can't reliably predict the efficacy of an obscurity mechanism, so you never know if it's an actual layer of defense or a placebo.
People who say obscurity is bad must be saying "look here, try hacking my site, it's unhackable and secure." Everyone knows it's good; it just makes you look weak to admit obscurity is useful. People are scared to admit how useful it is in security.
If the security-by-obscurity means binaries that you don't have the source code to running on your systems, then not only is such a "security method" not underrated, it's downright dangerous.
> In this post, I will raise my objection against the idea of “Security by obscurity is bad”.
I think this article's fundamental flaw is that it conflates the concepts of obscurity and a secret.
To start with, a definition: a system is secure if an attacker has no reasonable chance of unauthorized access over a relevant period unless they are in possession of necessary secrets.
SSH with public-key authentication is secure by this definition, since the (remote) attacker has no realistic chance of guessing the proper secret key within a human lifetime and there is no better-than-chance way to obtain the secret key. Likewise, a strong, high-entropy password is impractical to guess.
Running on nonstandard ports, however? It doesn't add practical security because guessing is so trivial. The author's twitter reach had a 50/50 split on whether they scanned all ports for pen-testing, so that implies that using a nonstandard port increases the time-to-compromise from either (lifetime of the universe) or (about an hour) to twice that, depending on whether the second (real) layer of security is vulnerable. In neither case does the obscure port provide meaningful protection.
Some activities like port-knocking can add security, but only if the practitioner thinks of the knocking as a secret from the start. That requires:
* Limiting who has knowledge of the secret (i.e.: a port knocking routine known only to you is secret; one distributed in a public client for access to a production service is not),
* Having plans in place to change the secret if it is ever compromised (DeCSS) or found to be flawed, and
* Ideally ensuring that the secret cannot be guessed / confirmed independently of other secrets.
Other suggestions in the article ignore this difference:
* Database encryption requires an attacker to possess two secrets for extraction (internal access to the database plus the key) rather than just one. It's not obscurity.
* Randomizing variable names or obfuscating code is not a secret because an interested attacker can reverse the obfuscation with ordinary human levels of effort. The confidence here is strictly false, since it "secures" against low-effort attackers and not high-effort ones. The "secret" is distributed publicly, so it is no secret at all.
* The convoy example is again a secret; the point is that a would-be attacker does not know which car contains the target and has no reasonable ability to guess with better-than-chance success.
Obscurity goes from marginally effective (or outright ineffective) to counterproductive when its implementation makes it harder for the designer to reason about their own system. Someone who rolls their own totally unique cryptosystem is relying in part on algorithmic obscurity for their security, but in doing so they give up on established (and battle-tested) best practices in favour of their own limited analysis.
Ultimately, the "Swiss cheese" model of security is a poor analogy because a big number for a human is a small number for a computer. To take the convoy analogy again, a would-be attacker is only going to get one shot, but a computer can try billions.
To me, all these slogans around security exist to ensure people really, truly, actually think about things before they go against the grain. Is using obscurity as part of your defence always wrong? No, but equally it often adds a false sense of security. Popularising these easy-to-remember slogans helps change people's defaults. Nowadays, if someone sees an attempt at security by obscurity it (hopefully) rings alarm bells and causes them to interrogate it, to ensure that there are also other security measures in place or that it is otherwise OK. It's the same with "never roll your own crypto".
I find it somewhat interesting that the article uses an example which falls right into another pitfall that "security vs obscurity" is trying to prevent.
> SSH runs in port 64323 and my credentials are utku:123456. What is the likelihood of being compromised?
>
> Now we changed the default port number. Does it help? Firstly, we’ve eliminated the global brute forcers again since they scan only the common ports. ... So, if you switch your port from 22 to 64323, you will eliminate some of them. You will reduce the likelihood and risk.
This is technically correct. However, the author has identified a security concern that he wants to mitigate: brute-force attacks. Now, you could try to reduce that risk by using a different port, which might reduce it by 50%*, or you could fix the issue by deploying fail2ban (or using ssh keys, or VPNs and bastion boxes, etc.), negating that attack vector entirely. There isn't even a usability argument here: making people remember the right port for ssh is less usable than setting up fail2ban. Of course there are tonnes of other attack vectors to consider, but in general, where possible, it's better to "properly" (fsvo.) mitigate those concerns and only rely on obscurity where that isn't possible. If a concern is mitigated then adding obscurity does almost nothing, while likely proving more annoying to the end user (as in the port example above).
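For reference, fail2ban itself is just a config file away, but the mechanism fits in a few lines. A toy sketch of the idea (the log path, regex, threshold, and iptables invocation are illustrative; use the real fail2ban in practice):

    import collections
    import re
    import subprocess

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
    strikes = collections.Counter()

    def handle_line(line: str, threshold: int = 5):
        m = FAILED.search(line)
        if not m:
            return
        ip = m.group(1)
        strikes[ip] += 1
        if strikes[ip] == threshold:
            # Drop all further packets from the offender.
            subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
                           check=True)

    with open("/var/log/auth.log") as log:
        for line in log:
            handle_line(line)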
Now of course that's not to say that you should never use obscurity, but if you do then I think it's entirely reasonable to expect that you're prepared to give a good justification of why it's appropriate. For example, sharing via secret URLs is a good example where it can be easy to justify in some settings, but it equally may not be OK for documents that are really, really sensitive, as it's relatively easy for links to be shared in error with the wrong people.
RE some comments about using obscurity to signal that your deployment would be harder to get into, so that attackers don't bother: I'd genuinely love to know if that is true or not. I wouldn't be surprised if attackers assumed obfuscation meant that the more advanced security measures hadn't been deployed (otherwise why bother with obfuscation?).
* Based on the Twitter poll in TFA, though if you're facing a targeted attack it seems sensible to assume that if port 22 doesn't work they'd try again with other methods
The reality is security by obscurity CAN work, but only if three critical elements are met:
The first is to know when the obscurity has failed
The second is to be able to quickly change the obscured component (e.g. a password)
The third element is the hardest, security by obscurity only really works if you can survive exposure of the obscured data/system
Which leads to the fourth element: security through obscurity IS NOT security through secrecy (so either I'm a liar or bad at counting, leave your vote in the comments below)
Let's start with the 4th element, obscurity vs secrecy. Passwords. Passwords generally only work if they are secret. Some password systems like Kerberos take great pains to ensure passwords remain secret, for example by NOT sending the password itself to a remote system, but by sending proof that the user has the password (grossly simplified, but generally correct, now you understand Kerberos!). Secrecy involves hiding things that if exposed will be a problem that can't be solved, like your password, the formula to Coca-Cola and so on.
Obscurity won't work for things that need to be secret. Obscurity won't work for things that once exposed result in the game ending.
Even when an item can be obscured, it is still important to know when it is no longer obscured, otherwise you now have an element of your security system that has effectively been breached. For example if you are using randomized port numbers to prevent SSH scanners from constantly trying default username/password combinations and someone (like shodan.io) port scans you and publishes your SSH server ports you either need to change that, or not be relying on that obscurity for your security (e.g. I used to change my SSH port #'s just to reduce logging activity and make it easier to filter/read logs for actually malicious activity).
The second element is that once your obscurity becomes known you need to be able to change it, if you can't change your SSH port # (because you don't have a way to tell clients where it is) then you have a security control that cannot be recovered and you lose it. Security elements should always strive for long term survivability because the simple fact is attackers get to try more than once.
The third and final element (because we started counting at 4!) is that your system cannot simply fail because the obscure element was discovered. Using a non-standard port for SSH works if you also use strong passwords or (ideally) key-based login. Obscuring SSH ports and leaving a default admin:1234 login is brittle and, as evidenced by scanners like shodan.io, easily exploited.
I think, honestly, the best use case for "security by obscurity" is to cut down on the noise of logs and casual scanning/scripted hacking, which can be valuable: having less chaff to sort through can both save time and money and give you a better chance of finding the real attacks.
This article misses the point and makes a bunch of arguments that fall apart on anything more than the surface level... much like security through obscurity.
We'll look at the SSH port example.
What does changing the port get you? You no longer get hit by the automated sweeping that hits basically all internet accessible IPs. Cool! So you had root/password or root/apple or whatever, you were going to get owned by the automated scans, good, you're now more secure. But you shouldn't have been using a weak password to begin with, and now there is a very real risk that you think you are more secure than you were previously.
He compares this to animals that have natural camouflage, or the President switching cars in his convoy. But there's no guarantee that you will see an owl in a tree, or be able to determine which car the President is in. But you can scan every port on an IP and find sshd listening with fingerprinting. The cost there is basically zero for someone that wants to attack you. If all I had to do was wait a few seconds longer to make sure I saw an owl or know what car the President is in, then they would not be effective either. You cannot compare situations where there are specific limitations and use them as proof positive for a situation where those limitations don't exist.
And this is important: his recommendation to run sshd on 64323 is also just actively making you LESS secure. Ports under 1024 are privileged on Linux: you must have superuser privileges or otherwise be granted access to bind to them. No such protection exists for 64323. Now, let's say an attacker has a user-level compromise on that server. They start a process that monitors whether sshd ever restarts/crashes/otherwise stops listening on its port, and as soon as it does, they start their own malicious sshd replacement. Now all it takes is someone ignoring the host key mismatch to give away their credentials, which the attacker can then likely use to penetrate further into your environment.
The author doesn't fully understand the knock-on effects of his suggestion here, and as a result one of his security-by-obscurity tips makes you less secure against a focused attacker. Meanwhile, if you REALLY wanted to secure SSH, you would not make it listen on internet-accessible IPs, and would only have it available when you VPN in and access it via a jumphost, using key-based auth with 2FA on top of it.
When you start obfuscating your code, using random variable names, and generally making it harder to read, how much more likely are you to introduce bugs than you would with a clean code base? Are human introduced bugs more likely to be a security risk than variable names being random? Than code being obfuscated?
I don't disagree with encrypting the database, but I also don't consider encryption or password/key protecting something security by obscurity.
Basically: obscuring things helps protect you from low-effort attackers who should not be scary to you to begin with. It does little to nothing to protect you from dedicated attackers, and potentially introduces new risks that allow easier access for dedicated attackers. The sort of security measures you should be implementing to stop dedicated attackers will already eliminate the risk from the low-effort attackers.
NONE of the arguments in this article are new, and they have all been argued against quite extensively in the past.
Security through obscurity is not foolproof, but neither is cryptography in general. It's all about making the problem as difficult as reasonably possible. Someone could guess a private key by incredible luck, and while the probability of that happening is extremely small, it is not zero, and therefore not foolproof.
The author doesn’t understand the phrase “security by obscurity” and doesn’t know why we use that. He took the normally used phrase literally and ran with it.
The phrase is used to suggest developers shouldn't think that obscuring something provides security. We don't say not to obscure stuff. In fact, all the examples in the article are from what's already used in products. So the security community already uses the best available method to secure the given task/stack.
In my opinion the SSH example with a non default port, random username and easy password is a perfect example of a bad kind of security through obscurity: instead of a user friendly and foolproof approach (disabling password authentication and using keys), we introduce multiple layers of obscurity that make life harder for the sysadmin and users, which collapse as soon as someone creates an account on the box without a sufficiently obscure name. When it inevitably fails (either because of the aforementioned reason or because a global scanner has the clever idea of trying some more obscure usernames) everyone looking back on it will wonder why you built this Rube Goldberg machine instead of just using SSH keys.
Changing the RDP port is a slightly better example of actually using security through obscurity as a defensive layer because Microsoft doesn’t give you any good ways to lock down RDP (best practice is of course keeping it behind a VPN or using a Remote Desktop solution with a more modern authentication system), but from a practical point of view I know several companies that were hit with ransomware this year via RDP on a non-standard port. I think they would rate the risk reduction from that approach pretty low.
Finally, symmetric database encryption is not an obscurity measure, as the author himself points out it specifically protects data against an attacker who can query the database but not find the key. Whether the attacker can get the key is a matter of capability not determination or luck.