Hacker News | pandog's comments

Came here to post this - this was a great (and short!) read to help validate if your idea could be something somebody wants to pay for.


I think a high definition photo taken on a recent phone takes up an awful lot more device memory than a "big number of chats"


Yeah, but Whatsapp chats tend to be full of those... and videos.


(On Android), if you don't care about the (old) WhatsApp media, just delete it from your phone. It's all just loose files in `/storage/android/data/com.whatsapp` (or thereabouts). The text content of the chats will remain available.
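If you want to see what is actually eating the space before deleting anything, here is a rough sketch from a shell on the device. The exact path is an assumption (recent WhatsApp versions tend to use `Android/media` rather than `Android/data`) and varies by device and app version, so adjust it to whatever exists on yours:

```shell
# List per-folder media usage under WhatsApp's media directory (path is a guess;
# check what actually exists on your device first)
du -sh /storage/emulated/0/Android/media/com.whatsapp/WhatsApp/Media/* 2>/dev/null
```

Then delete the folders you don't care about (videos are usually the biggest win).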


WhatsApp automatically resizes them (in the standard settings)

But it still gets big.


Though I would add that, looking back to when I was an IC, my Computer Science degree hadn't given me much, if any, formal training in Software Engineering (especially in a large team and code base), and I mostly learnt that by doing as well.


The industrial/corporate training provided at the university level is generally very limited. Doctors/dentists/medical professionals are often thrust into systems where they are expected to handle billing, patients, paperwork, and legal responsibilities. Experience during a residency is not sufficient if you are essentially starting your own business, even if you are joining an existing office.

Becoming an employer with responsibility for employee supervision, benefits, payroll, rent, etc is ignored in medical school.


fail2ban is a real pet peeve of mine, because anyone security conscious enough to deploy it will likely have already mitigated any actual security risk it could help with, either by using a strong password or public key authentication.

That leaves noise in the logs - which sure, it's nice to reduce, but using an alternative port can help here.

I may sound like a spoilsport - but the fact that there have been a number of security vulnerabilities (https://www.cvedetails.com/vulnerability-list/vendor_id-5567...) in this project makes it worse than security theatre: it actually increases risk whilst not reducing it at all.


Yes. At this point, fail2ban has become almost a shibboleth for people following security checklists as opposed to reasoning about a coherent threat model. This is a perennial topic on HN, and almost always devolves to some appeal to grooming logs, because of all the authentication errors fail2ban is presumably preventing.

Don't use fail2ban. (Don't use passwords, either!)

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
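For anyone wanting to act on the "don't use passwords" part, the sshd_config settings it maps to are roughly the following (option names per recent OpenSSH; on versions before 8.7, `KbdInteractiveAuthentication` was called `ChallengeResponseAuthentication`):

```
# /etc/ssh/sshd_config -- key-only authentication
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password
```

Verify you can still log in from a second open session before reloading sshd, so you don't lock yourself out.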


I am one of the people to whom you refer. I read about fail2ban in a "Linux Server Bible" e-book around 2010 and have used it on all of my servers since, even though I am careful with my keys and use password-less login.


Does fail2ban have authorization to write firewall rules? That's a high-impact vector of attack, should fail2ban have a vulnerability. Also, does fail2ban store credentials that provide that authorization?


Yes, no.


It runs as root.


While I agree fail2ban is the wrong tool to prevent password brute force - better authentication mechanisms should be used instead - it has its uses. For example, it can be used to automatically ban (or alert about) dumb HTTP scanners like gobuster. I am not saying a determined attacker cannot bypass it, but if it saves me some hassle and raises the bar for them, why shouldn't I do it?

More generally, some attacker actions, especially during recon, rely on making many attempts to connect, fetch a URL, resolve an FQDN, etc. These could be detected and automatically responded to, making the attacker's job harder and providing extra visibility to defenders.


You shouldn’t use it because fail2ban itself can be (and has been) attacked. It doesn’t make the attacker’s job meaningfully harder, but it does add complexity to your systems, and that complexity is weakness.


I looked at fail2ban exploits and they are either LPE due to file permissions or command injection in other tools like mailutils.

Citation needed for the claim “has been attacked” if you refer to real attacks in the wild.



Yeah, that’s the command injection in mailutils I mentioned, not in fail2ban itself. Did you see how it’s supposed to be exploited? Did you see a real-life exploitation?

While it’s a nice trick, it’s simply not relevant. And the vulnerability before that seems to be 10 years old. I’d say it’s a decent track record.


I use fail2ban because I take break in attempts personally, especially when it's some script trying default logins one after another. It's insulting.


You have that exactly backwards: if someone is hitting you with a password bruteforce from a single IP address (which is the only threat that fail2ban mitigates) then it is assuredly nothing personal at all.

A personal insult, if you are ever unfortunate enough to receive one, will be much more stealthy and neither fail2ban nor any other magical rock will protect you against it.


You don't see any tigers around, do you?


Usually they use a big pool of IP addresses, but that doesn't make fail2ban completely useless since they do reuse IPs.


> It's insulting

Brute force / credential stuffing attacks against ssh are the mosquitos of the Internet.

Ubiquitous, annoying, and persistent. But nothing personal.


I also take mosquitoes personally, so maybe it's a larger character flaw on my part.


If anything, you're doing them a (minuscule) favor by keeping them from wasting more resources on failed login attempts. If you really hated them, you'd set up a honeypot.


More fun: set up a fail2ban actionban script that, instead of banning the IP, shapes the traffic coming from it to have abysmal bandwidth, so requests/responses take a really long time and they have to time out instead of getting failures.
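A sketch of how such an actionban could look with tc - this is not fail2ban's stock behaviour; `eth0`, the rates, the delay, and `$SOURCE_IP` are all placeholders, and it is untested:

```shell
# One-time setup: HTB root qdisc with a normal class and a crawl-speed class
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1  htb rate 1gbit   # normal traffic
tc class add dev eth0 parent 1: classid 1:66 htb rate 8kbit   # punished traffic
tc qdisc add dev eth0 parent 1:66 handle 66: netem delay 5000ms

# Per-ban action: steer replies to the offending IP into the slow class
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst "$SOURCE_IP" flowid 1:66
```

The unban action would delete the matching filter again.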


This is known as tarpitting, and apparently iptables can do it: https://en.wikipedia.org/wiki/Tarpit_%28networking%29


Neat, didn't know that! I think I've used Traffic Control (tc) for it before, but iptables would be simpler.

Available in `xtables-addons` it seems. After install:

    iptables -A INPUT -p tcp -s $SOURCE_IP -j TARPIT # add IP to tarpit
    iptables -D INPUT -p tcp -s $SOURCE_IP -j TARPIT # remove IP from tarpit


This is hilarious


Restrict your SSH login to a single user, then su to your admin
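If you go that route, the relevant sshd_config lines are roughly the following (the `jump` username here is a placeholder):

```
# /etc/ssh/sshd_config
AllowUsers jump
PermitRootLogin no
```

After logging in as `jump`, `su - youradminuser` as usual.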


I know how to lock machines down. That's not why it bugs me.


Apologies, I guess I just wanted to get my two cents in and didn't see anyone else writing it


No, it's solid advice. Thanks for looking out for others.


Grooming logs from attempts seems like a shibboleth of its own, indicating junior level or “security enthusiast”.

Anyone who manages servers professionally does not read logs anymore and does not care about obvious things like people brute-forcing.

Reading ssh logs on your single VPS is security LARPing. Discussing fail2ban as well :)


As the gaps of inequality in our winner-takes-all societies ever widen, anyone just trying to do something for themselves, and not for a global-scale SV company, is just meaninglessly role playing.


Not really. It's just that reading logs by a human, or even grepping "manually", is super inefficient when you can make a script that will, for instance, send you a notification when someone actually logs into your VPS.

In a world where "login attempts" are basically an all-day reality, reading logs is meaningless. In 2023 no one should be reading logs; you should have alerts on events. In 1995 or so, if someone was trying to brute-force your user password, that was a security event to look at and an IP to block. In 2023 someone brute-forcing is not an event; it is either a wrong configuration, like not using ssh key authentication, or not using tools that filter logs automatically and raise alerts when something is actually going on.


> That leaves noise in the logs - which sure, it's nice to reduce, but using an alternative port can help here.

No, it cannot. As a sysadmin I do not want to get into user training about telling people about alternative ports and tweaking their CLI habits and any scripts that they have.

If you want to further cut down on the log noise get an IPv6 address (and drop IPv4)—good luck to anyone trying to scan a /64 for open ports.


I can confirm this. I swapped one of my cloud VMs to IPv6-only ssh, and after 11 months I've never seen a single IP besides mine attempt to log in. This was using the default port 22.


I read that Shodan was running NTP servers to figure out active IPv6 addresses :)


My lame provider (comcast business) wants $20/mo for ipv6.


You can scan IPv6 because the addresses aren't arbitrary. Blocks have to be purchased, and then ranges within them routed.


Individuals usually get a /64. Scanning a truly random address in that range is not feasible.


You can try, but a lot of ISPs assign a big subnet to each user. Mine for example assigns a /48 to each home user fiber connection.

Even if I make no effort at all to hide things and just select xxxx:xxxx:xxxx:1:: as the subnet (leaving a factor of 65536 options on the table), the devices behind it will randomize the next 64 bits, meaning you'll have to scan 18 quintillion (1.8e19) addresses to find one.


> That leaves noise in the logs - which sure, it's nice to reduce, but using an alternative port can help here.

Shifting services to alternate port numbers will stop very stupid scanners but it does not stop the worst offenders IME. Basically it just means you'll only get the really obnoxious sources that try everything ignoring responses.

> I may sound like a spoilsport - but the fact that there have been a number of security vulnerabilities (https://www.cvedetails.com/vulnerability-list/vendor_id-5567...) in this project, make it worse than security theatre, it actually increases risk whilst not at all reducing it.

Given the age of the project, and that there have been a whopping NINE vulnerabilities found in its lifetime, this is a great take. By this same logic you'd better disable OpenSSH everywhere. In the same timeframe as fail2ban has had reported vulnerabilities, OpenSSH has had at least 60: https://www.cvedetails.com/vulnerability-list/vendor_id-97/p...

"Worse than security theatre" is quite the statement, given they reported and fixed those issues in a timely fashion.

If you apply the principles of defense in depth, using the network layer to deny access to misbehaving remote hosts is an obvious win on a lot of fronts, and hardly qualifies as security theatre any more than using a network firewall is security theatre.


It's not 9 vs 60, it's 9 vs 0 if you don't use it, with no loss in functionality. And GPs point is that it's not defense in depth, the vulnerabilities in Fail2Ban can compromise the security of other layers.


If we limit the use case to a single service, fail2ban is just a log cleaner. What it's detecting is merely the service that you're protecting doing its job properly. Now if you analyze the collected data and do something smarter with it, that's another story.


"Don't use fail2ban because you don't need it if you do XYZ"

I'm not so sure that's a good reason, to be honest. And if you're worried about CVEs, well, you'll be using handwritten, hand-delivered notes before long. Keep your systems patched, keep them tidy, and none of this is likely to affect you, fail2ban or not.


To put it another way - there is no security risk that fail2ban helps with that can't be resolved in another, better, more robust and less risky way.


But it also helps in reducing the load on your servers when, e.g., instead of 300+ login attempts per minute on your mail ports, you get 20 because the IP gets banned for a day after 2 failures. Or, instead of nginx spending 90% of its time sending out 404s for the various PHP and MySQL holes I do not have installed, it can spend 10% of its time instead.

Particularly on my small server, fail2ban is the difference between "usable" and "on the edge of falling over".


Parent says there are more robust solutions to these and there are. Rate limiting is one that has been in use forever for example.
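For HTTP specifically, nginx's limit_req module is one such built-in option; a minimal sketch, where the zone name and rates are illustrative:

```
# In the http {} block: one zone keyed on client IP, ~10 requests/second each
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        # allow short bursts; excess requests get rejected immediately
        limit_req zone=perip burst=20 nodelay;
    }
}
```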


Yes, I'm rate-limiting by using fail2ban to drop traffic that I don't want.


If you’re a hobbyist sysadmin setting up a personal VPS, then the security risk is your own competence in correctly configuring things the better, more robust, less risky way. But you can’t replace yourself with a more competent sysadmin in this scenario, so fail2ban helps to Swiss-cheese-model this edge case.


Excuse me, if fail2ban is frowned upon, what is the alternative to block crawlers that try to find WordPress or PHP endpoints on my website, two pieces of software that I don't have installed?


The idea is you don't have to block those since there is no attack surface.

I look at the imap login attempts on my server sometimes. The passwords they try are usually pathetic. Nothing close to the 15+ character actual passwords we have in use.


So the idea is I shouldn't need an alarm system in my house because all my valuables are kept at a safe that can't be opened by anyone but me?

I disagree with this; 404 queries still use resources, and someone trying URLs in a matter of seconds should be blocked nonetheless.


Saying anyone who makes mistakes is just incompetent is really just a “no true Scotsman” argument.

Everyone makes mistakes. That’s the whole point of the Swiss cheese model and of layers of security in general.


> […] that can't be resolved in another, better, more robust and less risky way.

Only if you can get business/users/management buy-in or approval for implementing those ways and changing workflows.


I'm rather convinced that people reaching for fail2ban actually want rate-limiting.


Often they do; however, configuring it for different applications may be a bigger effort than doing so via fail2ban with minimal log-parser tweaking.


This is true, but on an active server legitimate users getting blocked far outweighs the convenience of having cleaner logs, in my experience.

(Blocks always have to do with saved passwords being used from a nonwhitelisted IP)


Yup. I see many resources for self-hosting recommend fail2ban for e.g. SSH. But I always disable password-based SSH logins on all of my computers. The one niche use case I can see for fail2ban is possibly reducing the amount of hits to /wp-login.php and /cgi-bin in your web server (or reverse proxy's) access logs.


Doesn’t it help to mitigate DoS type attacks by reducing the amount of CPU that a bad actor can burn?


If someone is performing a denial of service attack from one IP address, then this will help.

To tptacek's point, you've got to ask yourself: is a denial of service attack in your threat model?

The reality is most folk set up fail2ban after seeing auth failures in their logs, not service degradation.

If you're considering a denial of service attack in your threat model, then I'd probably also consider a DDoS attack and there are likely more effective solutions here (a firewall or CDN).

And don't forget you're using some of those precious CPU cycles to parse the auth logs, with python no less :-)


>And don't forget you're using some of those precious CPU cycles to parse the auth logs, with python no less :-)

You can ship the logs somewhere else, do the fail2ban analysis there, and perform the block action in another place up the stack.


f2b can also do an (r)whois lookup and ban netblocks.


You can do it with ufw limit too
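For reference, that is a one-liner; per ufw's documentation, `limit` denies an address that has attempted six or more connections in the last 30 seconds:

```shell
sudo ufw limit ssh/tcp   # allow SSH but rate-limit repeated connection attempts
```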


You can also literally have anything pipe rules into it. Want failed WordPress auth attempts to result in fail2ban-enforced bans? You can do that. Want cheap rate limiting? You can do that too
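For example, a hypothetical filter/jail pair for WordPress login attempts - the filter name, regex, and log path here are illustrative and would need adapting to your actual log format:

```
# /etc/fail2ban/filter.d/wordpress-auth.conf
[Definition]
failregex = ^<HOST> .* "POST /wp-login\.php

# /etc/fail2ban/jail.d/wordpress-auth.conf
[wordpress-auth]
enabled  = true
port     = http,https
filter   = wordpress-auth
logpath  = /var/log/nginx/access.log
maxretry = 5
bantime  = 3600
```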


This thread seems pretty full of people dismissing the project based on the idea that it only protects against ssh credential stuffing, and ignoring the other 99.5% of what F2B does.


Absolutely agree. Fail2ban, sooner or later, bans you from your own services because something in the configuration went wrong.

It does not protect against anything serious: you must have proper credentials/MFA or certificates anyway, and then bots can check as much as they want.

There is no protection against DoS either.

And I agree about moving the port - I only see tiny activity in my logs coming from bots since my ssh port moved away. Obviously 443 is there to stay (this is a public service), so I will get whatever comes.


One nice thing I'll say about fail2ban is that it can fire off reports with decent logs to the networks responsible alerting them to compromised systems and bad actors.


I agree that almost all use cases of fail2ban are little more than feel-good exercises.

Failed login attempts (the noise) are not where bad things happen. What we should be concerned with is if the attempt succeeds but is not from a legitimate user. fail2ban is no help there.

Having said that it might be a decent way to collect IPs. At one point I was distributing the collected IPs from VMs and blocking them for the whole network. fail2ban does provide mechanisms to do this.
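A sketch of what that distribution step can look like with ipset - the set name and list file are placeholders, it needs root, and it is untested here:

```shell
# Create (or reuse) a shared set and point a single DROP rule at it
ipset create f2b-shared hash:ip -exist
iptables -C INPUT -m set --match-set f2b-shared src -j DROP 2>/dev/null \
  || iptables -I INPUT -m set --match-set f2b-shared src -j DROP

# Load the ban list aggregated from the VMs
while read -r ip; do ipset add f2b-shared "$ip" -exist; done < banned-ips.txt
```

Updating one ipset is much cheaper than appending thousands of individual iptables rules.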


[flagged]


You can point out you think someone is wrong without personal attacks. That's being an adult.


There are a bunch of projects from Tor to aid in circumvention of the great firewall of China: https://support.torproject.org/censorship/connecting-from-ch...


Don't disagree - but if I have a limited amount of resources to harden my Drupal server, it might be best to start looking at hardening around the most commonly exploited Drupal vulnerabilities.

Having said that, searching Drupal on the CISA known exploited vulnerabilities list shows a number of remote code execution vulnerabilities that this would help mitigate: https://www.cisa.gov/known-exploited-vulnerabilities-catalog


Indeed! As an example, SA-CORE-2020-013 can be mitigated with Wasm. And that one is classified as Critical.


Implementing what you describe sounds to me way more "clever" and less robust than the canary page approach described above.

Specifically - I wouldn't fancy writing the "consistently anticipates their adversaries sneaking behind a wall" heuristic you describe but the earlier post describes the API that already exposes the "has read canary page" functionality.


These services help some issues but don't solve all of them, just a few off the top of my head:

Legal differences can be significant - for example, in France it may be difficult legally to ask an employee to put in more than their contracted hours compared to another country where this could be very normal.

A recent example in the news is the Twitter layoffs - in the EU they may have enacted layoffs that aren't legal there but are perfectly legal in the US.

Taxes in different countries can mean there's a significant difference in the gap between the amount an employer pays and the amount the employee receives per country. Sometimes it's negligible enough for the employer to foot the bill; sometimes it's large enough that they may need to pass on the difference. This can get even more complicated when share options come into play.

If you are responsible for this, it all really starts to add up. Ultimately, the more countries you employ in, the more cognitive overhead, which can impact an organisation's agility (or require it to take more risks).


I agree with this, but I'd like to point out that the Twitter layoffs were likely not legal in the US either!

The difference is that Twitter is likely to be bankrupt by the time any American lawsuits get resolved.


Fair point!


Presumably in the short run it makes Google money - a great deal of those blocked ads are AdSense ads and Google gets cash for every click.

Though that obviously isn't sustainable when advertisers realise what's going on.


> but they don't want to use Docker, at least not immediately, because it requires to learn a few Docker commands

That's not what the blog post says. They don't want to use docker because it does a lot of things. They use rkt because it does fewer things. It's not about learning commands, it's about complexity of software. They selected rkt because it does less stuff, not because it's easier to learn.

