
Wha? We host everything on our own hardware (which is nothing special) and haven't had any downtime this year (yet). And we're just another run-of-the-mill dev shop, very far from the "superstars" who work on these (supposedly extremely stable) platforms.


When I worked as an IT person for a cheap hotel back in the day, I set up a single Compaq PC as the only server (Samba NT DC, file sharing, IP masquerading, web/mail/DNS server, etc.) in a closet. It was not on any battery backup, and there was no modem. It ran for years without ever rebooting, in a city where power outages were normal.

Similarly, I've had EC2 instances run for years without ever being rebooted or going down. One of them's still running today after 5 years.

But none of those services were being used 24/7; if the internet went down for an entire weekend, or a hard drive was a little bit corrupt but kept running programs in memory, I probably would never have noticed. I've also had EC2 instances literally just fall off the map and sort of disappear, and had them manually replaced without notice by AWS, had virtual drives fail and corrupt, and had calls to services fail. And I've had my desktop's power supply get fried by a power surge.

When I didn't have a lot of experience, running systems seemed easy. But as time went on I learned that while it can be easy, a system can also go down if somebody blows on it the wrong way. What we see as reliable may just be chance. The only way to guarantee reliability is to expect that things are going to go down, and to design and build accordingly.

What AWS does is give you all the components for reliability, but you have to do the plumbing yourself. I'll bet the people whose services went down did not properly design for reliability: they were probably running in one region, in one set of AZs, relying on distributed-system operations that can fail, without properly accounting for how to deal with those failures. One product I maintain on AWS did not go down, but another did.
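To make "the plumbing" concrete, here is a minimal sketch (just an illustration, not any particular product's setup; it assumes boto3 is installed and AWS credentials/region are configured) of two small pieces of it: configuring the SDK to retry failed calls instead of assuming every call succeeds, and checking whether instances are actually spread across AZs:

```python
# Minimal sketch, assuming boto3 is installed and AWS credentials/region are
# configured; the instance layout is whatever your account happens to have.
import boto3
from botocore.config import Config

# Retry throttled/failed API calls instead of treating every call as infallible.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
ec2 = boto3.client("ec2", config=retry_config)

# Count instances per availability zone; everything landing in one AZ is the
# kind of single point of failure described above.
azs = {}
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            az = instance["Placement"]["AvailabilityZone"]
            azs[az] = azs.get(az, 0) + 1
print(azs)
```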

Also, the more components a system has, the higher probability there is of failure. Big systems are actually more error prone than small ones.
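As a back-of-the-envelope illustration (the 99.9% figure is just an assumed per-component availability, not a measured one): if every component is required and each fails independently, overall availability drops quickly as the component count grows.

```python
# Assumed per-component availability of 99.9%; all components are required.
per_component = 0.999
for n in (1, 10, 50):
    print(n, "components ->", round(per_component ** n * 100, 2), "% available")
# 1 -> 99.9%, 10 -> ~99.0%, 50 -> ~95.1%
```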


A few questions I have regarding your in-house hardware:

1. How easily can I access your physical servers?
2. What happens if there is a catastrophic failure, for example a local power outage or major flooding?
3. How secure is your server? Are you regularly patching your operating systems?
4. If I want to run a project that requires double the capacity of your current hardware, how long is it going to take to spin that up?


I'm running about 10 servers myself in production. They just do transcoding, so aren't mission-critical, but...

- Regular patching is automated and took about 30 seconds to configure with a script
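Roughly this kind of thing (a minimal sketch, assuming Debian/Ubuntu hosts reachable over SSH with key auth and passwordless sudo; the hostnames are placeholders), which enables unattended security upgrades on each box:

```python
# Minimal sketch: enable unattended upgrades on each host over SSH.
# Assumes Debian/Ubuntu, SSH key auth and passwordless sudo; hostnames are
# placeholders, not real machines.
import subprocess

HOSTS = ["transcode1.example.com", "transcode2.example.com"]

ENABLE_UNATTENDED = (
    "sudo apt-get update && "
    "sudo apt-get install -y unattended-upgrades && "
    "echo 'APT::Periodic::Update-Package-Lists \"1\";' | sudo tee /etc/apt/apt.conf.d/20auto-upgrades && "
    "echo 'APT::Periodic::Unattended-Upgrade \"1\";' | sudo tee -a /etc/apt/apt.conf.d/20auto-upgrades"
)

for host in HOSTS:
    subprocess.run(["ssh", host, ENABLE_UNATTENDED], check=True)
```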

Regarding running your own physical servers, that is a different ballgame, but for all of my projects, if I need to:

- I can pretty easily spin up VPSs / bare-metal servers anywhere (netcup, Linode, Hetzner, etc.) and provision there while I wait for new hardware to come in.

- If you want to double the capacity of your current hardware, you'll have to order it and wait, but it's cheap (vs the major cloud providers) to way over-provision when you're running your own physical hardware, so you can pretty easily keep 2-4x extra capacity and still come out with extra money in your pocket.
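As a concrete example of the "provision elsewhere while the hardware ships" option, here's a rough sketch against Hetzner Cloud's REST API (the token comes from an environment variable; the server type, image and location names are assumptions, so check the current docs before copying them):

```python
# Rough sketch: rent a temporary cloud server via Hetzner Cloud's REST API
# while waiting for physical hardware. Plan/image/location names are
# assumptions; consult the provider's documentation for current values.
import os
import requests

resp = requests.post(
    "https://api.hetzner.cloud/v1/servers",
    headers={"Authorization": f"Bearer {os.environ['HCLOUD_TOKEN']}"},
    json={
        "name": "overflow-worker-1",   # placeholder name
        "server_type": "cx22",         # example plan, may differ
        "image": "ubuntu-24.04",       # example image, may differ
        "location": "nbg1",            # example location, may differ
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("server", {}).get("status"))
```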

I host in the cloud, but I think people vastly overestimate how much it saves 90% of cloud customers.


There are many approaches which don't depend on AWS, and not all of them mean hosting your own physical servers, and they certainly don't mean you don't have an off-site backup policy.

There are well-understood answers to all your questions; they are not too difficult, they just cost money. Some businesses choose not to spend that money, some weigh the cost-benefit and go for AWS, some decide to go for in-house servers, some go for hosted virtual servers, some go for serverless.

Why does everything have to be built one way?


> they just cost money. Some businesses choose not to spend that money, some weigh the cost-benefit and go for AWS, some decide to go for in-house servers, some go for hosted virtual servers, some go for serverless.

I'd also say some choose not to spend the money, but fail to consider the cost of that choice.

For example: doing old-school manual deployments that require herculean off-hours efforts on weekends burns people out and makes it hard to attract new talent. In other words, you've made the decision to spend more money on finding and retaining people. But it's definitely way cheaper to pay for colocating a single Dell server you bought 3 years ago than what you'd spend in the same time on AWS.
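To be clear about what "not manual" looks like, a deployment script can be as small as the sketch below (assuming Fabric is installed, SSH key access and passwordless sudo; the host, paths and service name are placeholders), which is already a world away from weekend heroics:

```python
# Minimal deploy sketch using Fabric (pip install fabric). Host, artifact
# path and service name are placeholders; assumes SSH key auth and
# passwordless sudo on the target.
from fabric import Connection

def deploy(host: str) -> None:
    c = Connection(host)
    c.put("app.tar.gz", "/tmp/app.tar.gz")           # upload the release artifact
    c.run("tar -xzf /tmp/app.tar.gz -C /srv/app")    # unpack into the app dir
    c.run("sudo systemctl restart app.service")      # restart the service

if __name__ == "__main__":
    deploy("app1.example.com")
```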

And if your hardware never dies, paying for the redundancy might seem silly. A lot like paying for fire insurance despite the fact your house has never even burned down.


Your choice of straw man here is interesting.

Of the options I mentioned, you seem to have picked only self-purchased hardware with no redundancy and no backups (your addition) to compare with, and you also threw in manual deployment (why?).

Most businesses are somewhere between a forgotten old Dell server in a closet and a fully hosted, multi-region, auto-scaling setup fully bought into AWS, and that's OK!


I am not saying everyone needs to go for AWS, and there are many good reasons for self-hosting. However, if the only reason for self-hosting is higher uptime than AWS or GCP, it is generally a false economy, and you are likely taking shortcuts that you are not aware of.


This is a false dichotomy (I listed many options, not just self-hosting; not many businesses fully self-host these days), and uptime is provably significantly better away from the big hosted services, which are constantly churning features, config, and hardware, resulting in outages for everyone hosted on them.


Not the person you're replying to, but...

1: Access to the data center requires either an access card plus biometric verification (fingerprint in one DC in my case, retina scan in the other), or an ID-verified appointment. Then, you still need to know where my server is, plus the access code for the rack (or, you need to be an average lockpicker, but beware, there's cameras...).

2: Each data center has dual-provider AC feeds, plus generators, and provides A/B feeds to my rack. I've not had a dual-feed outage in the last 20 years or so.

3: No cloud provider that I'm aware of guarantees server security or does automated patching (for servers, not services). So, keeping your server up-to-date seems equally important for both cloud and non-cloud scenarios?

4: At least two weeks, I guess? I have sufficient VM host capacity to accommodate 30% unplanned growth, but 100% would require new hardware. So: ordering two servers, installing these in two data centers. But if the new project also requires significant bandwidth, getting new Internet connections in might take longer.

Look, I'm definitely not denying that "the cloud" makes it easier to scale fast, but scaling fast is not an overriding concern for most businesses. Cost is, and self-hosting, even with a pretty redundant infrastructure, is still much cheaper than AWS.



