
I have to agree. I put our stuff in a colo 2 years ago and never looked back. Pretty much all servers come with some kind of remote console interface (IPMI), and that's not just terminal redirection; it's a fully self-contained microprocessor and Ethernet port that you can run on a separate subnet and use to control your server even when it's off. I've updated the BIOS and reinstalled OSes, all via IPMI, which is built into the motherboards. Add to that power strips you can also control remotely and you're all set. Our servers are in the Bay Area, I'm in Canada. I have NEVER had to drive/fly to fix anything. Never even had to use remote hands for anything. Sure, some drives died, but standby drives are in place.
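
For anyone curious what that looks like in practice, here's a minimal sketch of driving a BMC over the network with ipmitool; the address and credentials are made-up placeholders, and a real setup would keep them off the command line.

    #!/usr/bin/env python3
    # Minimal sketch: manage a server through its BMC with ipmitool.
    # Assumes ipmitool is installed and the BMC (hypothetical 10.0.50.12)
    # is reachable on the management subnet; user/password are placeholders.
    import subprocess

    BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.50.12", "-U", "admin", "-P", "secret"]

    def ipmi(*args):
        # Run one ipmitool subcommand against the BMC and return its output.
        return subprocess.run(BMC + list(args), capture_output=True,
                              text=True, check=True).stdout

    print(ipmi("chassis", "power", "status"))   # works even when the OS is wedged or off
    ipmi("chassis", "power", "cycle")           # hard power cycle from anywhere
    # Console access for BIOS screens or an OS install goes over serial-over-LAN:
    #   ipmitool -I lanplus -H 10.0.50.12 -U admin -P secret sol activate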

The costs are dirt cheap these days. You can get a full rack, power and a gigabit feed for about $800 in many colos in Texas. We opted for Equinix in San Jose, which is all fancy with work areas, meeting rooms, etc. when you're there, but the funny part is, we're never there!

I do like virtualization for maintenance/flexibility, so we have a few servers acting as hosts and run our own private cloud where we get to decide where and what runs. In other cases, database servers sit on bare metal with SSDs. Best of both worlds.

It's so cheap you can get a second colo in a different part of the country to house a second copy of your backups and some redundant systems, just in case something really bad happens.

Oh yeah, and don't get me started on storage. We store about 100TB of data. How much is that on S3 per month? $12,000/month! A fancy enterprise storage system pays for itself every couple of months of S3 fees.
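
Quick sanity check on that number, assuming a flat rate of about $0.12/GB-month (roughly what the $12k figure implies; real S3 pricing is tiered):

    # Back-of-the-envelope S3 cost for ~100TB at an assumed flat $0.12/GB-month.
    tb = 100
    gb = tb * 1024                      # 102,400 GB
    rate = 0.12                         # $/GB-month, assumed for the estimate
    print(f"${gb * rate:,.0f}/month")   # -> $12,288/month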



> I have NEVER had to drive/fly to fix anything. Never even had to use remote hands for anything. Sure, some drives died, but standby drives are in place.

Consider yourself lucky. We thought the same thing, but when a RAID controller died on us recently we really didn't know what hit us. It didn't just stop working: it started by hanging the server every now and then, then after a day it was slowly corrupting drives, and after another day or two it stopped completely.


I'm a bit conservative when it comes to hardware like RAID controllers. My choice was 3ware. They are by no means the fastest; in fact the performance sucks compared to others. I went to a company that builds storage systems, any kind you want, not locked into any one controller, and I trusted them when they recommended the controller that, in their experience, gets returned/fails the least. Of course everything fails, so it's just a matter of time. We have triple-redundant storage for file backup: the active set, a 5-minute-behind copy that is ready to be swapped in at one click, and long-term. If something goes wrong with the active set, or it slows down, we just flip a switch and all our app servers use the new system, which is at most 5 minutes behind. The old system gets shot in the head and can be diagnosed offline. Shoot first, ask questions later.
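
For illustration only, here's roughly what that "flip a switch" can amount to; the hostnames, config path, and service name below are hypothetical, and the real mechanism could just as easily be a DNS change or a load-balancer update.

    #!/usr/bin/env python3
    # Illustrative failover switch: repoint every app server from the active
    # storage backend to the warm standby (at most ~5 minutes behind).
    # All names below are made up; ssh access to the app servers is assumed.
    import json, subprocess

    ACTIVE  = "nfs://storage-a.internal/export"
    STANDBY = "nfs://storage-b.internal/export"
    CONFIG  = "/etc/myapp/storage.json"
    APP_SERVERS = ["app1.internal", "app2.internal"]

    def flip_to_standby():
        payload = json.dumps({"storage_backend": STANDBY})
        for host in APP_SERVERS:
            print(f"{host}: {ACTIVE} -> {STANDBY}")
            # Push the new backend and bounce the app service on each host.
            subprocess.run(
                ["ssh", host,
                 f"echo '{payload}' | sudo tee {CONFIG} && sudo systemctl restart myapp"],
                check=True)
        # The old active set is now out of rotation and can be diagnosed offline.

    if __name__ == "__main__":
        flip_to_standby()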


This is totally anecdotal, but I've personally had far more problems with bad RAID controllers than with dying hard drives.



