Ask HN: Considering colocated hosting over cloud, what should I know?
48 points by bluehatbrit on April 18, 2021 | 30 comments
I'm a 20-something software engineer building primarily web apps, APIs, etc. Most of my career has been around cloud-based hosting on AWS and friends. I run quite a few side projects and like to have space to experiment and try out new technologies and ideas.

For the past few years I've been doing this across AWS and DigitalOcean, but I'm starting to think I can probably get more bang for my buck by colocating a rack server and spinning up a few VMs. The up-front cost of a used rack server doesn't seem too bad, and the monthly colocation costs are then much better than what you can get for the same price from a cloud provider.

I'm pretty happy to spend the extra time and energy on the management side since it's mostly side projects and experimentation that I'll be doing with it.

Before I take the plunge and give it a go, I'm wondering if there are any gotchas I should know about that aren't immediately obvious to someone who hasn't done this before? I'll probably be looking to colocate a single 1U or 2U server somewhere in the UK, if that makes any difference.




Don't do colo; rent a dedicated server instead. It's almost the same thing but much cheaper in practice, and it won't mean having to travel to a datacenter during work hours. If you absolutely want to manage your own hardware, put it in a cabinet in your house instead.


Exactly my thinking as well. Many dedicated server providers allow you to connect a virtual console to the servers, so you really have a lot of ways to fix e.g. a broken OS install.


I agree with the other posters... Start with bare metal dedicated.

Check out Web Hosting Talk: https://www.webhostingtalk.com/forumdisplay.php?f=36

For some deals, pick something with low latency to you... and start there.

I’ve rented hundreds of servers over the years and only done colo a handful of times. It’s a fine experience to have, but learning why you need a KVM/IPMI, and learning the ropes of hardware failures, is going to be easier starting with a dedicated server.


I used to lease 4 42U cabinets at a Tier-1 datacenter in Phoenix. Datacenters are cold, loud, lonely places. If you're just looking to host a few small VMs, it doesn't make sense to go through the trouble of racking up a machine. When the machine fails, and it will, absent a remote 'lights out' (IPMI) system, you will either have to pay a 'remote hands' fee or schlep down to the datacenter to figure out why. :(


One shouldn't even consider colo without IPMI or an equivalent.

And I wouldn't do it unless I had enough redundancy that I could let failed machines (or disks) sit unused for a few months, then get them all fixed at once.
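
For context, this is roughly what IPMI buys you remotely, using the standard ipmitool CLI (the host and credentials here are placeholders):

    # check the power state of the remote chassis
    ipmitool -I lanplus -H ipmi.example.net -U admin -P secret chassis power status
    # force a power cycle when the box is wedged
    ipmitool -I lanplus -H ipmi.example.net -U admin -P secret chassis power cycle
    # attach to the machine's serial console over the network (Serial-over-LAN)
    ipmitool -I lanplus -H ipmi.example.net -U admin -P secret sol activate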


A good colo provider can hook your machine up to a masterswitch (a remotely switchable PDU) and a serial console, and put a PXE server on the LAN (or just stick an Ubuntu live CD in the drive), so you can solve many problems on your own without relying on IPMI or asking remote hands for help.


You'll still need a plan for the things you can't do over a virtual console, IPMI, or PXE: replacing failed hard drives, power supplies, DIMMs, etc.


If it's just for experimentation, I'd say get an Intel NUC or similar system. My homelab ( https://www.reddit.com/r/homelab ) is an i7 with 64GB of RAM and a 2TB NVMe drive connected to my home fibre Internet. It runs multiple VMs, and some of those VMs run dozens of Docker containers. Most of it is testing, but I also have Nextcloud and other "production" systems on it.

Total cost in Australia was about $1200 for hardware, and it will last me 3-5 years depending on the availability of RAM upgrades in the future. My last homelab lasted 8 years on 32GB, but then I started playing with things like Elasticsearch in Docker :D


How big is the performance “penalty” when running Docker in a VM?


For my use case I don't notice any impact, but I'm not driving it that hard. My Nextcloud has maybe 2TB of data and 6 users; everything else is basically one user and test data.

The only reason I run it in a VM is so when my hardware dies it is a lift and shift to get it all working on another box. (Had to do this a month ago and it was seamless).

edit

Oh, and all my VMs mount their main storage over NFS from the host, which allows me to do some HA trickery for testing and to back up only one system. It's all internal networking, so no actual network traffic is generated over my switch.
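
For what it's worth, a minimal sketch of that host-to-guest NFS arrangement (the paths and the libvirt default bridge subnet are assumptions):

    # /etc/exports on the host: export the VM data directory to the internal bridge
    /srv/vmdata 192.168.122.0/24(rw,sync,no_subtree_check)

    # /etc/fstab entry on each guest: mount the host's export at boot
    192.168.122.1:/srv/vmdata  /data  nfs  defaults,_netdev  0  0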


You can get a bare metal server for <£50/month with OVH in the UK or slightly better value for money with Hetzner in Germany. Colocation is likely more expensive all things considered.


The only possible answer is: "It depends".

I've spent more time than I can account for on the command line of colocated machines. It's a great learning experience, but you're really on your own. Nothing teaches you that as much as exposure to a real server.

I'll never forget the experience of installing Asterisk via apt-get back then just because it seemed fun, then forgetting about it, and a year later getting a call from the abuse department because my server was doing weird things.

Logging in, seeing strange processes, and not knowing any better than to nuke it from orbit, losing two of my customers' websites without a recent backup.

I wouldn't take a vanilla colocated machine these days over any solid cloud provider, but then again, everything I've learned about WHY I wouldn't do that, and about the internals of Linux, came from tinkering.

Maybe it's because I'm spoiled by magically working load balancers, reliable DNS, and live migration of failed machines.

Maybe it's because I just feel too old to call people about failed hard disks, or because I'm on call for a 40-million-user webpage.

So sure, if you don't have much to lose, or your load is _extremely_ compute- and traffic-heavy, go ahead; it's a great way to get started.

If your goal is to build the next Facebook, though, do yourself a favor and start on a reliable cloud provider. The massive difference in pricing exists for _something_: mainly so you can stop worrying and work on your core product instead. Which is good: opportunity cost is real.


In your opinion what's a "reliable" cloud provider (AWS excluded)?


1. You don't want colocation; you want to rent. Seriously, there's no reason to colocate: it's more expensive and gives you no advantages unless you have a very specific setup. It's even worse when a hardware component dies.

2. Do some research into which provider gives you what you need for the best price. I tested many and finally stayed with Hetzner; your mileage may vary.

3. If you do anything serious, think about backups and failover first.

4. Don't treat RAID as backup. Multiple drives can fail at once, and it really does happen.

5. Be prepared for a drive failure. It doesn't happen often, but when it does, you'd better start diagnostics and rebuild the array straight away.

6. Think about your failover strategy: will you be fine if your machine burns in a datacenter fire? If not, rent at least two machines in different locations. Making sure they're in sync is your duty and an interesting challenge in itself.

7. How are you going to send e-mail? If not via a smarthost (an external service from Amazon, Google, etc.), you need to configure your server and DNS properly (see the example records after this list), add specific quirks for Google and Microsoft, and be patient: building reputation takes time.

8. Do you need ECC? If you don't, you can use desktop CPUs with much better bang for the buck. [0]
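
For point 7, the DNS side looks roughly like the records below (domain, selector, IP, and key are placeholders); you'll also want a matching PTR record for your mail server's IP:

    example.com.                  TXT  "v=spf1 ip4:203.0.113.10 -all"
    mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<your public key>"
    _dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"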

Most other quirks are related to your use cases, e.g. if you plan to do streaming, there are other factors at play that might influence your choice.

[0] https://jan.rychter.com/enblog/cloud-server-cpu-performance-...


Of course it depends; but an option that has recently become available in quite a few areas is getting a "Business Internet" account rather than the typical consumer Internet account for your home. The main differences are: 1) as a business you'll receive symmetric service, meaning the same high-speed upload as your existing high-speed download; 2) because "you're a business," you're expected/allowed to run servers; and 3) the Business Internet accounts I've seen come with an SSL cert. I recently moved from Los Angeles to Denver, and in both areas such "Business Internet" accounts are available with telephone bundles at about the same price as consumer Internet + streaming media services. And you can keep your same streaming media services; they don't care whether your Internet is consumer or not.

Once you have the Business Internet, run a Pi cluster, get a NUC on the faster side and load it with VMs, serve your resume from a static site off your old not-used-anymore PC, do whatever you want.


This is my setup. I have a midsize Dell server in the basement and Business Internet. The internet package is symmetric speed with a static IP. The price is about $10 more than Consumer Internet.

I have peace of mind and a consistent solution to hardware issues: I am the service provider.


Do not colocate. It is expensive. Just rent a dedicated server from Hetzner. There are others, but Hetzner seems to have the best bang for the buck.

If you want to play with the hardware, it's best to self-host, assuming your Internet pipe is good enough for whatever tasks you envision. I have a fat pipe, so I self-host, but I also keep a standby and some other dedicated servers at Hetzner.


Before purchasing and installing your own server, I would try out a bare metal server from a company like OVH for a couple of months. You'll get a better sense of the additional setup steps necessary (as compared with a VPS) without buying hardware or committing to a colocation contract.

I've learned a ton (and saved a bunch of money) transitioning from EC2, RDS, and Elastic Cloud to a single OVH dedicated server running Proxmox to host multiple VMs.


You need to pay for rack space, you need to pay for power/cooling, and you need to pay for transit.

Colocation in the UK tends to be pretty expensive. Unless you're able to get it for free / at a significant discount on mates rates, it's generally unviable.

If you've reached the point where it is economically viable for you (considering the up-front cost of the hardware + ongoing colo costs), you now need to factor in the ongoing support costs of the hardware. You may get lucky and have nothing break (happened to me twice over ~8 years), or you may get unlucky and lose PSUs, DIMMs, disks, fans, controllers, etc. The severity of this depends on your particular hardware configuration. When things break, you either need spares on site, or you need to take/ship spares to site. You then need to either get remote hands to fix it for you, or fix it yourself.

Honestly, you're almost always better off just using one of the cheaper cloud providers and/or renting bare metal than trying to do this.

However, if you've determined that you're in that sweet spot where it does actually make more sense to colo a server (as I did for many years), then go for it, and have fun!

Like a lot of other commenters have noted, make sure you have an IPMI/iLO/DRAC interface to your server. You probably want that sitting behind a firewall rather than directly exposed to the internet (so now you also need a real firewall to protect your IPMI interface), unless you want your server pwned in no time.
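
As a rough sketch, the lockdown can be as simple as a couple of iptables rules on that firewall (the addresses here are placeholders; a management VPN is the usual way in):

    # allow IPMI access only from the management VPN subnet
    iptables -A FORWARD -s 10.8.0.0/24 -d 192.0.2.10 -j ACCEPT
    # drop everything else destined for the IPMI interface
    iptables -A FORWARD -d 192.0.2.10 -j DROP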

If you're thinking you can't do it at home due to space/noise constraints, look at smaller units like NUCs, laptops, or other SFF PCs that can be used as virtualisation devices. You won't get as much bang for your buck as you would from a used rackmount server, but the administrative overhead is a lot lower than dealing with DCs.


I’m not the right person to estimate cost savings, but in my experience these choices go in a cycle:

1. Startups use AWS/DigitalOcean because it’s so easy and $5-10/month isn’t much

2. Startup or consultancy with traction: either you have too many projects or they start getting real traffic, so instance sizes and bandwidth/storage costs grow

3. Small company: You realize self-hosting is probably cheaper, and the machines are way faster self-hosted

4. Medium-size company: You self-host and seem to be saving money, but eventually you scale to the point of needing a team to support the infrastructure itself

5. Large company/enterprise: You go back to AWS because the costs are high, but that is still preferable to having a bad version of AWS written ad hoc internally :)

For you it probably doesn’t matter, as you’re probably at stages 1-3 above. So it mostly depends on whether you like managing infrastructure yourself. The cost savings are likely there, but only if it’s easy and fun for you to support it.


There's more:

6. You have so much infrastructure that it's a large cost factor and you want more control, so you create your own in-house cloud (or you compete with the cloud providers and can't trust using them)

7. You have so much infrastructure that you offer it as a service

I worked at a company that had a mix of 4 and 5. The advantage of 5 was the dynamic scaling and the integration with other offerings (e.g. AWS DynamoDB, SQS, Lambda, etc.). Provisioning and monitoring of hosts was about the same, with bringing up new hosts slightly easier on AWS. Provisioning large additional capacity needed more planning, as it wasn't delivered same-day, especially for the largest configurations. Raw performance was something else, though, with hardware RAID NVMe setups with battery-backed caches (important so you can skip write-through sync and still flush after power loss) configured to spec.


You don't gain much by being in a colo over hosting at home for side projects.

A Dell R240 or R340 would be fine and on the smaller side, or if you're really pressed for space look at some of the Supermicro machines.

You'll gain local network speeds by keeping it in your home, and save a bunch of money.


These guys provide a pretty good service. They have a presence in London (Brick Lane) and the Midlands, and they also do dedicated servers: https://www.veloxserv.co.uk/

Remember that colo means that if you have a hardware issue, you have to go there, so pick one close to you. Otherwise, there are "remote hands" you can hire by the hour.

Also, if you don't have a specific hardware need, it's likely that a dedicated server would be enough. If there's a problem, you can always swap it for another one in a very short time. If it's your own hardware, you're responsible for everything, and that might not work out in your favour.


It’s almost an Android vs iPhone question. I would say you should double or triple the amount of time you’re expecting to spend administering the system environment. I have been running a dedicated server for similar purposes for years now, and while the fun and independence are a big pro, the OS upgrades, package updates, networking issues, and sole responsibility for server security can be a big con, especially when you want an environment that “just works” to explore a random project idea. Lately, due to support and maintenance issues, I’ve been considering taking the plunge in the other direction and migrating fully to the cloud.


I have a £200 6th-gen i5 with SSDs running at home (HP ProDesk). It costs about £2 a month to run. I spent the last decade building, deploying, and managing colo servers; if you crack IPMI, you won't be missing much on the learning front. I point all my test URLs to this box using DuckDNS CNAMEs. In colo you'll be paying for power, an expense you don't need.


One drawback of having your own hardware that I haven't seen mentioned yet is that it sucks when something breaks: you have to go pick up your server, figure out what's broken, order new parts, install them, and then take it back to the colo.


Leaseweb is very affordable, and I have had good luck with them:

https://www.leaseweb.com/dedicated-servers#US


Last time I checked, a lot of torrenters, spammers, and pirates use Leaseweb, so the IPs may be blacklisted for certain things like e-mail.


Unless the time investment you're OK with includes time spent doing basic sysadmin and other such upkeep, taking time away from your actual objectives, stick with cloud.


Don’t.



