I have a home server with one of their X11 boards; it's almost 8 years old. It's a bad time to upgrade, but once I do, it will have to be one from ASRock Rack, I guess.
I remember an article in Popular Electronics magazine, in the late '60s or early '70s, stating "There will never be a Blue LED". Despite looking, I've not found that issue again.
I generally agree. But then again, we had Master/Slave IDE connectors, floppy drives, _extremely_ shitty CPU sockets (I broke plenty of Socket A / 370 cooler latches), nothing (including keyboards and mice!) was hot-pluggable ...
Regarding your last point: that's just market segmentation. There are plenty of lanes on server CPUs. Remember Linus' rant about Intel's refusal to offer ECC for consumer CPUs?
I did the exact opposite. And by that I mean I physically moved my homelab into their colo earlier this year. It runs like a charm and costs about 500€ per month total.
Sounds like a lot, but I was paying almost the same before: 220€ for power at home, 110€ for a dedicated Hetzner server, and 95€ for a secondary internet connection (so as not to interfere with the main uplink my partner and I use for home office).
Not having to deal with the extra heat, noise, and lost space at home has been worth it as well.
My storage needs were increasing by the day. Electricity is now a small monthly cost. I have more cores and RAM than ever and can easily expand. The main machine now runs with 1 TB of RAM and 15 TB of SSD; the other has more than 384 GB of RAM. I currently use 3 TB of SSD storage and get far more performance than Hetzner's VMs with Ceph SSD disks. I do need redundancy, but that's not something Hetzner was giving me anyway, and if I'm not misremembering, I actually got database corruption on Hetzner that never happened on my own local setup.
I'd have colo'd or used a dedicated server, as either is definitely better than their VMs, but they don't offer those in their US datacenters.
I am pretty happy with my current setup; I have significantly less downtime (a few minutes a month) than when I was on Hetzner, though that was mostly due to my need for more RAM at times.
I also used this as an excuse to get a 56G Mellanox fiber switch, PoE cameras, etc., in full homelab manner, so it's been fun on top of being cheaper. Noise is not a concern; I got a sound-proofed server rack that's pretty nice. It takes up space, but I have kids, so my garage is near full at times anyway :)
I'm hosting my own internal CA using HashiCorp Vault and some Ansible + CI. The root CA is valid for 20 years, the intermediate CA for 10 years, and client certs for three months.
Initial setup is a handful of commands interacting with Vault's CLI. From there, with CI in place, client certs are renewed automatically, and services are restarted/reloaded as well. It works flawlessly.
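For anyone curious, the initial setup looks roughly like this (mount paths, common names, and the domain are illustrative, not my exact values):

    # Root CA, valid 20 years (175200h)
    vault secrets enable -path=pki_root pki
    vault secrets tune -max-lease-ttl=175200h pki_root
    vault write pki_root/root/generate/internal \
        common_name="Homelab Root CA" ttl=175200h

    # Intermediate CA, valid 10 years, signed by the root
    vault secrets enable -path=pki_int pki
    vault secrets tune -max-lease-ttl=87600h pki_int
    vault write -field=csr pki_int/intermediate/generate/internal \
        common_name="Homelab Intermediate CA" > pki_int.csr
    vault write -field=certificate pki_root/root/sign-intermediate \
        csr=@pki_int.csr ttl=87600h > pki_int.crt
    vault write pki_int/intermediate/set-signed certificate=@pki_int.crt

    # Role that issues the three-month (2160h) client certs
    vault write pki_int/roles/homelab \
        allowed_domains="home.example" allow_subdomains=true max_ttl=2160h

After that, issuing a cert is a single "vault write pki_int/issue/homelab common_name=... ttl=2160h" call.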
I should maybe write a (small) blog post explaining how it works.
I am running my own private CA as well, powered by HashiCorp Vault, Ansible, and Jenkins.
The Vault initialization and configuration is more or less manual (just a bunch of commands; I have them in my notes). From there, I am using an Ansible role based on the hashi_vault module [1], which is run by a Jenkins job every night, logging into each target system, renewing certs if needed, and reloading services.
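Per host, the nightly renewal boils down to roughly this (the hostname, the pki_int mount, the homelab role, paths, and the 30-day threshold are illustrative assumptions; the real thing is the same flow expressed as Ansible tasks and handlers):

    HOST=host1.home.example
    # Renew only when the current cert expires within 30 days
    if ! ssh "$HOST" openssl x509 -checkend $((30*24*3600)) -noout \
          -in /etc/ssl/private/host.crt; then
      # Issue a fresh three-month cert from the intermediate
      vault write -format=json pki_int/issue/homelab \
          common_name="$HOST" ttl=2160h > issue.json
      jq -r .data.certificate issue.json \
          | ssh "$HOST" sudo tee /etc/ssl/private/host.crt > /dev/null
      jq -r .data.private_key issue.json \
          | ssh "$HOST" sudo tee /etc/ssl/private/host.key > /dev/null
      ssh "$HOST" sudo systemctl reload nginx
    fi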
It has been working very well for about a year now. Of course, there's a little more technical context needed: my CA certificate needs to be trusted on all systems interacting with it, and my CI needs to be able to log into each target system (SSH keypair + sudo user). This ties into the rest of my infrastructure, which is managed by Terraform and Ansible.
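Getting the CA trusted is the simple part; on Debian-based systems it amounts to (mount name again an assumption):

    # Fetch the root CA cert from Vault and add it to the trust store
    vault read -field=certificate pki_root/cert/ca \
        > /usr/local/share/ca-certificates/homelab-root-ca.crt
    update-ca-certificates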
I might write up a small blog post about this if I find the time.
> Going forward, customers must order the full server system to obtain the motherboard.