The long (after)life of some of our old fileserver hardware (utcc.utoronto.ca)
48 points by rbanffy 8 months ago | hide | past | favorite | 22 comments


This is considered old? And someone thinks it doesn't have enough RAM?

I still run a Bulldozer 8150 from 2012 as a full time server. RAM is dirt cheap so it has 64 gigs. This isn't even an old server by my standards, but it's the oldest server where I'm running the same kinds of things I'd usually run on newer hardware (SearXNG, Akkoma, qemu with multiple VMs, et cetera).

It's trivial to max out memory on old systems with a small spend on eBay, and lots of us have had VMs with ancient JVMs to deal with shitty, insecure IPMI that requires Java from the turn of the millennium.

It's funny how old servers, like my AlphaServer DS25, have no problems doing things correctly - yes, the OOB management can be made available via the network, but by default it's accessible via serial. I personally don't want a system that defaults to insecure if the CMOS battery dies and default settings are loaded. But unfortunately, most IPMI is both insecure and defaults to on. So, for these reasons, I'd trust an AlphaServer, but I wouldn't trust most newer machines with IPMI, particularly ones old enough to possibly have a CR2032 that could be nearing end of life.


I went down a shallow rabbit hole (turns out https://www.reddit.com/r/uptimeporn/ exists ... of course) and found this:

https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fa...

(6430 days as of 2 weeks ago)


There used to be a website where people could create an account and report their system uptimes. It even had a client to automate the reporting.


> 6430 days...

Wow wow wow. I had one that nearly reached 10 years of uptime: a dedicated server at OVH. At some point I just kept it to see how long it'd stay operational without any reboot.

Now, kids, don't try this at home: of course it wasn't secure. At some point it got hacked and that's when I cancelled the subscription for that one dedicated server. I still have other servers at OVH.

But yeah: I made it to 3400 days of uptime. Something like that. Don't remember the price of that one server but it wasn't expensive.

Let's say it was an... experiment.


https://utcc.utoronto.ca/~cks/space/blog/sysadmin/HowWeSellS...

Perhaps the most interesting (though sort of irrelevant) part from a linked page on the site is that "cloud" storage is being sold as a one-time cost. A great deal (depending on price) if you ask me!


Pricing and charging for storage inside an organization is always ultimately a non-technical decision that has to balance who pays for it versus the consequences of it not being paid for. This is especially the case within organizations like universities, which have unusual funding and funding patterns (for instance, one time capex is usually much easier than guaranteed ongoing opex). We (the people providing the disk storage) know that there are ongoing costs to doing so, but the non-technical decision has been made to cover those costs in other ways than charging professors on a recurring basis.

(I'm the author of the linked-to entry.)


Oh yes, I recall the fun of that, from many years ago.

One of my favorites remains when we were prototyping our next gen of servers for some compute next to quite a lot of disk. We had a prototype design, but we weren't done testing it, and someone needed a grant spent _now_, so they bought that design.

Unfortunately, that was the Dell R715/R815 family, which, as you may recall, had some... unique performance characteristics. So we didn't go with those for the final model, but had to deal with supporting them thereafter.


I have often wondered if you can come up with a one-time price for storage and guarantee that you'll be able to store the data forever. Disk prices keep dropping, so in theory it should get cheaper and cheaper to store data.


Nothing wrong with using old hardware until it dies (provided you have appropriate redundancy, of course).


I am still running an HP K380 server with the HP-UX 10.20 operating system. It has 250 MHz RISC CPUs and takes 220V power.

This server is likely on the far side of "[nothing] wrong," and we had it turned off and ready to e-waste.

However, the effort to rewrite the code that runs on this machine had some surprising setbacks, and I had to wheel it back in. It will likely be here after I'm gone.


The way you push to migrate (even moving to a QEMU-hppa binary emulation environment) is the following equation:

cost(maintain) is the cost of keeping the system running as it stands right now, with the risk of 100% "it's gone" loss.

cost(recover) is the cost of rebuilding what you have from wreckage, including any monetary loss from SLAs.

cost(port) is the cost of operating plus the cost of ensuring compatibility at some level (or just "rebuilding it for functional equality"), with the inherent risk of Second System Syndrome.

when cost(port) + cost(maintain) < cost(recover) by some factor (say, cost(port)^2 ) then it's time to port the system in-place after documenting its behavior as closely as you can. If you have source, that's great -- it's a UNIX system, you should be able to dredge most of it over and might need to patch in parts here and there.
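The decision rule above can be sketched as a small function. This is a hypothetical illustration of the commenter's inequality, not a real costing tool; all figures below are made-up dollar estimates.

```python
# Rough sketch of the migration decision rule from the comment above.
# Port in place when porting plus continued maintenance is cheaper than
# rebuilding from wreckage, by at least some safety margin.

def should_port(cost_maintain: float, cost_recover: float,
                cost_port: float, margin: float = 1.0) -> bool:
    """Return True when cost(port) + cost(maintain) < cost(recover),
    scaled by a margin factor (hypothetical; the comment suggests a
    much more aggressive factor)."""
    return (cost_port + cost_maintain) * margin < cost_recover

# Hypothetical numbers: keeping the K-box alive costs $5k/yr, a
# from-scratch rebuild after total loss would cost $80k, and porting
# to a QEMU-hppa environment costs $20k.
print(should_port(cost_maintain=5_000,
                  cost_recover=80_000,
                  cost_port=20_000))  # porting wins here
```

The `margin` parameter stands in for the "by some factor" hedge in the comment; how large it should be depends entirely on how much you trust your cost estimates.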


Donate it to a computer museum instead. It’s an interesting machine, from a dead lineage.

Or sell it on eBay. HP/UX boxes tend to be expensive, for reasons I can’t really understand


> HP/UX boxes tend to be expensive, for reasons I can’t really understand

Probably:

> However, the effort to rewrite the code that runs on this machine had some surprising setbacks, and I had to wheel it back in.


You can still emulate them.


That is part of a solution but that emulation still has to run on something. Getting that emulation hardware certified for the application could be prohibitive. Are there drop-in replacements for the hardware that perform equally to the HP boxes?


If there are legal requirements for certification, then you are out of luck, but you can probably convince HPE to do a pinky promise their new boxes will perform as well as the old ones.


I have, of all things, a logic analyzer that runs HP/UX 10.20 (HP 16702B). Of course, it's not on unless I'm actively using it, so I don't have to be concerned about the power consumption, unlike a K-box.


At some point, power efficiency may become an issue.

Let's say your server uses 100W continuously and runs 24/7: that's 876 kWh per year, and at $0.20/kWh, that's $175 at the end of the year. Something like a Raspberry Pi is maybe $100 with shipping and accessories and uses less than a tenth of that power, so the hardware will pay for itself in less than a year.
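The arithmetic in that comment works out as follows. The wattages and prices are the commenter's assumptions, not measured figures.

```python
# Back-of-the-envelope payback calculation from the comment above.
# All inputs are the commenter's assumed figures.

old_watts = 100          # continuous draw of the old server
new_watts = 10           # "less than a tenth", e.g. a Raspberry Pi
price_per_kwh = 0.20     # assumed electricity price in USD
replacement_cost = 100.0 # Pi with shipping and accessories

hours_per_year = 24 * 365
saved_kwh = (old_watts - new_watts) * hours_per_year / 1000  # 788.4 kWh/yr
saved_dollars = saved_kwh * price_per_kwh                    # ~$157.68/yr
payback_years = replacement_cost / saved_dollars             # ~0.63 years

print(f"savings: ${saved_dollars:.2f}/yr, payback: {payback_years:.2f} years")
```

At a lower electricity price, or with an old machine that idles well below 100W, the payback period stretches accordingly, which is why the comparison only bites for always-on hardware.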

Note that the "old hardware" is an enterprise class, 10 year old file server. Hard to replace with a Raspberry Pi, but some old servers could be. Virtualization is another option, a common one for businesses.


I agree with you in general, but at some point the less efficient older hardware costs more to run than something newer and more efficient. It doesn't have to fail to be obsolete. This is a simple finance and accounting exercise.


I just finally decomm'd some supermicro X9 boards that required me to keep a copy of firefox portable and either use Java 7, or configure out some security settings on java 8, but they weren't that hard to maintain/admin.

Your stuff seems like it's at least a gen newer, and probably really isn't that difficult to admin at the BMC level.


8 years and still going on a Dell R430 rack server (8X SSD in RAID 10, 2X Xeon CPU, ...)

One of the drives failed last year, but I used one of the hot spares on standby to put out the fire, as it were.

Amazing longevity, perhaps I've just gotten lucky while avoiding obscene 24/7 cloud instance costs. It will die at some point, much like the HN server storing these very words.


> This particular server is a "bring your own disks" storage server for people who want bulk storage that's more limited in several ways than disk space on our fileservers, but (much) cheaper than what we normally charge

Just curious: is there a commercial version of such a service? Something between cloud and co-location.



