Hacker News

This is the modern Beowulf cluster.



But can it run Crysis?


It can run Doom in a .ini file


I honestly don't understand the meme with RPi clusters. For a little more money than 4 RPi 5's, you can find on eBay a 1U Dell server with a 32 core Epyc CPU and 64 GB memory. This gives you at least an order of magnitude more performance.

If people want to talk about Beowulf clusters in their homelab, they should at least be running compute nodes with a shoestring-budget FDR Infiniband network, running Slurm+Lustre or k8s+OpenStack+Ceph or some other goodness. Spare me this doesn't-even-scale-linearly-to-four-slowass-nodes BS.


> For a little more money than 4 RPi 5's, you can find on eBay a 1U Dell server with a 32 core Epyc CPU and 64 GB memory. This gives you at least an order of magnitude more performance.

You could also get one or two Ryzen mini PCs with similar specs for that price. Which might be a good idea, if you want to leave O(N) of them running on your desk or around the house without spending much on electricity or cooling. (Also, IMHO, the advantages of having an Epyc really only become apparent when you're tossing around multiple 10Gbit NICs, 16+ NVMe disks, etc. and so saturating all the PCIe lanes.)


Depends what you're trying to do, of course. But if your goal is to scale solving a single problem across many computers, you need an interconnect that can keep up with your CPUs and RAM. Which means preferably > 40 Gbps, and then you need those PCIe lanes. 100 Gbps is getting close to affordable these days; in fact dirt cheap if you're willing to mess with weird stuff like OmniPath.
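To put rough numbers on the lane claim (these per-lane figures are my own back-of-envelope assumptions, not from the comment above): about 7.88 usable Gbps per PCIe 3.0 lane and 15.75 per PCIe 4.0 lane after encoding overhead.

```python
import math

# Approximate usable bandwidth per PCIe lane after 128b/130b encoding.
# These figures are illustrative assumptions, not from the thread.
PCIE_LANE_GBPS = {3: 7.88, 4: 15.75}

def lanes_needed(nic_gbps: float, gen: int) -> int:
    """Minimum number of PCIe lanes of a given generation to feed a NIC."""
    return math.ceil(nic_gbps / PCIE_LANE_GBPS[gen])

print(lanes_needed(100, 3))  # 13 -> in practice a full x16 slot on PCIe 3.0
print(lanes_needed(100, 4))  # 7  -> an x8 slot is enough on PCIe 4.0
```

So a single 100 Gbps NIC already eats most of a consumer CPU's lane budget on PCIe 3.0, which is where the Epyc's abundant lanes start to matter.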


Same. Also I don't get using RPis for hosting all kinds of services at home - there are hundreds of mini PCs on eBay cheaper than an RPi, with more power, more ports, where you can put in a normal SSD and RAM, with a sturdy factory-made enclosure... To me the RPi seems a weird choice unless you are tinkering with the GPIO ports.


You also get a normal motherboard firmware and normal PCI with that.

I don't know if my complaint applies to RPi, or just other SBCs: the last time I got excited about an SBC, it turned out it boots unconditionally from SD card if one is inserted. IMO that's completely unacceptable for an "embedded" board that is supposed to be tucked away.


The combined TDP of four Pis is still smaller than that of a large server, which is probably the whole point of such an experiment?


The combined TDP of four Raspberry Pis is likely less than what the fans of that kind of server pull from the power outlet.


The noise from a proper 1U server will be intolerably loud in a small residence, for a homelab-type setup. If you have a place to put it where the noise won't be a problem, sure... Acoustics are not a consideration at all in the design of 1U and 2U servers.


You can buy it but you can't run it, unless you're fairly wealthy.

In my country (Italy) a basic colocation service is like 80 euros/month + VAT, and that only includes 100Wh of power and a 100mbps connection. +100wh/month upgrades are like +100 euros.

I looked up the kind of servers and CPUs you're talking about and the CPU alone can pull something like 180W/h, without accounting for fans, disks and other components (like GPUs, which are power hungry).

Yeah you could run it at home in theory, but you'll end up paying power at consumer price rather than datacenter pricing (and if you live in a flat, that's going to be a problem).

Unless you're really wealthy and have your own home with sufficient power[1] delivery and cooling.

[1] not sure where you live, but here most residential power connections are below 3 KWh.

Otherwise, if you can point me at some datacenter that will let me run a normal server like the ones you're pointing at for like 100-150 euros/month, please DO let me know: I'll rush there first thing next business day and throw money at them.


> You can buy it but you can't run it, unless you're fairly wealthy.

Why do I need a colocation service to put a used 1U server from eBay in my house? I'd just plug it in, much like any other PC tower you might run at home.

> Unless you're really wealthy, you have your own home with sufficient power[1] delivery and cooling.

> not sure where you live, but here most residential power connections are below 3 KWh.

It's a single used 1U server, not a datacenter... It will plug into your domestic power supply just fine. The total draw will likely be similar to or even less than that of many gaming PC builds out there, and even then only under peak loads etc.


Just a note, you're mixing kW and kWh.

A connection to a home wouldn't be rated in kilowatt-hours, it would likely be rated in amps, but could also be expressed in kilowatts.

> 100wh/month upgrades are like +100 euros.

I can't imagine anybody paying €1/Wh. Even if this was €1/kWh (1000x cheaper) it's still a few times more expensive than what most places would consider expensive.
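For what it's worth, the parent's figure does roughly work out if the colo actually meant +100 W of continuous draw rather than watt-hours. A quick check (730 hours/month and the resulting rate are my own arithmetic, not from the thread):

```python
# If "+100 Wh" actually means +100 W of continuous draw, the effective
# energy price works out as follows (730 h/month assumed).
HOURS_PER_MONTH = 730
extra_watts = 100
extra_fee_eur = 100.0

kwh_per_month = extra_watts * HOURS_PER_MONTH / 1000  # 73 kWh/month
eur_per_kwh = extra_fee_eur / kwh_per_month
print(f"{eur_per_kwh:.2f} EUR/kWh")  # ~1.37 EUR/kWh
```

That is still a few times consumer rates, but plausible for colocation, where the fee bundles space, cooling and redundancy on top of the raw energy.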


You're (luckily) wrong on this. There's nothing stopping you from plugging the server into your home power outlet. It will work just fine - ~0.5kW is almost nothing. Case in point - Dell also builds workstations with the same hardware you see in their servers.


My server is almost never running at full tilt. It is using ~70W at idle.


Interesting, I run a small cluster of 4 mini pcs (22 cores total). I think it should be comparable to the aforementioned EPYC. Power load is a rounding error compared to appliances like electric iron at 1700W, etc. The impact on electrical bill is minimal as well. Idle power draw is about 5W per server, which translates to ~80 cents a month. Frankly my monitor uses more power on average than the rest of the homelab.
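That ~80 cents figure checks out under a plausible consumer tariff (the 0.22 EUR/kWh below is my assumption, not the parent's):

```python
# Monthly cost of 5 W of idle draw per server, at an assumed 0.22 EUR/kWh.
idle_watts = 5
hours_per_month = 730
eur_per_kwh = 0.22            # illustrative consumer tariff, not from the thread

kwh = idle_watts * hours_per_month / 1000     # 3.65 kWh/month per server
cost_eur = kwh * eur_per_kwh
print(f"{cost_eur:.2f} EUR/month per server")  # ~0.80 EUR/month
```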


I'm pretty sure if you run a compute benchmark like STREAM or HPGMG, the Epyc server will eat your mini PCs for breakfast.


You're probably right, I meant that the power consumption should be roughly comparable between them (due to inefficiency added by each mini).


I think results would be rather comparable.


That would at the very least require a hefty interconnect, I'd guess at least 25Gbit with RoCE.




