In the old days Cray had a fluid-cooled supercomputer. It was immersed in a witches' brew called Fluorinert, which, if I understand correctly, was banned under various CFC-related treaties (edit: no mention of such a ban on Wikipedia; I may be thinking of other cooling liquids).
My Dad is a radio man and has been around high-powered electronics for most of his career. He explained the pros and cons of liquid cooling this way:
Pros.
1. Theoretically very efficient.
2. Amazing equipment densities.
Cons.
1. Congratulations! You are now a plumber.
Even purified water is murderously destructive. He told me stories about replacing piping in a water-cooled electronics room. Every few months they would have to shut down, drain the system, and replace a pipe corner. The vortices of water going around a 90-degree bend caused enough cavitation to gouge visible shapes out of the inside of the pipes. After a few months the copper pipes would crack and leak onto the electronics.
Immersion in inert liquids is fine so far as it goes, but you're still going to be a plumber.
Indeed, water is more destructive to materials than many other liquids. And deionized water is the worst. It turns out that you have to explicitly make deionized water for a reason: Many of the things that touch water want to dissolve in it! And the fewer ions of, say, copper that are already in your water, the faster a rod of copper dissolves when you dip it into that water.
Think of water the way you think about the blood of the aliens in Aliens – only on a slightly slower timescale – and you'll realize why boat maintenance is better thought of as a lifestyle rather than as a series of repairs.
I've seen lots of these with single machines dunked in a fishtank.
This is the first I've seen with many machines submerged.
I'd be interested to see the results of long term testing.
I'm guessing that big data centres avoid this because it's just so messy; at what point do the cost savings on cooling override the inconvenience of dripping oil everywhere?
A final nitpick: the website is a bit frustrating and could do with better design and much better photographs.
So if something breaks on a server, like a PSU or an HDD, would you need to shut down the entire server, drain the liquid (or raise the server), and then swap out the affected parts?
Would that not be very impractical?
No need to shut down the server. You pull it out and swap the parts (for front-accessible hot-swap HDDs you won't even have to pull it out). The cabling is designed to accommodate that.
I can imagine, though, that "pulling out" a 4U storage box fully loaded with HDDs may become a bit strenuous.
Just pulling a 2U system with 2 HDDs out of an ordinary rack is strenuous if you're not in decent shape; yanking a loaded 4U box out in the manner that video shows is a quick path to pain.
I think they need to build a simple pulley or hydraulic lift system for these.
"reducing cooling energy use by 90-95%" seems unlikely, unless you are in a cool climate and you can use outside air to cool. That is, put the radiator on the north face of your data center.
This is exactly where more conventional and less sticky methods can achieve good results, so I'm skeptical of the value.
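For a sense of scale, here is a rough sketch of what a 90-95% cut in cooling energy would mean for total overhead. The baseline PUE and the split between cooling and other overhead below are assumptions for illustration, not figures from the article:

    # Back-of-the-envelope PUE arithmetic. The baseline PUE and the split of
    # overhead between cooling and everything else are assumed for illustration.
    it_load = 1.0            # normalize IT power to 1
    cooling_overhead = 0.5   # assumed: cooling adds 50% on top of the IT load
    other_overhead = 0.1     # assumed: distribution losses, lighting, etc.

    baseline_pue = it_load + cooling_overhead + other_overhead  # 1.60

    for savings in (0.90, 0.95):
        new_pue = it_load + cooling_overhead * (1 - savings) + other_overhead
        print("%.0f%% less cooling energy: PUE %.2f -> %.2f"
              % (savings * 100, baseline_pue, new_pue))

Under those assumed numbers the claim amounts to moving from a PUE of about 1.6 to roughly 1.1-1.15, which is in the same range that well-run free-air-cooled facilities have reported; hence the question of whether the oil buys you anything a cool climate doesn't.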
Water has much better thermal conductivity than air, so significantly warmer water can provide equivalent levels of cooling. Also, most computer components are designed to be air cooled in an office setting, so they can get fairly warm with little damage. In other words, keeping an 80W CPU at 45 to 50C might take water at 35C but air at 20C. If the outside air is 40C, then you only need to actively drop the water 5 degrees where the air needs to drop 20, and cooling systems are much more efficient for small changes in temperature than for large ones. You can probably find many cases where ambient-temperature water is perfectly acceptable where you would want chilled air to do the same thing.
PS: 40C = 104F, and most CPUs are fine at 50C; most are still OK at 60+C. If the outside temperature is more reasonable you can often get away with passively cooling water systems where air needs to be actively cooled. You can also combine this with other methods, e.g. if a local lake can give you water below 30C most of the year, you might be able to use a passive cooling system with water even if you would need to actively cool air.
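A back-of-the-envelope sketch of that argument using Newton's law of cooling, Q = h * A * (T_component - T_coolant). The convection coefficients and heatsink area below are assumed, order-of-magnitude values picked to mirror the example above, not measurements of any real system:

    # Sketch only: the h values and heatsink area are assumed, not measured.
    def warmest_coolant_c(power_w, h, area_m2, component_limit_c):
        """Solve Q = h * A * (T_component - T_coolant) for the warmest
        coolant temperature that still carries away power_w."""
        return component_limit_c - power_w / (h * area_m2)

    CPU_POWER_W = 80      # the 80W CPU from the example above
    CPU_LIMIT_C = 50      # keep the CPU at or below 50C
    SINK_AREA_M2 = 0.05   # assumed effective heatsink area

    h_air = 50            # W/(m^2*K), assumed for forced air
    h_liquid = 250        # W/(m^2*K), assumed for liquid immersion

    print("air must stay below    %.0f C"
          % warmest_coolant_c(CPU_POWER_W, h_air, SINK_AREA_M2, CPU_LIMIT_C))
    print("liquid must stay below %.0f C"
          % warmest_coolant_c(CPU_POWER_W, h_liquid, SINK_AREA_M2, CPU_LIMIT_C))

With those made-up numbers the liquid only has to stay below about 44C, so 40C outside air can cool it passively, while direct air cooling would need the air chilled to under roughly 18C.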
As they claim the liquid coolant works efficiently at 40°C (104°F), using outside air to cool hardly requires a particularly cool climate. They also attribute a significant part of the savings to the removal of fans.
Listeners of the 5by5.tv podcasts will have heard of this concept before. That's where Midas Green Tech advertises their "virtual private servers submerged in oil".
Green Revolution Cooling supplied Midas Green Tech with their setup.
Most folks who have lurked on sites like Slashdot, Reddit, and Hacker News will have seen this before. It's a pretty old trick now. (Old in Internet time)
I imagine that the hard drives themselves wouldn't be filled with the oil - perhaps wrapped in a heat-conducting plastic beforehand, or maybe drives are stored outside of oil altogether.
Does anyone have any statistics on what contributes the most heat? I'm assuming power supply and processor/GPU are the worst offenders.
Edit: yup, the drives are wrapped:
> First, our technicians remove server fans and put them aside for future use or reverse-modification at a later date. Then, each hard drive is encapsulated using GRC’s proprietary hard drive sealing system. Finally, the CPU thermal grease is replaced with a non-soluble foil.
Pure speculation: The hole is for, as you said, pressure differences. The head uses air pressure to float above the platter. Changes in pressure could bring the head closer/farther away from the platter which could have unwanted consequences. Pressure fluctuations are due in part to changes in temperature. Since the temperature is regulated by the oil, and that temperature will change more slowly than with air, plugging the hole is relatively safe.
You can use a bladder to quickly equalize air pressure without exchanging air. (Air will slowly penetrate most bladders, but it's not fast enough for rapid pressure changes.)
Wow.. really? Is this a trend with any documentation?
Seems to me that there are still a lot of mentions of SSD failures floating around the tubes, and that plus the significant cost of large-scale storage compared to traditional platters would make it a very tough sell to any organization wanting to deploy large quantities of servers.
Mechanical disk supply-chain problems are increasing platter drive costs significantly, while at the same time improvements in reliability and cost for SSDs are bringing their prices lower. And yes, there have been large deployments: http://www.computerworld.com/s/article/9218811/EBay_attacks_...
Which may be important, or may be utterly worthless. It's like "total cost of ownership" FUD: numbers are useless outside their original context, and everybody has a different context.
Yes, this startup has been working on this idea since 2008/2009. My main concern would be how much of a cost benefit this is versus the outcome of a company switching to ARM over the coming years.