Green revolution cooling (grcooling.com)
51 points by xearl on Nov 10, 2011 | 34 comments


In the old days Cray had a fluid-cooled supercomputer. It was immersed in a witches' brew called Fluorinert, which, if I understand correctly, was banned under various CFC-related treaties (edit: no mention of such banning on Wikipedia, I may be thinking of other cooling liquids).

My Dad is a radio man and has been around high-powered electronics for most of his career. He explained the pros and cons of liquid cooling this way:

Pros.

1. Theoretically very efficient.

2. Amazing equipment densities.

Cons.

1. Congratulations! You are now a plumber.

Even purified water is murderously destructive. He told me stories about replacing piping in a water-cooled electronics room. Every few months they would have to shut down, drain the system, and replace a pipe corner. The vortices of water going around a 90-degree bend caused enough cavitation to gouge out visible shapes on the inside of the pipes. After a few months the copper pipes would crack and leak on the electronics.

Immersion in inert liquids is fine so far as it goes, but you're still going to be a plumber.


Even purified water is murderously destructive.

Indeed, water is more destructive to materials than many other liquids. And deionized water is the worst. It turns out that you have to explicitly make deionized water for a reason: Many of the things that touch water want to dissolve in it! And the fewer ions of, say, copper that are already in your water, the faster a rod of copper dissolves when you dip it into that water.

Think of water the way you think about the blood of the aliens in Aliens – only on a slightly slower timescale – and you'll realize why boat maintenance is better thought of as a lifestyle rather than as a series of repairs.


I didn't realise that about deionised water. At least it's not toxic when gaseous (unlike fluorinert) or flammable (like mineral oil).

As for boating ... I vaguely recall the joke that it's easier and more fun just to pour cash into the sea than to get a boat.


I've seen lots of these with single machines dunked in a fishtank.

This is the first I've seen with many machines submerged.

I'd be interested to see the results of long term testing.

I'm guessing that big data centres avoid this because it's just so messy; at what point do the cost savings from cooling override the inconvenience of dripping oil everywhere?

A final nitpick: the website is a bit frustrating; it could do with better design and much better photographs.


So if something breaks on a server, like a PSU or an HDD, would you need to shut down the entire server, drain the liquid (or raise the server), and then swap out the affected parts? Would that not be very impractical?


they have a video demonstrating how to do server maintenance:

http://www.youtube.com/watch?v=EZmm7P1mPZs

no need to shut down the server. you pull it out and swap the parts (for front-accessible hotswap HDDs you won't even have to pull it out). cabling is designed to accommodate that.

i can imagine, though, that "pulling out" a 4U storage box fully loaded with hdds may become a bit strenuous.


Just pulling a 2U system with 2 HDDs out of an ordinary rack is strenuous if you're not in decent shape; yanking a loaded 4U box out in the manner that video shows is a quick path to pain.

I think they need to build a simple pulley or hydraulic lift system for these.


The first thing I saw when I looked at the page was:

"95% Less Cooling Power"

It took me a while to figure out that it uses 95% less power, not that it is 95% less effective.

This could be a bit clearer.


"reducing cooling energy use by 90-95%" seems unlikely, unless you are in a cool climate and you can use outside air to cool. That is, put the radiator on the north face of your data center.

This is exactly where more conventional and less sticky methods can achieve good results, so I'm skeptical of the value.


Water has much better thermal conductivity than air, so significantly warmer water can provide equivalent levels of cooling. Also, most computer components are designed to be air cooled in an office setting, so they can get fairly warm with little damage. In other words, keeping an 80 W CPU at 45-50°C might take water at 35°C but air at 20°C. If the outside air is 40°C, then you only need to actively drop the water 5 degrees where air needs to drop 20 degrees, and cooling systems are much more efficient for small changes in temperature than for large ones. You can probably find many cases where air-temperature water is perfectly acceptable where you would want chilled air to do the same thing.

PS: 40°C = 104°F, and most CPUs are fine at 50°C; most are still OK at 60+°C. If the outside temperature is more reasonable, you can often get away with passively cooling water systems where air needs to be actively cooled. You can also combine this with other methods, e.g. if a local lake can give you water below 30°C most of the year, you might be able to use a passive cooling system with water even if you would need to actively cool air.
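
To put rough numbers on the "small changes in temperature" point, here's a back-of-the-envelope Python sketch (mine, not from the article), using the ideal Carnot COP as an upper bound on chiller efficiency. The 80 W, 35°C, 20°C, and 40°C figures are just the examples above; real chillers only reach some fraction of the ideal, but the ratio between the two cases is the point:

    def carnot_cop(coolant_c, outdoor_c):
        """Ideal (upper-bound) cooling COP for pumping heat from coolant_c up to outdoor_c."""
        t_cold = coolant_c + 273.15   # coolant temperature in kelvin
        lift = outdoor_c - coolant_c  # how far the heat must be pushed "uphill"
        if lift <= 0:
            return float("inf")       # outside air is already cooler: free cooling, no chiller needed
        return t_cold / lift

    cpu_watts = 80.0    # example CPU heat load from the figures above
    outdoor_c = 40.0    # hot-day outside air, as above

    for label, coolant_c in [("35C water/oil loop", 35.0), ("20C chilled air", 20.0)]:
        cop = carnot_cop(coolant_c, outdoor_c)
        print("%s: ideal COP ~%.0f, ideal chiller power ~%.1f W per 80 W CPU"
              % (label, cop, cpu_watts / cop))

The 5-degree lift comes out roughly four times cheaper per watt of heat removed than the 20-degree lift, even before you account for fan power.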


as they claim the liquid coolant works efficiently at 40°C (104°F), using outside air to cool hardly requires a particularly cool climate. they also attribute a significant part of the savings to removing the server fans.


Listeners of the 5by5.tv podcasts will have heard of this concept before. That's where Midas Green Tech advertises their "virtual private servers submerged in oil".

Green Revolution Cooling supplied Midas Green Tech with their setup.


Most folks who have lurked on sites like Slashdot, Reddit, and Hacker News will have seen this before. It's a pretty old trick now. (Old in Internet time)

http://www.google.ca/search?q=computer+oil


just curious: would these machines have non-rotational media only ?


I imagine that the hard drives themselves wouldn't be filled with the oil - perhaps wrapped in a heat-conducting plastic beforehand, or maybe drives are stored outside of oil altogether.

Does anyone have any statistics on what contributes the most heat? I'm assuming power supply and processor/GPU are the worst offenders.

Edit: yup, the drives are wrapped:

> First, our technicians remove server fans and put them aside for future use or reverse-modification at a later date. Then, each hard drive is encapsulated using GRC’s proprietary hard drive sealing system. Finally, the CPU thermal grease is replaced with a non-soluble foil.


It's in the FAQ.


"hard drive sealing system" - I'm pretty sure you can't completely seal a disk, it needs some surrounding air to deal with pressure changes and such.

Wonder if they can control the pressure to the point you can seal it... or if the seal itself incorporates an air flow system.

How complex can this sealing system be while still allowing the disks to fit in the bays?


Pure speculation: The hole is for, as you said, pressure differences. The head uses air pressure to float above the platter. Changes in pressure could bring the head closer to or farther from the platter, which could have unwanted consequences. Pressure fluctuations are due in part to changes in temperature. Since the temperature is regulated by the oil, and that temperature will change more slowly than with air, plugging the hole is relatively safe.


I'm all but certain disks are sealed. If air has a path to the platters then so does dust.


Air molecules are not dust-sized.


You can use a bladder to quickly equalize air pressure without exchanging air. (Air will slowly penetrate most bladders, but it's not fast enough for rapid pressure changes.)

Some heaters use a similar approach to limit corrosion. http://www.westank.com/bladder-tanks.php


That's what I was thinking too, and the pressure changes shouldn't be so drastic that the bladder would burst.


SSDs in server farms are the way to go anyway.


Wow.. really? Is this a trend with any documentation? Seems to me that there are still a lot of mentions of SSD failures floating around the tubes, and that plus the significant cost for large scale storage over traditional platters would make it a very tough sell to any organization wanting to deploy large quantities of servers.


Mechanical-disk supply-chain problems are increasing platter-drive costs significantly, while at the same time improvements in reliability and cost for SSDs are bringing their prices lower. And yes, there have been large deployments: http://www.computerworld.com/s/article/9218811/EBay_attacks_...


the price points for the two are way off the scale, e.g. on newegg a 120gb ssd is more expensive than a 1tb 7200rpm sata..


But you get tens of thousands of IOPS instead of hundreds.
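
A quick illustrative sketch of why both views can be right: the same two drives rank completely differently depending on whether you care about $/GB or $/IOPS. The capacities and rough IOPS orders of magnitude are the ones mentioned in this thread; the dollar figures are made-up placeholders, not real quotes:

    # illustrative only: plug in real prices and measured IOPS for your workload
    drives = {
        "120GB SSD":        {"price_usd": 200.0, "capacity_gb": 120,  "iops": 20000},
        "1TB 7200rpm SATA": {"price_usd": 100.0, "capacity_gb": 1000, "iops": 150},
    }

    for name, d in drives.items():
        per_gb = d["price_usd"] / d["capacity_gb"]
        per_kiops = d["price_usd"] / (d["iops"] / 1000.0)
        print("%s: $%.2f/GB, $%.2f per 1000 IOPS" % (name, per_gb, per_kiops))

With any plausible numbers the platter drive wins on $/GB by an order of magnitude and the SSD wins on $/IOPS by a couple of orders of magnitude, which is exactly why the answer depends on the workload.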


Which may be important, or may be utterly worthless. It's like "total cost of ownership" FUD, numbers are useless outside their original context, and everybody has a different context.


This. For many high-perf applications, anything outside of RAM (yes, even with SSDs) is a no-no.


Yes, this startup has been working on this idea since 2008/2009. My main fear would be how much of a cost benefit this is versus the outcome of a company switching to ARM over the coming years.


It seems the servers for this site need some metaphorical mineral oil. The database is not responsive anymore :/



I read the headline as "The green revolution is slowing down".


Wonder what the long-term effect is on something like polyester caps and delaminating multilayer boards.

Fire suppression is going to be fun.



