How a Battery Cut Microsoft Datacenter Costs by a Quarter (theplatform.net)
89 points by victorbojica on Oct 11, 2015 | 43 comments


> In a typical UPS backup scenario in a normal datacenter, the incoming power is converted from AC to DC so the battery can be charged, then converted back to AC coming out of the battery to be distributed out to the power distribution units, where it is stepped down to the 120 volts where the servers consume it. By putting the batteries in the servers, Microsoft can do one AC to DC conversion and distribute 380 volts DC directly to the Open Cloud Server power supplies and then step it down to 12 volts for the server and storage nodes.

This is a huge point of efficiency that's been missed for a very long time, mostly because it's right along the border between the datacenter provider and the server customer. The datacenter traditionally agrees to provide filtered 120/240v AC.

Each conversion loses some efficiency, and we convert the power four times. If the savings were spread evenly, they'd be regaining about a 6.25% loss from each conversion, though my math is probably off.
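
As a back-of-the-envelope sketch of how those losses stack up (the 97% per-stage efficiency here is an illustrative assumption, not a number from the article):

```python
# Conversion losses compound multiplicatively rather than adding up.
# The 97% per-stage efficiency is an illustrative assumption.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of power passed through several conversion stages."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

traditional = chain_efficiency([0.97] * 4)    # AC->DC, DC->AC, PDU step-down, server PSU
local_battery = chain_efficiency([0.97] * 2)  # one AC->DC, one step-down to 12V

print(f"traditional chain:   {traditional:.1%}")    # ~88.5%
print(f"local-battery chain: {local_battery:.1%}")  # ~94.1%
print(f"difference:          {local_battery - traditional:.1%}")
```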

Of course, all parties could agree to change the standard, and provide clean, filtered 12V DC to servers, and redesign PSUs to accept this input instead. But then they wouldn't be wall-pluggable anymore.


There already is a DC standard: it's for telecom, and it uses 48V. You can buy PSUs off the shelf for many systems.

48V is better than 12V because the lower current allows a thinner wiring gauge and reduces line losses.
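
A minimal sketch of the arithmetic, assuming a made-up 500 W load and 10 milliohms of cable resistance: for the same power, four times the voltage means a quarter of the current, and the I²R loss drops by a factor of sixteen.

```python
# Same delivered power at higher voltage -> lower current -> much lower I^2*R loss.
# The 500 W load and 0.01 ohm cable resistance are illustrative assumptions.

def line_loss_watts(power_w, volts, cable_resistance_ohms):
    current = power_w / volts                    # I = P / V
    return current ** 2 * cable_resistance_ohms  # P_loss = I^2 * R

for volts in (12, 48):
    amps = 500 / volts
    loss = line_loss_watts(500, volts, 0.01)
    print(f"{volts:>2} V: {amps:5.1f} A, {loss:5.2f} W lost in the cable")
# 12 V:  41.7 A, 17.36 W lost in the cable
# 48 V:  10.4 A,  1.09 W lost in the cable
```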


-48V.

I'm trying to find out why negative is used, but the answers seem to vary a lot...


I've always been under the impression it was corrosion related.


This is my understanding as well. I believe the effect is called cathodic protection[1].

[1] https://en.wikipedia.org/wiki/Cathodic_protection


48V DC is OK for low-amperage stuff like telecom, but powering servers takes a lot of watts. So the only practical way to do that would be to feed a rack with high-voltage AC (120/240/400V), do a single AC-DC conversion, and supply the servers in the rack with DC. The battery systems described in this article would have to be one per rack in this configuration.

That is, until the industry started making new DC-DC power supplies with batteries.
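
A rough sketch of why whole-rack low-voltage DC gets awkward, assuming an illustrative 10 kW rack (not a figure from the article):

```python
# Bus-bar current needed to feed a whole rack at various distribution voltages.
# The 10 kW rack power is an illustrative assumption.

RACK_POWER_W = 10_000

for volts in (12, 48, 230, 380):
    amps = RACK_POWER_W / volts
    print(f"{volts:>3} V distribution -> {amps:7.1f} A per rack")
# 12 V  -> 833.3 A
# 48 V  -> 208.3 A
# 230 V ->  43.5 A
# 380 V ->  26.3 A
```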


Running off DC is how telecom exchanges (central offices) have run since the year dot.

So this is not exactly a new discovery.


So, the telecom companies created a time travel device, went forward to 2015, and stole Microsoft's innovation :)


No, of course not; they must have sent the machine back to pre-WW2 and nicked a lot of TIs (technical instructions) from the GPO / Bell.


120 or even 240V sounds low for a large server installation; I've seen 480V, 3-phase used before, as it has lower transmission losses.

http://www-03.ibm.com/procurement/proweb.nsf/objectdocswebvi...


We have 400V 3-phase going to the racks; from the PDUs in each rack the phases are split in the usual fashion, so the equipment eats 230V single-phase, as usual in this part of the world. This seems fairly common these days; all the vendors offer such PDUs off the shelf, nothing special.


Until cheap superconductors appear, this will not make sense because of distribution losses.


The distribution losses won't be too bad if this is at the rack/row level. Of course, you wouldn't want to distribute HV DC all over the datacenter.

Though cheap high temp superconductors would be AWESOME. Can't wait :)


Don't tell Google.


I remember reading about Google doing this years ago. I've been disappointed ever since that Dell et al haven't adopted a similar option for their commodity hardware.


I worked for an enterprise hardware company in the 2010 era whose secret sauce involved doing exactly this, albeit on a more local scale. When I joined Google, they were doing it all over. Facebook is doing it at the rack level. Microsoft is touting technology that is six years behind, and the scary thing is, they're still five years ahead of the rest of the industry.


Slightly off-topic: at one gig we are running a build farm and CI on dozens of cheap notebooks (mostly Acer and Lenovo), and we're enjoying this same effect of local power backup, which is very useful in our office building because the power is very unstable. When the power goes down, a typical rack-mounted server might run for tens of minutes off a UPS, while these little bastards keep working for a couple of hours more, just slightly slower.


Hasn't Google been doing this for a very long time?


Yes, it was one of their "secret sauce" things that they kept top secret for a long time. Fundamentally, if you change the question from "putting computers into data centers" to "how can I build a building-sized computer?", it changes the way you think. Microsoft has been somewhat late to that particular party; both Facebook (perhaps with some Google knowledge "leakage") and Amazon have done a good job of internalizing this viewpoint.

It scares the crap out of data center providers because they know they can't really innovate like this effectively. I tried to get Equinix to consider building Open Compute racks in a different style of co-location setup but never got anywhere with them.


Makes sense. Their money is in cross-connects, and then power, at a distant second.


Google was in this particular paradigm well before Facebook, and I'm fairly certain I know precisely who was involved in leaking the knowledge.


A lot of engineers have passed through Google. The interesting thing about the Bay Area, at least, is that this sort of porous sponge works both ways: people come from competitors, people go to competitors, and they all carry a point of view and their own take on solving the problems at hand.

I am fortunate that I spent a few years in the platforms group and got a good look at how Google did what they did, and with that perspective I have watched ideas in data center design emerge and flow. As the article points out, Google does publish interesting bits of data, which lead to certain conclusions. And many of the same things that drove Google to do what it did become the more likely answer when you own the entire data center (or, better yet, build it from scratch). I sat through a presentation of the big eBay data center project in Utah and noted how they were coming up with the same answers to some of the same problems.

Because of that I doubt any one person is largely responsible there. There seems to have been a lot of cross pollination.


They've been doing this for over a decade. From the article:

> And Google said way back in April 2009, in a rare look at its internal datacenters, that it had not only been using containerized datacenters to boost efficiency since 2005, but had put 12 volt battery packs on its servers so they could ride out failures on local, rather than centralized, stored power. That was a decade ago, just to show you how far ahead Google can sometimes be compared to its rivals.


That will teach me to just skim the article (and to instead Ctrl-F Google :)

Apparently Microsoft did do some significant innovation compared to Google's 2009 design at least:

"The innovation that Microsoft did on this idea was to hack into the switched mode power supply used in its Open Cloud Server machines and put the battery right into the existing circuits. So the battery is not hanging off to one side, as they did in the Google servers from 2009, but is embedded in the power supply without any extra circuit costs. And importantly, the batteries are not in the power path between the electrical source and the server motherboards and components. Rather, they extend the life of the bulk capacitors in the power supply in the event of a power failure in the main feeds."


I think this article is deliberately written to make a small improvement on Google's approach from five years ago look more impressive than it is.

Here's an image of a "Google server" from 2009: http://www.altenergystocks.com/archives/2009/04/leadacid_bat... The innovation Google made was to run the mainboards from +12V only, generating all the other rails on the mainboard from that supply (that's why only yellow wires go from the PSU to the mainboard, as can be seen on the back end). The 12V lead-acid battery in the front is connected to the power supply with the black and red wires (you can also see the red and black wires on the far end of the picture). So the charger was already integrated into the PSU circuit back then, I'd say.

From a topological standpoint, I'd claim that this is pretty much the same thing as the Microsoft datacenter approach -- with the exception of running the power supply from AC (instead of DC, as the current article seems to imply). Of course, battery technology has improved in the meantime, and using Li-ion would just be the smart thing to do now.


I Ctrl-F'd. I remember reading the Google article back in 2009, wondered if they were going to mention it in this MS-centric article, and was glad to see they did.


That's basically how the existing OCP server designs that Facebook contributed worked. The difference would seem to be that the previous OCP design used a shared 48V battery pack for adjacent racks instead of one battery in each individual server.


Article for those who didn't see it in 2009: http://www.cnet.com/news/google-uncloaks-once-secret-server-...


So the function is for 4 of those batteries to run the power supply for 1 minute?


Yep. After a minute or so, either the datacenter's backup generators have started, or something has gone seriously wrong and power won't be restored for quite a while.


video http://youtu.be/31dwMAg-Hx4?t=330

slides http://www.opencompute.org/assets/OCP-Summit-V-Slides/Mainst...

specs http://www.opencompute.org/wiki/Server/SpecsAndDesigns#Speci...

> The LES shall provide sufficient ride through capacity to maintain proper PSU output for 35 seconds (+/- 500ms) minimum plus walk-in period of no less than 10 seconds (+/- 500ms). The power supply shall meet the reliability and operating life with drop outs at a maximum power of 1600W for 5 seconds then reducing to 1425W for the remainder of the drop out. The power supply shall be capable of operating at greater than 1425W to 1600W continuous drop out but reliability and operating life of the battery are not guaranteed.
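
A quick worked example of the energy that spec implies per power supply; treating the 35 s ride-through plus the 10 s walk-in as a single ~45 s window at the stated loads is my reading of the spec, not an official figure.

```python
# Energy the local battery must supply to cover the quoted ride-through spec.
# Treating 35 s ride-through + 10 s walk-in as one ~45 s window is an assumption.

peak_w, peak_s = 1600, 5                  # first 5 s of a drop-out at max power
sustained_w, sustained_s = 1425, 45 - 5   # remainder of the window

energy_j = peak_w * peak_s + sustained_w * sustained_s
energy_wh = energy_j / 3600

print(f"{energy_j / 1000:.0f} kJ, i.e. about {energy_wh:.0f} Wh per power supply")
# ~65 kJ, roughly 18 Wh
```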


Three questions:

1. What's the lifetime of an individual battery?

2. Are the batteries replaceable?

3. Will we ever see this tech in consumer-level PC power supplies? If we did, would it be of any use in an environment without backup generators?


1.) Judging from their cost graph [0], it seems to be about 3-4 years (definitely less than 5). I'm guessing that the points where the costs jump are due to batteries wearing out and having to be replaced.

2.) Probably? The cost bumps in 1 don't make much sense otherwise.

3.) Hard to tell; in a home system there isn't such a huge cost saving to be had with a battery. As for usefulness, it's a really short-term battery, but it'd be good enough to let the system power down or suspend gracefully instead of losing power immediately. That niche is pretty well served by UPSs for the people who care, though, and everyone else already doesn't care. It might find a small niche with consumers, but it won't be ubiquitous.

[0] http://www.theplatform.net/wp-content/uploads/2015/03/open-c...


Now that I think about it, a power supply that held power just long enough to go into hibernation automatically would be very useful. There would need to be some signalling into the OS though.


We already have support for this in every OS; it's no different from a laptop or an existing UPS. What sucks is that, to work with a generic ATX motherboard that has no special header to hook into, the best externally accessible bus to put it on is probably going to be USB. It's not like you would need to plug it in outside the case either; there's usually a spare USB header anyway.

But hypothetically, APC could make an ATX UPS today that would be compatible with most ATX motherboards and leverage most of their existing software and drivers to do so.
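
As a minimal sketch of what the OS-side signalling could look like on Linux, assuming the battery-backed supply shows up as a power_supply class device the way laptop AC adapters and some USB HID UPSes do; the device name "AC", the sysfs path, and the poll interval are all assumptions:

```python
#!/usr/bin/env python3
# Sketch: poll a power_supply device and hibernate when mains power is lost.
# Assumes the battery-backed PSU/UPS appears under /sys/class/power_supply;
# the device name "AC" and the 5-second poll interval are assumptions.

import subprocess
import time

ONLINE_FLAG = "/sys/class/power_supply/AC/online"

def on_mains_power() -> bool:
    with open(ONLINE_FLAG) as f:
        return f.read().strip() == "1"

def main():
    while True:
        if not on_mains_power():
            # The built-in battery only rides through seconds, so act immediately.
            subprocess.run(["systemctl", "hibernate"], check=False)
            return
        time.sleep(5)

if __name__ == "__main__":
    main()
```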


I just worked on a PC this weekend, and all the internal USB connectors went to sockets on the front panel. But it should be possible to put a socket on the back of the power supply and include a 1 foot cable. This is doable!


It would be interesting to see the power supply internalized, or externalized with a UPS battery... it would probably pass through/convert, similar to many mITX systems...


I do not understand the new dupe detection system. The same user posted the same link 10 hours ago: https://news.ycombinator.com/item?id=10368978


When there isn't uptake on an interesting article, the mods contact the original poster and ask them to repost it.


We don't consider a story a duplicate if it hasn't had significant attention yet. Otherwise the randomness of what gets (or doesn't get) seen on newest causes too many good stories to pass unnoticed.


dang's announcement thread explains it:

https://news.ycombinator.com/item?id=10223645


$0.25 doesn't really seem like that much of a savings...


:-)



