Hacker News

> In a typical UPS backup scenario in a normal datacenter, the incoming power is converted from AC to DC so the battery can be charged, then converted back to AC coming out of the battery to be distributed out to the power distribution units, where it is stepped down to the 120 volts where the servers consume it. By putting the batteries in the servers, Microsoft can do one AC to DC conversion and distribute 380 volts DC directly to the Open Cloud Server power supplies and then step it down to 12 volts for the server and storage nodes.

This is a huge point of efficiency that's been missed for a very long time, mostly because it's right along the border between the datacenter provider and the server customer. The datacenter traditionally agrees to provide filtered 120/240v AC.

Converting power loses efficiency, and here the power gets converted 4 times. If each conversion loses something like 6.25%, they may be regaining a cumulative loss on the order of 20% (my math is probably imprecise).
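The losses compound multiplicatively across chained conversions, which is easy to sanity-check. A minimal sketch; the 6.25%-per-stage figure is the comment's guess, not a measured value:

```python
# Back-of-the-envelope check of cumulative loss from chained power
# conversions. The 6.25% per-stage loss is a hypothetical figure.
PER_STAGE_EFFICIENCY = 1 - 0.0625

def chain_efficiency(stages: int, eff: float = PER_STAGE_EFFICIENCY) -> float:
    """Overall efficiency after `stages` conversions in series."""
    return eff ** stages

# AC->DC (charger), DC->AC (inverter), step-down transformer, server PSU AC->DC
delivered = chain_efficiency(4)
print(f"4 conversions: {delivered:.1%} delivered, {1 - delivered:.1%} lost")
```

With those assumed numbers, about 23% of the input power is lost before it ever reaches the motherboard, which is why cutting even one or two stages out matters at datacenter scale.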

Of course, all parties could agree to change the standard, and provide clean, filtered 12V DC to servers, and redesign PSUs to accept this input instead. But then they wouldn't be wall-pluggable anymore.



There already is a DC standard: it's used in telecom and it runs at 48V. You can buy off-the-shelf PSUs for many systems.

48V is better than 12V because, for the same power, the current is a quarter as large, so you can use a smaller wire gauge and the resistive line losses are lower.
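The underlying reason is that resistive loss scales with the square of the current (P_loss = I²R), so quadrupling the voltage cuts loss in the same wire by 16x. A rough sketch with illustrative (made-up) load and wire-resistance numbers:

```python
# Compare resistive line loss at 12V vs 48V for the same delivered power.
# The load power and wire resistance below are illustrative, not from the thread.
def line_loss_watts(power_w: float, volts: float, wire_ohms: float) -> float:
    """Heat dissipated in the wiring: I^2 * R, with I = P / V."""
    current = power_w / volts
    return current ** 2 * wire_ohms

P, R = 500.0, 0.05  # a 500 W load fed over a run with 0.05 ohm total resistance
loss_12v = line_loss_watts(P, 12.0, R)
loss_48v = line_loss_watts(P, 48.0, R)
print(f"12V loss: {loss_12v:.1f} W, 48V loss: {loss_48v:.1f} W")
# 4x the voltage -> 1/16th the resistive loss in the same wire
```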


-48V.

I'm trying to find out why negative is used, but the answers seem to vary a lot...


I've always been under the impression it was corrosion related.


This is my understanding as well. I believe the effect is called cathodic protection[1].

[1] https://en.wikipedia.org/wiki/Cathodic_protection


48V DC is ok for low-amperage stuff like telecom. But powering servers takes a lot of watts. So the only practical way to do that would be to power a rack with high-voltage AC (120/240/400) and then do a single AC-DC conversion and supply the servers in the rack with DC. The battery systems described in this article would have to be one per rack in this configuration.

That is, until the industry started making new DC-DC power supplies with batteries.


Running off DC is how Telecoms Exchanges (central offices) have run since the year dot.

So this is not exactly a new discovery.


So, the Telecom companies created a time travel device, went forward to 2015, and stole Microsoft's innovation :)


No, of course not. They must have sent the machine back pre-WW2 and nicked a lot of TIs (technical instructions) from the GPO / Bell.


120 or even 240V sounds low for a large server installation; I've seen 480V, 3-phase used before, as it has lower transmission losses.

http://www-03.ibm.com/procurement/proweb.nsf/objectdocswebvi...


We have 400V 3-phase going to the racks; at the PDUs in each rack the phases are split in the usual fashion, so the equipment eats 230V single-phase, as usual in this part of the world. This seems fairly common these days; all the vendors offer such PDUs off the shelf, nothing special.
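The numbers work out because the phase-to-neutral voltage of a 3-phase system is the phase-to-phase voltage divided by √3, so a 400V feed splits naturally into European 230V single-phase circuits. A one-liner to check:

```python
import math

# Phase-to-neutral voltage of a balanced 3-phase system:
# V_ln = V_ll / sqrt(3)
def phase_to_neutral(v_phase_to_phase: float) -> float:
    return v_phase_to_phase / math.sqrt(3)

print(f"{phase_to_neutral(400):.0f} V")  # ~231 V, i.e. the nominal 230V standard
```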


Until the appearance of cheap superconductors, this will not make sense because of distribution losses.


The distribution losses won't be too bad if this is at the rack/row level. Of course, you wouldn't want to distribute HV DC all over the datacenter.
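A rough illustration of why the distribution voltage matters far more than the distance at rack/row scale; the cable resistance and load figures below are made up for the sketch:

```python
# Fraction of delivered power lost as heat in a DC distribution run.
# The per-metre resistance is a hypothetical heavy-gauge figure (round trip).
COPPER_OHMS_PER_M = 0.0002

def loss_fraction(power_w: float, volts: float, metres: float) -> float:
    """I^2 * R loss divided by delivered power, with I = P / V."""
    current = power_w / volts
    return current ** 2 * (COPPER_OHMS_PER_M * metres) / power_w

for volts in (380.0, 12.0):
    f = loss_fraction(10_000, volts, 50)
    print(f"50 m run at {volts:g} V, 10 kW: {f:.2%} lost")
```

With these assumed numbers, a 50 m run at 380V DC loses well under a tenth of a percent, while the same run at 12V would lose most of the power, which is why the low-voltage step-down happens only at the very last hop inside the server.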

Though cheap high temp superconductors would be AWESOME. Can't wait :)


Don't tell Google.



