> In a typical UPS backup scenario in a normal datacenter, the incoming power is converted from AC to DC so the battery can be charged, then converted back to AC coming out of the battery to be distributed to the power distribution units, where it is stepped down to the 120 volts the servers consume. By putting the batteries in the servers, Microsoft can do one AC-to-DC conversion, distribute 380 volts DC directly to the Open Cloud Server power supplies, and then step it down to 12 volts for the server and storage nodes.
This is a huge efficiency win that's been missed for a very long time, mostly because it sits right on the border between the datacenter provider and the server customer: the datacenter traditionally agrees to provide filtered 120/240 V AC.
Converting power costs efficiency, and in this chain it's converted four times. If each conversion loses something like 6.25%, the losses compound to roughly 23% end to end, so cutting the chain down to two conversions recovers a good share of that (my numbers may well be off).
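A quick sanity check on that compounding, assuming a hypothetical 93.75% efficiency (6.25% loss) per stage:

```python
# End-to-end efficiency after N conversion stages, assuming
# (hypothetically) each stage is 93.75% efficient, i.e. 6.25% loss.
per_stage = 0.9375

# Traditional chain: AC->DC (charge battery), DC->AC (UPS output),
# step-down at the PDU, AC->DC again in the server PSU.
traditional = per_stage ** 4

# Batteries-in-servers chain: one AC->DC conversion to 380 V DC,
# then one DC->DC step-down to 12 V at the node.
direct_dc = per_stage ** 2

print(f"4 conversions: {traditional:.1%} end-to-end")  # ~77.2%
print(f"2 conversions: {direct_dc:.1%} end-to-end")    # ~87.9%
```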
Of course, all parties could agree to change the standard: provide clean, filtered 12 V DC to servers, and redesign PSUs to accept that input instead. But then servers wouldn't be wall-pluggable anymore.
48 V DC is fine for low-amperage gear like telecom equipment, but servers draw a lot of watts, and at low voltage that means enormous currents, as the rough numbers below show. So the only practical approach is to feed a rack with high-voltage AC (120/240/400 V), do a single AC-DC conversion per rack, and supply the servers in the rack with DC. In that configuration, the battery systems described in this article would have to be one per rack.
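A minimal sketch of the current problem (the 10 kW rack figure is a hypothetical): for a fixed power draw, current scales inversely with voltage, so low-voltage distribution means very heavy copper.

```python
# Current needed to deliver a hypothetical 10 kW rack at various voltages (I = P / V).
rack_power_w = 10_000

for volts in (12, 48, 230, 400):
    amps = rack_power_w / volts
    print(f"{volts:>3} V -> {amps:7.1f} A")
# 12 V needs ~833 A of busbar; 400 V needs only ~25 A.
```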
Until the industry started making new DC-DC power supplies with batteries.
We have 400 V 3-phase going to the racks; from the PDUs in each rack the phases are split in the usual fashion, so the equipment eats 230 V single-phase as usual in this part of the world. This seems fairly common these days; all the vendors offer such PDUs off the shelf, nothing special.
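For anyone wondering how 400 V becomes 230 V: in a three-phase system, the line-to-neutral voltage is the line-to-line voltage divided by √3.

```python
import math

# Line-to-neutral voltage from a 400 V (line-to-line) three-phase feed.
v_line_to_line = 400
v_line_to_neutral = v_line_to_line / math.sqrt(3)
print(f"{v_line_to_neutral:.0f} V")  # ~231 V, the usual European single-phase voltage
```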