Decentralized vs Centralized Power Delivery in Datacenters: Why Server-Level PSUs Still Dominate


Modern servers require three primary DC voltages: +12V for processor power delivery and cooling systems, +5V for legacy components and some storage devices, and +3.3V for memory and chipset operations. While the centralized power approach seems theoretically efficient, real-world implementations reveal several critical technical constraints.

A centralized system would need to distribute enormous current loads. Consider a rack with 20 servers, each drawing 30A at 12V:

// Current calculation example
const serversPerRack = 20;
const currentPerServer = 30; // Amps
const totalCurrent = serversPerRack * currentPerServer;
console.log(`Total 12V current required: ${totalCurrent}A`);
// Output: Total 12V current required: 600A

Distributing 600A safely requires massive bus bars or extremely thick cables, introducing significant infrastructure challenges and voltage drop issues over distance.
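
To put numbers on that, here is a rough sketch of the resistive drop alone; the 10 m feed length and 500 mm² copper bus bar cross-section are illustrative assumptions, not figures from any particular facility:

# Resistive drop for 600A over an assumed 10 m, 500 mm^2 copper bus bar
rho_copper = 1.72e-8    # ohm*m, resistivity of copper
length_m = 10           # m, one-way feed length (assumption)
area_m2 = 500e-6        # m^2, bus bar cross-section (assumption)
current_a = 600         # A, total 12V load from the rack above

resistance = rho_copper * (2 * length_m) / area_m2   # out-and-back path
drop_v = current_a * resistance
print(f"Voltage drop: {drop_v:.2f}V ({drop_v / 12:.1%} of the 12V rail)")
# Output: Voltage drop: 0.41V (3.4% of the 12V rail)

Even with an unrealistically generous conductor, a few percent of the rail is lost in the feed before regulation even begins, and the copper cost scales with every additional rack.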

Centralized power creates a single point of failure. Modern datacenter designs emphasize redundancy at multiple levels:

  • N+1 power supplies per server
  • A/B power feeds from separate PDUs
  • Dual-path power distribution

Server components have strict voltage tolerance requirements (±5% typically). Distributed power systems struggle with:

Challenge            Impact
Line impedance       Voltage sag under load
Transient response   Inadequate for modern CPU power states
Cross-talk           Noise between systems
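
The transient-response row is the hardest one to engineer around. As a back-of-the-envelope sketch (the 100 nH feed inductance and the 50A-in-1µs load step are assumed round numbers), the inductive excursion from a rack-length feed dwarfs a ±5% window on 12V:

# Inductive voltage excursion during a CPU load step (illustrative values)
feed_inductance = 100e-9   # H, assumed inductance of a rack-length 12V feed
current_step = 50          # A, assumed step when cores leave an idle power state
step_time = 1e-6           # s, assumed transition time

excursion_v = feed_inductance * current_step / step_time   # V = L * di/dt
tolerance_v = 0.05 * 12
print(f"Transient excursion: {excursion_v:.1f}V vs allowed {tolerance_v:.2f}V")
# Output: Transient excursion: 5.0V vs allowed 0.60V

Server-level PSUs and on-board regulators sit millimetres from the load precisely so that this inductance stays negligible.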

Some hyperscalers are experimenting with 48V distribution to racks with localized conversion:

// Example 48V power architecture (stub classes stand in for real PDU/converter models)
class RackPowerUnit {}                 // stub: rack-level 48V distribution unit

class PointOfLoadConverter {
  constructor({ input, outputs }) {
    this.input = input;                // distribution voltage feeding the converter
    this.outputs = outputs;            // rails derived locally, e.g. [12, 5, 3.3]
  }
}

class PowerArchitecture {
  constructor() {
    this.distributionVoltage = 48;     // V, rack-level DC bus
    this.rackPDU = new RackPowerUnit();
    this.serverPSUs = [];
  }

  addServer(serverConfig = {}) {
    // Each server gets its own point-of-load converter fed from the 48V bus
    const psu = new PointOfLoadConverter({
      input: this.distributionVoltage,
      outputs: serverConfig.rails ?? [12, 5, 3.3]
    });
    this.serverPSUs.push(psu);
    return psu;
  }
}

While 12V battery UPS systems exist, modern datacenters prefer:

  1. 480V AC flywheel systems for short-term bridging
  2. Medium-voltage DC architectures for large installations
  3. Per-rack battery modules for granular backup

The efficiency gains from eliminating AC-DC conversion are often offset by the complexity of maintaining hundreds of battery strings at scale.

Centralized power conversion concentrates heat generation. Server-level PSUs allow for:

  • Distributed thermal load across racks
  • Optimized airflow per chassis
  • Granular cooling control

This becomes critical in high-density deployments where thermal management directly impacts reliability.
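
A rough sketch of the heat involved (the rack count, rack power, and conversion efficiency below are assumptions chosen for round numbers):

# Where the conversion heat lands: per-rack PSUs vs one central converter room
racks = 10
power_per_rack_w = 30_000   # W, assumed rack load
efficiency = 0.94           # assumed conversion efficiency in either architecture

loss_per_rack_w = power_per_rack_w * (1 - efficiency)
print(f"Distributed: ~{loss_per_rack_w / 1000:.1f} kW of PSU heat in each rack")
print(f"Centralized: ~{loss_per_rack_w * racks / 1000:.0f} kW of heat in one spot")
# Output: Distributed: ~1.8 kW of PSU heat in each rack
#         Centralized: ~18 kW of heat in one spot

The total loss is the same either way, but the centralized case needs dedicated cooling for a single 18 kW hotspot instead of riding on the airflow each chassis already has.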


At first glance, a centralized DC power distribution system seems ideal for datacenters. Modern servers primarily operate on three voltage rails: +12V for CPU/GPU power, +5V for legacy peripherals, and +3.3V for RAM and chipsets. The apparent efficiency gains from eliminating redundant AC-DC conversions in individual server PSUs are tempting, but several technical constraints make this approach impractical.

Modern high-performance servers can draw over 1kW per rack unit. A 42U rack at 30kW would require 2,500 amps at 12V, exceeding the safe capacity of standard busbars and requiring impractically thick cabling. Compare this with 3-phase AC distribution at 208V, where the same load requires only about 83 amps per phase.

# Example current calculation for a 30kW rack
import math

def calculate_current(power, voltage):
    return power / voltage

# 12V DC system
print(f"12V system current: {calculate_current(30000, 12):.0f}A")
# Output: 12V system current: 2500A

# 208V AC system (3-phase): I = P / (sqrt(3) * V_line-to-line)
print(f"208V system current: {30000 / (math.sqrt(3) * 208):.0f}A per phase")
# Output: 208V system current: 83A per phase

In large-scale deployments, centralized DC systems suffer from significant voltage drops over distance. A 12V system with 100 feet of 4/0 AWG cable (typical for high-current DC) would experience approximately:

# Voltage drop calculation
def voltage_drop(current, resistance_per_foot, length):
    return current * resistance_per_foot * length * 2  # Round trip

# 4/0 AWG: 0.000049 ohms/ft
print(f"Voltage drop: {voltage_drop(2500, 0.000049, 100):.2f}V")
# Output: Voltage drop: 24.50V (yes, this would exceed 12V!)

Distributed PSU architecture provides critical fault isolation. A centralized DC system creates a single point of failure: if the main 12V supply fails, hundreds of servers go offline simultaneously. Modern server designs implement N+1 or 2N redundancy at the PSU level.
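
A simple illustration of what that redundancy buys; the 2% annual failure probability is an assumption, and the two supplies are treated as failing independently:

# Outage exposure: redundant per-server PSUs vs a shared centralized supply
p_fail = 0.02            # assumed annual failure probability of one supply
p_dual = p_fail ** 2     # both supplies of a redundant pair failing together

print(f"Single PSU per server: {p_fail:.1%} chance of losing that server")
print(f"Dual PSUs per server:  {p_dual:.2%} chance of losing that server")
print(f"Shared 12V plant:      {p_fail:.1%} chance of losing every server on it")
# Output: Single PSU per server: 2.0% chance of losing that server
#         Dual PSUs per server:  0.04% chance of losing that server
#         Shared 12V plant:      2.0% chance of losing every server on it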

While eliminating AC-DC conversions seems efficient, modern server PSUs achieve >94% efficiency at typical loads. Centralized systems would require massive DC-DC converters at rack level, which often have worse efficiency than properly loaded server PSUs.
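
A quick way to see why the gains are marginal is to multiply the stage efficiencies end to end; every figure below is an assumed, round-number efficiency rather than a measured one:

# End-to-end efficiency of the two conversion chains (illustrative stage values)
def chain_efficiency(*stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Conventional: online UPS -> server PSU (AC-DC) -> on-board VRMs
conventional = chain_efficiency(0.96, 0.94, 0.95)
# Centralized DC: facility rectifier -> rack-level DC-DC -> on-board VRMs
centralized = chain_efficiency(0.96, 0.95, 0.95)

print(f"Conventional chain: {conventional:.1%}")   # ~85.7%
print(f"Centralized chain:  {centralized:.1%}")    # ~86.6%

A point or so of headline efficiency rarely pays for redesigning the distribution plant, especially once lightly loaded rack-level converters drift below their peak-efficiency operating point.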

Some hyperscalers are experimenting with 48V rack-level distribution (reducing current by 4x compared to 12V), but this still requires per-server DC-DC conversion and introduces new challenges with legacy 5V/3.3V requirements. The Open Compute Project's 48V design shows promise but hasn't seen widespread adoption.

// Example of modern power sequencing requirements
void power_on_sequence() {
    enable_12v_rail();   // First bring up 12V
    delay(100);          // Stabilization period
    enable_5v_rail();    // Then 5V for peripherals  
    delay(50);
    enable_3v3_rail();   // Finally 3.3V for chipsets
    // Modern servers require precise timing between rails
}

While 12V battery banks could theoretically power DC systems directly, modern UPS designs use higher voltage battery strings (typically 192V or 384V) for efficiency. Converting these to server-level voltages would still incur losses comparable to AC inversion.
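
A rough sketch of why the strings are run at higher voltage; the 500 kW module rating is an assumption for illustration:

# Battery string current for one UPS module at different string voltages
ups_power_w = 500_000      # W, assumed UPS module rating
for string_voltage in (12, 192, 384):
    current_a = ups_power_w / string_voltage
    print(f"{string_voltage:>3}V string: {current_a:,.0f}A")
# Output:  12V string: 41,667A
#         192V string: 2,604A
#         384V string: 1,302A

At 12V the interconnects, fusing, and contact resistances become the dominant engineering problem, which is exactly what the higher-voltage strings avoid.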