SAS/SATA Bandwidth Allocation and Disk Expansion: Technical Deep Dive for Storage Systems


When a motherboard specifies SATA 6Gb/s with 8 ports, each port has dedicated 6Gb/s bandwidth. Unlike some network interfaces where bandwidth is shared, SATA ports operate independently. This means:

# Theoretical maximum throughput per port:
# 6 Gb/s line rate less 8b/10b encoding overhead = 600 MB/s
SATA_III_BW = 600  # MB/s
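
The effective figure falls straight out of the encoding overhead; a quick sanity check in Python:

# Deriving SATA III's usable throughput from the 6 Gb/s line rate
line_rate_gbps = 6.0
encoding_efficiency = 8 / 10    # 8b/10b puts 10 bits on the wire for every 8 data bits
usable_mb_per_s = line_rate_gbps * encoding_efficiency * 1000 / 8
print(usable_mb_per_s)          # 600.0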

While each port has full bandwidth, the host controller's PCIe lane allocation creates a bottleneck. Most consumer motherboards use:

  • x4 PCIe 3.0 (~4GB/s total) for the SATA controller
  • x2 PCIe 4.0 (~4GB/s total) in newer designs

This means with 8 ports active simultaneously:

total_throughput = min(8 * 600, 4000)  # MB/s; the PCIe uplink, not the ports, is the limit
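
A minimal sketch of that bottleneck calculation, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane:

# Usable bandwidth per PCIe lane in MB/s, after encoding overhead (approximate)
PCIE_LANE_MBPS = {"3.0": 985, "4.0": 1969}

def sata_aggregate_mbps(ports, pcie_gen="3.0", lanes=4):
    # Aggregate SATA demand is capped by the controller's PCIe uplink
    demand = ports * 600                        # 600 MB/s effective per SATA III port
    uplink = PCIE_LANE_MBPS[pcie_gen] * lanes
    return min(demand, uplink)

print(sata_aggregate_mbps(8))                   # 3940: the uplink limits, not the ports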

Standard SATA doesn't support multiple disks per port, but you can use:

  1. SATA port multipliers (commonly 1-to-5, though the spec allows up to 15 devices, all sharing the one 6Gb/s host link)
  2. PCIe SATA expansion cards (recommended solution)

# Example Linux command to check SATA topology
$ lsblk -o NAME,MODEL,TRAN
sda       Samsung_SSD  sata
sdb       WD_HDD       sata
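
If you need the same check programmatically, a minimal sketch using lsblk's JSON output (the -J flag) could look like this:

import json
import subprocess

def sata_devices():
    # lsblk -J emits JSON; keep only block devices attached over SATA
    out = subprocess.run(["lsblk", "-J", "-o", "NAME,MODEL,TRAN"],
                         capture_output=True, text=True, check=True)
    return [dev for dev in json.loads(out.stdout)["blockdevices"]
            if dev.get("tran") == "sata"]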

SAS controllers use expanders to achieve high device counts:

Component      Function
SAS Expander   Acts like a network switch for storage (12Gb/s per lane)
Wide Port      Combines 4x SAS lanes into one link (48Gb/s total bandwidth)
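
Wide ports make large drive counts practical, but they also introduce oversubscription; a rough calculation, assuming 24 drives behind a single 4-lane 12Gb/s uplink:

# Oversubscription behind a SAS wide port (illustrative numbers)
uplink_gbps = 4 * 12                    # wide port: four 12 Gb/s lanes
drives, per_drive_gbps = 24, 12         # each drive could burst at full link speed
oversubscription = (drives * per_drive_gbps) / uplink_gbps
print(oversubscription)                 # 6.0: fine for HDDs, tight for SSD arrays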

Common SAS topologies:

SAS_Controller(4 ports) → Expander(24 ports) → Disk_Enclosure(12 disks)
                                             → Second_Expander(24 ports)

For optimal performance in code:

# When writing disk-intensive applications, select the I/O scheduler per device:
def configure_io_scheduler(device, scheduler):
    # "none"/"mq-deadline" suit SSDs; "bfq" suits HDD arrays
    # (legacy single-queue names: noop/deadline and cfq)
    with open(f"/sys/block/{device}/queue/scheduler", "w") as f:
        f.write(scheduler)

# Recommended queue depths:
SATA_SSD_QUEUE_DEPTH = 32   # NCQ caps SATA at 32 outstanding commands
SAS_HDD_QUEUE_DEPTH = 64    # SAS TCQ allows deeper queues
NVMe_QUEUE_DEPTH = 256      # per queue; the NVMe spec allows far more
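
A hypothetical call tying the two together (device names are examples only):

configure_io_scheduler("sda", "mq-deadline")  # SATA SSD
configure_io_scheduler("sdb", "bfq")          # HDD array member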

Remember that SAS's switched architecture gives every device a full-speed point-to-point link (though the wide-port uplink still caps aggregate throughput), while SATA's parallel access is limited by the host controller's PCIe bandwidth.


To recap the SATA side in more detail: with a SATA 6Gb/s interface on a motherboard, each port operates independently at the full 6Gb/s speed (theoretical maximum). This means:

// Conceptual representation of SATA bandwidth
sata_ports = {
  port1: { max_speed: "6Gb/s", current_speed: "550MB/s" },
  port2: { max_speed: "6Gb/s", current_speed: "520MB/s" },
  // ... up to port8
}

However, the host controller's PCIe lane allocation creates a shared upstream bottleneck. For example:

  • 4x PCIe 3.0 lanes = ~4GB/s total bandwidth
  • 8 ports × 6Gb/s = 48Gb/s raw (~4.8GB/s effective) potential demand, worked through in the sketch below
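
A quick sketch of what that mismatch means per drive when all eight ports stream at once:

# Per-drive share of a saturated ~4 GB/s controller uplink
uplink_mbps = 4000
active_ports = 8
per_drive_mbps = uplink_mbps / active_ports
print(per_drive_mbps)   # 500.0 MB/s, below a SATA SSD's ~550 MB/s sequential rate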

SATA doesn't support daisy-chaining like SAS. To exceed 8 disks:

# Bash command to check available SATA controllers
lspci | grep -i sata
# Expected output showing multiple controllers if present

Solutions include:

  1. Adding PCIe SATA expansion cards
  2. Using SAS HBAs in IT mode (these also support SATA drives)

SAS controllers achieve high device counts through:

// SAS topology example
sas_controller = {
  phys_ports: 4,
  max_devices: 64,
  expander_chips: [
    { model: "SAS2x36", upstream: "4x 12Gb/s", downstream: "36 devices"}
  ]
}

Common SAS expander configurations:

Type             Ports   Max Devices
Edge Expander    12-24   128
Fanout Expander  36-48   1024+

When designing storage arrays:

# Python sketch for estimating bandwidth under contention
def calculate_io_contention(ports, devices_per_port):
    theoretical_bw_gbps = ports * 6     # 6 Gb/s per SATA port
    # Empirical estimate: each device sharing a port costs ~20% of throughput
    contention_factor = min(1, devices_per_port * 0.2)
    return theoretical_bw_gbps * (1 - contention_factor)
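
For example, eight ports with two devices each comes out to 48Gb/s * (1 - 0.4) = 28.8Gb/s under this model:

print(calculate_io_contention(8, 2))    # 28.8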

Best practices:

  • SAS: Balance devices across expander zones (a round-robin sketch follows below)
  • SATA: Avoid saturating the host controller's PCIe uplink
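
As a closing illustration, a minimal and hypothetical round-robin assignment of drives to expander zones (zone names and drive IDs are placeholders):

# Spread drives evenly across expander zones to avoid hot uplinks
def balance_across_zones(drives, zones):
    assignment = {zone: [] for zone in zones}
    for i, drive in enumerate(drives):
        assignment[zones[i % len(zones)]].append(drive)
    return assignment

print(balance_across_zones(["sd" + c for c in "abcdef"], ["zone0", "zone1"]))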