While working on HP ProLiant Gen8 server SSD solutions, I encountered an interesting PCIe lane negotiation issue with OWC's Accelsior E2 card. Despite being physically wired as x2, the device consistently negotiates only x1 width, effectively halving its potential bandwidth to ~410MB/s.
The lspci -vvv output reveals the disconnect between capabilities and actual negotiation:
LnkCap: Port #0, Speed 5GT/s, Width x2, ASPM L0s L1
LnkSta: Speed 5GT/s, Width x1
This indicates the device could run at x2 width (LnkCap) but is actually operating at x1 (LnkSta). The Marvell 9230 controller appears to be particularly sensitive to PCIe lane negotiation.
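The LnkCap/LnkSta comparison is easy to automate. A minimal sketch, assuming lspci -vvv text is piped in on stdin (the link_degraded function name is my own, not a standard tool):

```shell
#!/bin/bash
# link_degraded: compare the capable width (LnkCap) against the
# negotiated width (LnkSta) in `lspci -vvv` text read from stdin.
# Hypothetical helper, not part of pciutils.
link_degraded() {
    local input cap sta
    input=$(cat)
    cap=$(sed -n 's/.*LnkCap:.*Width \(x[0-9]*\).*/\1/p' <<<"$input" | head -n1)
    sta=$(sed -n 's/.*LnkSta:.*Width \(x[0-9]*\).*/\1/p' <<<"$input" | head -n1)
    if [ "$cap" = "$sta" ]; then
        echo "OK: link running at full $sta"
    else
        echo "DEGRADED: capable $cap, negotiated $sta"
    fi
}

# Example against the output captured above:
printf 'LnkCap: Port #0, Speed 5GT/s, Width x2, ASPM L0s L1\nLnkSta: Speed 5GT/s, Width x1\n' \
    | link_degraded    # prints: DEGRADED: capable x2, negotiated x1
```

On a live system the same check is simply lspci -vvv -s 05:00.0 | link_degraded, which makes it easy to drop into cron or a post-boot hook.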
Several technical approaches exist to attempt forcing the correct link width:
1. Kernel Module Parameters
For Linux systems using the ahci driver:
# Enable the Marvell AHCI path via a module option, then reload the driver
echo "options ahci marvell_enable=1" > /etc/modprobe.d/ahci-marvell.conf
modprobe -r ahci    # will fail if the root filesystem sits on an ahci device; reboot instead
modprobe ahci
2. PCIe Link Retraining
Force retraining the link after system boot:
# Set the Retrain Link bit (bit 5) of the Link Control register at CAP_EXP+0x10;
# the 0x20 mask limits the write to that bit so other settings are preserved
setpci -s 05:00.0 CAP_EXP+0x10.w=0x20:0x20
3. BIOS/UEFI Tweaks
Some server BIOSes offer PCIe link control:
Advanced → PCI Express Configuration → PCIe Speed → Gen2
Advanced → PCIe Link Width → Force x2
PCIe x2 is indeed a rare configuration. If software methods fail, consider:
- Using a different PCIe slot (some root ports handle lane splitting better)
- Trying a PCIe riser card with forced lane configuration
- Contacting Marvell for controller-specific firmware updates
Create a simple monitoring script to watch for link changes:
#!/bin/bash
while true; do
    lspci -vvv -s 05:00.0 | grep -e "LnkSta:" -e "LnkCtl:"
    sleep 1
done
PCI Express x2 remains one of the most perplexing lane configurations in modern systems. Although x2 is a valid width in the PCIe specification, many host controllers struggle to negotiate it correctly. The issue manifests clearly when examining devices like the OWC Accelsior E2 RAID card:
LnkCap: Port #0, Speed 5GT/s, Width x2
LnkSta: Speed 5GT/s, Width x1 # Actual negotiated width
Through testing across multiple HP ProLiant generations (G6-G8), we've identified three primary failure modes:
- BIOS-level negotiation: Many server BIOS implementations lack proper x2 support
- Link training fallback: Controllers default to x1 when x2 negotiation fails
- Slot compatibility: Physical x4/x8 slots may not electrically support x2
For Linux systems, we can manipulate link parameters through sysfs:
# First identify the PCI device address
lspci -d 1b4b:9230 -vv | grep LnkSta
# Trigger a function-level reset (resets the device; the link retrains
# as a side effect) - may require root
echo 1 > /sys/bus/pci/devices/0000:05:00.0/reset
# Alternatively set ASPM L1 (bit 1) and Common Clock Configuration (bit 6)
# in Link Control; note the negotiated width itself is read-only, so this
# can only influence training, not force x2 directly
setpci -s 05:00.0 CAP_EXP+0x10.w=0x42:0x42
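The result can be read back without parsing lspci: the Link Status register sits at CAP_EXP+0x12, with the speed code in bits 3:0 and the negotiated width in bits 9:4. A bash sketch of the decoding (decode_lnksta is an illustrative helper, and the sample values are typical for this card):

```shell
#!/bin/bash
# decode_lnksta: decode a hex Link Status value as returned by
# `setpci -s 05:00.0 CAP_EXP+0x12.w`. Speed code 1 = 2.5GT/s,
# 2 = 5GT/s; width is the raw negotiated lane count.
# Hypothetical helper for illustration.
decode_lnksta() {
    local v=$((16#$1))
    echo "speed code $((v & 0xf)), width x$(((v >> 4) & 0x3f))"
}

decode_lnksta 0012   # prints: speed code 2, width x1
decode_lnksta 0022   # prints: speed code 2, width x2
```

Reading this register immediately after setting the Retrain Link bit shows whether the retrain actually changed anything.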
For Marvell controllers specifically, ASPM is a common culprit. Note that pcie_aspm and pcie_ports are kernel boot parameters, not ahci module options, so they cannot be passed to modprobe and belong on the kernel command line instead:
GRUB_CMDLINE_LINUX="pci=assign-busses pcie_aspm=off pcie_ports=compat"
For stubborn cases, we've had success with PCIe riser cards that implement proper lane splitting. The Delock 89282 demonstrates consistent x2 negotiation:
# Before riser:
LnkSta: Width x1
# After riser installation:
LnkSta: Width x2
Confirm actual throughput improvement with fio; large sequential reads saturate link bandwidth, whereas a 4k random workload mostly measures IOPS:
fio --filename=/dev/md0 --direct=1 --rw=read \
    --ioengine=libaio --bs=1M --numjobs=4 \
    --iodepth=32 --runtime=60 --group_reporting \
    --name=throughput-test
Expected bandwidth should approximately double from ~400MB/s to ~800MB/s when moving from x1 to x2 at PCIe 2.0 speeds.
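Those figures follow directly from the per-lane rate: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, so each lane carries 8/10 of 5 Gb/s = 4 Gb/s, or 500 MB/s before protocol overhead. A quick sanity check (pcie2_mbps is an illustrative helper):

```shell
#!/bin/bash
# Theoretical PCIe 2.0 bandwidth: 5 GT/s per lane, 8b/10b encoding
# (8 data bits per 10 bits on the wire), so 4 Gb/s = 500 MB/s per lane.
pcie2_mbps() {
    local lanes=$1
    echo $(( lanes * 5 * 8 / 10 * 1000 / 8 ))   # MB/s
}

echo "x1: $(pcie2_mbps 1) MB/s theoretical"   # 500
echo "x2: $(pcie2_mbps 2) MB/s theoretical"   # 1000
```

Real-world throughput typically lands around 80% of these theoretical numbers once TLP headers and flow control are accounted for, which lines up with the ~400 and ~800 MB/s figures above.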
Some enterprise BIOS implementations allow manual PCIe configuration. On HP ProLiant servers this lives in RBSU (press F9 during POST) rather than in iLO:
# RBSU -> Advanced Options -> PCI Configuration ->
# Set "PCIe Generation" to 2.0 and
# "Link Width" to Force x2