When configuring a budget server with mixed storage types, the RAID 1 mirror between SSD and HDD presents unique technical challenges. While this setup provides basic redundancy, the performance characteristics diverge significantly from homogeneous RAID arrays.
In most RAID 1 implementations, reads can be distributed across mirrors for improved performance. However, with SSD+HDD pairing:
- Linux MD RAID balances reads across both members on a per-request basis (roughly by head position and load, not a strict round-robin)
- Some controllers may prioritize the faster device
- Unless reads are steered to the SSD, aggregate throughput often ends up closer to HDD speed
Example Linux MD RAID read performance test:
# Check the array's readahead setting (readahead size, not a read-balancing policy)
cat /sys/block/mdX/queue/read_ahead_kb

# Benchmark read speed (SSD+HDD array vs SSD alone)
hdparm -tT /dev/mdX
hdparm -tT /dev/sdX   # SSD device
RAID 1 requires all writes to complete on both devices before acknowledging completion. This creates:
- Write latency bound by the slower HDD
- Potential SSD wear-leveling interference
- Queue depth contention
Example write benchmark comparison:
# Test raw write speed
dd if=/dev/zero of=/mnt/raid/testfile bs=1G count=1 oflag=direct

# Check I/O wait
iostat -xmd 1
Consider these more performant alternatives:
Option 1: SSD Mirror + HDD Backup
# Daily rsync from SSD to HDD
rsync -a --delete /ssd_mount/ /hdd_backup/
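To actually run the copy daily, a cron entry is the simplest approach; this is a minimal sketch reusing the same placeholder mount points as above:

# /etc/cron.d/ssd-backup: nightly copy from the SSD to the HDD at 02:30
30 2 * * * root rsync -a --delete /ssd_mount/ /hdd_backup/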
Option 2: SSD Caching with bcache
# Set up bcache: -B is the backing HDD, -C is the SSD/NVMe cache device
make-bcache -B /dev/sdX -C /dev/nvme0n1
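make-bcache only formats the two devices; the remaining steps look roughly like the sketch below (device names follow the example above, and <cset-uuid> is a placeholder for whatever bcache-super-show reports for the cache device):

# Register the devices with the kernel (udev usually does this automatically)
echo /dev/sdX > /sys/fs/bcache/register
echo /dev/nvme0n1 > /sys/fs/bcache/register

# Attach the backing device to the cache set (UUID from: bcache-super-show /dev/nvme0n1)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Optional: writeback mode hides HDD write latency (at some risk on power loss)
echo writeback > /sys/block/bcache0/bcache/cache_mode

# Use the combined device like any other block device
mkfs.xfs /dev/bcache0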
Option 3: ZFS Special Device
# ZFS pool with metadata on an SSD mirror
zpool create tank mirror sda sdb special mirror nvme0n1 nvme0n2
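By default the special vdev holds only metadata; OpenZFS can also send small data blocks to it via a per-dataset property (the 32K cutoff here is just an illustrative value):

# Also store data blocks up to 32K on the special (SSD) vdev
zfs set special_small_blocks=32K tank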
In real-world testing of an mdadm RAID 1 with:
- Samsung 870 EVO SSD (530MB/s writes)
- WD Blue HDD (120MB/s writes)
Observed metrics:
# Typical mixed array performance
4K random write:   ~85 IOPS    (HDD-limited)
Sequential write:  ~110 MB/s   (HDD-limited)
Read throughput:   ~180 MB/s   (partial SSD benefit)
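For comparison on your own hardware, fio jobs along these lines measure the same three numbers; this is a sketch, with the test file path, size, and queue depths as illustrative placeholders:

# 4K random write IOPS
fio --name=randwrite --filename=/mnt/raid/fio.test --size=1G --direct=1 --ioengine=libaio \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

# Sequential write throughput
fio --name=seqwrite --filename=/mnt/raid/fio.test --size=1G --direct=1 --ioengine=libaio \
    --rw=write --bs=1M --iodepth=8 --runtime=60 --time_based

# Sequential read throughput
fio --name=seqread --filename=/mnt/raid/fio.test --size=1G --direct=1 --ioengine=libaio \
    --rw=read --bs=1M --iodepth=8 --runtime=60 --time_based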
Linux MD and the filesystem layer offer some tuning for heterogeneous mirrors; the most useful knob is the write-mostly flag, which tells the RAID 1 driver to avoid reading from the flagged member:
# Flag the HDD member as write-mostly so reads are steered to the SSD (sdY = the HDD member of mdX)
echo writemostly > /sys/block/mdX/md/dev-sdY/state
cat /sys/block/mdX/md/dev-sdY/state   # verify; mdadm can also set this when adding a device (--add/--re-add --write-mostly)

# XFS on RAID 1 needs no stripe geometry hints (su=/sw= only apply to striped levels)
mkfs.xfs /dev/mdX
When implementing a RAID 1 array with mixed storage media (SSD + HDD), several performance characteristics emerge:
# Example Linux mdadm command for mixed RAID 1
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
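To have the array reassemble cleanly at boot, it's common to record it in mdadm.conf afterwards; a minimal sketch (the config path is the Debian/Ubuntu location, and update-initramfs is distribution-specific):

# Persist the array definition and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u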
Linux mdadm does not automatically prefer the faster member; by default it balances reads across both mirrors. To steer reads to the SSD, flag the HDD as write-mostly and verify its state:
# Check the HDD member's flags (assumes /dev/sdb is the HDD)
cat /sys/block/md0/md/dev-sdb/state

# Flag it write-mostly so reads prefer the SSD
echo writemostly > /sys/block/md0/md/dev-sdb/state
With the HDD flagged write-mostly, typical read speeds will approximate the SSD's performance (400-550 MB/s for SATA SSDs) rather than the HDD's (80-160 MB/s).
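To confirm that reads are actually landing on the SSD, watch per-member throughput while a read test runs; a rough sketch using the device names from the example above:

# Terminal 1: per-device throughput at 1-second intervals
iostat -xmd 1 sda sdb

# Terminal 2: a cache-bypassing read from the array
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct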
The array must wait for both devices to complete writes:
# Benchmarking write speed (example using dd)
dd if=/dev/zero of=/mnt/raid/testfile bs=1G count=1 oflag=direct
Expect write speeds to match the HDD's capabilities (typically 100-200MB/s), as it becomes the bottleneck.
This configuration makes sense when:
- You need some redundancy but can't afford two SSDs
- The system has mostly read operations (web servers, read-heavy databases); see the quick check after this list
- You're willing to sacrifice write speed for data protection
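If you're unsure how read-heavy a workload really is, the cumulative I/O counters give a quick answer; a rough sketch (the device name is an example):

# Compare read vs write volume since boot (kB_read vs kB_wrtn columns)
iostat -d /dev/md0

# Or the raw counters: field 6 = sectors read, field 10 = sectors written
awk '$3 == "md0" {print "sectors read:", $6, "sectors written:", $10}' /proc/diskstats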
For better performance, consider tiered storage with an SSD cache; note that, unlike the mirror, this layout provides no redundancy on its own:
# LVM setup with an SSD cache (sda) in front of HDD storage (sdb); the --cachevol form needs a recent LVM (2.03+)
pvcreate /dev/sda /dev/sdb
vgcreate vg0 /dev/sda /dev/sdb
lvcreate -L 20G -n cache vg0 /dev/sda
lvcreate -l 100%FREE -n data vg0 /dev/sdb
lvconvert --type cache --cachevol cache vg0/data
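Once the cache is attached, lvs can report hit/miss counters that show whether the SSD is actually absorbing the hot data (LV and VG names follow the sketch above):

# Show the cached LV along with its cache statistics
lvs -a -o name,size,segtype,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses vg0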
Essential commands for maintaining mixed RAID:
# Check array status
mdadm --detail /dev/md0

# Monitor disk performance
iostat -xmd 1

# SMART monitoring
smartctl -a /dev/sda
smartctl -a /dev/sdb
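It's also worth letting md raise alerts on its own; a minimal sketch, assuming the host can deliver local mail:

# Run the mdadm monitor as a daemon and mail alerts on degraded/failed arrays
mdadm --monitor --scan --daemonise --mail=root
# Many distributions do the equivalent via a systemd unit plus MAILADDR in mdadm.conf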