Both methods you found are technically correct; they simply come from different generations of LVM's RAID implementation:
- The -i (stripes) and -I (stripesize) parameters are the traditional way to create striped LVs (pre-RAID LVM era)
- The --type raid0 syntax was introduced later to provide explicit RAID support in LVM2
# Traditional striped volume approach:
lvcreate -i2 -I4 -l100%FREE -n striped_vol vg_name /dev/sda /dev/sdb
# Modern RAID0 approach:
lvcreate --type raid0 --stripes 2 --stripesize 4 -l100%FREE -n raid0_vol vg_name /dev/sda /dev/sdb
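The practical difference shows up in the segment type LVM reports for the resulting volumes (assuming the two example LVs above were created in vg_name):
# The legacy -i/-I syntax yields segtype "striped"; --type raid0 yields segtype "raid0"
lvs -o lv_name,segtype vg_name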
The LVM RAID implementation is indeed performed at the logical volume level rather than the volume group level. This design offers several advantages:
- Flexibility to create different RAID types for different LVs
- Ability to mix RAID and non-RAID volumes in the same VG (see the example below)
- Simpler management than hardware RAID or MD RAID
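For instance, a mirrored RAID1 volume and a plain linear volume can live side by side in one VG. A minimal sketch, assuming a VG named vg_name with enough free space (the LV names are illustrative):
# Hypothetical: a mirrored LV and a plain linear LV coexisting in the same VG
lvcreate --type raid1 -m 1 -L 20G -n documents vg_name
lvcreate -L 50G -n scratch vg_name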
For your SSD-based gaming system, here's an optimized setup script:
# Create physical volumes
pvcreate /dev/sda /dev/sdc
# Create volume group
vgcreate vg_ssd /dev/sda /dev/sdc
# Create RAID0 volumes with 128KB stripe size (optimal for SSD)
lvcreate --type raid0 --stripes 2 --stripesize 128 -n root -L 100G vg_ssd
lvcreate --type raid0 --stripes 2 --stripesize 128 -n swap -L 4G vg_ssd
lvcreate --type raid0 --stripes 2 --stripesize 128 -n games -l 100%FREE vg_ssd
# Format and mount
mkfs.ext4 /dev/vg_ssd/root
mkswap /dev/vg_ssd/swap
mkfs.ext4 /dev/vg_ssd/games
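The mount half of that step depends on your layout; a minimal sketch, assuming the games volume goes under /mnt/games (a hypothetical mount point) and using the noatime option recommended below. Root and swap are normally handled by the installer and /etc/fstab:
# Example mount step; /mnt/games is a placeholder path
mkdir -p /mnt/games
mount -o noatime /dev/vg_ssd/games /mnt/games
swapon /dev/vg_ssd/swap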
After creation, verify your setup:
# Check LV attributes
lvs -o +devices,segtype,stripes,stripe_size
# Performance test
hdparm -tT /dev/vg_ssd/root
fio --filename=/dev/vg_ssd/games --rw=read --bs=128k --ioengine=libaio --iodepth=32 --runtime=60 --numjobs=4 --group_reporting --name=throughput-test
- Always maintain backups with RAID0 due to increased failure risk
- Monitor SSD wear leveling with smartctl (example below)
- Consider adding a third SSD for RAID5 if data integrity becomes important
- For gaming, use XFS or ext4 with the noatime mount option
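For the wear check, one possible starting point (attribute names vary by drive model, so adjust the pattern to whatever your drives actually report):
# SATA SSDs typically expose Wear_Leveling_Count or Media_Wearout_Indicator; NVMe drives report "Percentage Used"
smartctl -a /dev/sda | grep -i -E 'wear|percentage used'
smartctl -a /dev/sdc | grep -i -E 'wear|percentage used'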
This LVM-based approach provides flexibility that traditional RAID solutions can't match, while still delivering excellent performance for your gaming setup.
When implementing RAID0 with LVM, you'll encounter two distinct syntax patterns. Both are technically correct but represent different generations of LVM implementation:
# Traditional striped logical volume approach (legacy)
lvcreate -i2 -I4 -l100%FREE -n raid_lv vg_name /dev/sda /dev/sdc
# Modern explicit RAID0 declaration
lvcreate --type raid0 --stripes 2 --stripesize 4 -l100%FREE -n raid_lv vg_name /dev/sda /dev/sdc
LVM's architecture intentionally implements RAID functionality at the logical volume level rather than the volume group level for several reasons:
- Flexibility: Allows mixing different RAID levels within the same VG
- Granularity: Permits optimizing RAID configuration per use case (OS vs swap vs storage)
- Migration Path: Enables changing RAID levels without rebuilding the entire storage stack (see the sketch after this list)
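As a rough illustration of that migration path (VG and LV names here are hypothetical, and some conversions require intermediate steps or additional devices):
# Hypothetical: add redundancy to an existing linear LV by converting it to RAID1
lvconvert --type raid1 -m 1 vg_name/data_lv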
For your specific case of two SSDs for system and gaming performance:
# Optimal SSD RAID0 configuration example:
sudo lvcreate --type raid0 --stripes 2 --stripesize 128 -L 100G -n root vg_ssd /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.ext4 -b 4096 -E stride=32,stripe-width=64 /dev/vg_ssd/root
Key recommendations for SSD RAID0:
- Use larger stripe sizes (128K-256K) for modern SSDs
- Align filesystem parameters with the RAID geometry (with a 128 KiB stripe and 4 KiB blocks, ext4's stride = 128/4 = 32 and stripe-width = stride x 2 data disks = 64, as in the mkfs.ext4 call above)
- Consider using XFS for better parallel I/O performance (example below)
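If you go with XFS, a minimal sketch passing the same 2 x 128 KiB geometry to mkfs.xfs (the games volume is used as an example):
# su = stripe unit (128 KiB), sw = number of data stripes (2)
mkfs.xfs -d su=128k,sw=2 /dev/vg_ssd/games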
Here's a complete workflow for setting up your system:
# Initialize physical volumes
pvcreate /dev/nvme0n1 /dev/nvme1n1
# Create volume group
vgcreate vg_ssd /dev/nvme0n1 /dev/nvme1n1
# Create RAID0 logical volumes
lvcreate --type raid0 --stripes 2 --stripesize 128 -L 100G -n root vg_ssd
lvcreate --type raid0 --stripes 2 --stripesize 128 -L 8G -n swap vg_ssd
lvcreate --type raid0 --stripes 2 --stripesize 128 -l 100%FREE -n games vg_ssd
# Format volumes
mkfs.ext4 -b 4096 -E stride=32,stripe-width=64 /dev/vg_ssd/root
mkswap /dev/vg_ssd/swap
mkfs.ext4 -b 4096 -E stride=32,stripe-width=64 /dev/vg_ssd/games
After implementation, monitor performance with:
# Check RAID status
lvs -o name,segtype,stripes,stripe_size
# Performance testing
hdparm -tT /dev/vg_ssd/root
fio --filename=/dev/vg_ssd/games --rw=randread --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --size=1G --runtime=60
Remember to maintain regular backups as RAID0 provides no redundancy.
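A simple starting point, assuming /mnt/backup is a placeholder for storage outside the RAID0 array (an external or network disk):
# Example off-array backup of the home directory
rsync -aHAX --delete /home/ /mnt/backup/home/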