In AWS EC2, when attaching EBS volumes (including GP2 SSD volumes), you'll encounter two primary device naming schemes:
- Traditional SCSI-style names: /dev/sd[f-p]
- Xen virtual device names: /dev/xvd[a-z]
While both naming conventions work for EBS volume attachment, there are important technical distinctions:
# Example of formatting and mounting both device types
# (two separate volumes are assumed, one visible under each name)
sudo mkfs -t ext4 /dev/sdf
sudo mkdir -p /mnt/volume1
sudo mount /dev/sdf /mnt/volume1
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /mnt/volume2
sudo mount /dev/xvdf /mnt/volume2
The /dev/sd* naming comes from Linux's SCSI disk subsystem, and it is the convention EC2 used when it originally launched. However:
- Xen-based EC2 instances expose paravirtual block devices, hence the xvd prefix
- The xvd names match what the Xen block driver actually registers in the guest kernel, so a volume attached as /dev/sdf often appears as /dev/xvdf
- Current-generation (Nitro) instances expose EBS volumes as NVMe devices instead, and the name you pass at attach time is kept only as metadata
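A quick way to see which convention your instance's kernel actually uses is to list both device families after attaching a volume (a sketch, assuming a volume was attached as /dev/sdf):
# List whichever nodes exist; on Xen instances you will typically see only xvd*
ls -l /dev/sd* /dev/xvd* 2>/dev/null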
For GP2 volumes, there's no performance difference between the naming schemes. Throughput and IOPS are determined by the volume itself (gp2 baselines at 3 IOPS per GiB, bursting to 3,000 IOPS for smaller volumes), not by the device name, which you can verify with a quick benchmark:
# Check block device performance (works for both types)
sudo hdparm -tT /dev/xvdf
sudo hdparm -tT /dev/sdf
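Note that hdparm only exercises buffered sequential reads, so it says little about the IOPS model that actually governs gp2. If fio is installed, a short random-read run against the raw device gives a more representative number; this sketch assumes the volume is /dev/xvdf and uses --readonly so no writes are ever issued:
# 30-second 4 KiB random-read test against the raw device (non-destructive)
sudo fio --name=gp2-randread --filename=/dev/xvdf --rw=randread \
  --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
  --runtime=30 --time_based --readonly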
When writing automation scripts or infrastructure-as-code:
- Prefer /dev/xvd* names when targeting Xen-based instances, since that is how the guest kernel exposes the devices
- Keep /dev/sd* when maintaining compatibility with older tooling that expects SCSI-style names
- Always reference volumes by volume ID in AWS CLI/API calls, since the in-guest name can differ from the name you passed at attach time
# AWS CLI example showing device mapping
aws ec2 attach-volume \
  --volume-id vol-1234567890abcdef0 \
  --instance-id i-01474ef662b89480 \
  --device /dev/sdf
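Since the volume ID is the stable handle, you can confirm where a volume actually landed by querying its attachment record (same placeholder IDs as above):
# Returns the device name recorded at attach time, e.g. "/dev/sdf"
aws ec2 describe-volumes \
  --volume-ids vol-1234567890abcdef0 \
  --query 'Volumes[0].Attachments[0].Device'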
In cloud-init scripts, you might see device handling like this:
#!/bin/bash
# Match the attached volume by its serial number; on Nitro instances the
# serial is the volume ID with the dash removed (vol123 is a placeholder).
# -n drops the header, -d skips partitions so NAME is a whole disk.
DEVICE=$(lsblk -ndo NAME,SERIAL | grep vol123 | awk '{print "/dev/"$1}')
if [[ $DEVICE == /dev/xvd* ]]; then
  echo "Using Xen device"
elif [[ $DEVICE == /dev/sd* ]]; then
  echo "Using SCSI device"
elif [[ $DEVICE == /dev/nvme* ]]; then
  echo "Using NVMe device"
fi
In AWS EC2, the evolution from /dev/sdX to /dev/xvdX stems from the Xen virtualization platform's architecture. The sd (SCSI disk) prefix is inherited from Linux's SCSI subsystem, while Xen's paravirtual block driver registers devices under the xvd (Xen virtual disk) prefix, which is why a volume attached as /dev/sdf frequently surfaces in the guest as /dev/xvdf.
For General Purpose SSD (gp2) volumes, both naming conventions are functionally identical at attach time; the operational differences lie in how the guest OS exposes the device:
# Example: Attaching volume via AWS CLI
# Using /dev/sdf convention
aws ec2 attach-volume --volume-id vol-123456 --instance-id i-789abc --device /dev/sdf
# Using /dev/xvdf convention
aws ec2 attach-volume --volume-id vol-123456 --instance-id i-789abc --device /dev/xvdf
On modern Nitro-based instances, the device name you request is effectively a label: EBS volumes are exposed to the guest as NVMe block devices regardless of whether you attached them as /dev/sdf or /dev/xvdf, and there is no throughput or latency difference between the two naming schemes.
The mapping varies by instance generation:
# Check actual device mapping
lsblk
# On newer instances, /dev/xvdf may appear as /dev/nvme1n1
# Use this for accurate identification:
sudo nvme list
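On Nitro instances the attach-time name isn't lost: EBS NVMe devices carry it in the controller's vendor-specific data, which nvme id-ctrl can dump (a sketch, assuming the volume came up as /dev/nvme1n1):
# The vendor-specific section at the end of the output contains the
# requested device name (e.g. "/dev/sdf" or "sdf") as readable text
sudo nvme id-ctrl -v /dev/nvme1n1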
1. For automation scripts, implement device detection logic:
#!/bin/bash
# Probe the common EBS device names in order of likelihood
if [ -b "/dev/xvdf" ]; then
  DEVICE="/dev/xvdf"
elif [ -b "/dev/sdf" ]; then
  DEVICE="/dev/sdf"
else
  echo "No available device detected"
  exit 1
fi
# blkid exits non-zero when the device carries no filesystem signature, so
# this guard keeps re-runs from reformatting a volume that already holds data
if ! blkid "$DEVICE" >/dev/null 2>&1; then
  mkfs -t xfs "$DEVICE"
fi
2. When using CloudFormation, specify the mapping once, in the /dev/sd* form; EC2 treats the name as a request, and the guest OS may surface the volume as /dev/xvdf (Xen) or /dev/nvme1n1 (Nitro). Listing both names would define two separate volumes:
"BlockDeviceMappings": [
  {
    "DeviceName": "/dev/sdf",
    "Ebs": { "VolumeSize": 100 }
  }
]
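Once the stack is up, you can check how EC2 recorded the mapping without logging into the instance (the placeholder instance ID is reused from the CLI examples above):
# Lists each attached volume with the device name EC2 stored at attach time
aws ec2 describe-instances \
  --instance-ids i-789abc \
  --query 'Reservations[0].Instances[0].BlockDeviceMappings'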
If volumes don't appear after attachment:
- Check kernel messages (covering all three prefixes):
dmesg | grep -iE 'sd|xvd|nvme'
- Verify udev rules:
ls -la /dev/disk/by-id/
- For NVMe instances, install the CLI tooling first:
sudo apt install nvme-cli    # Debian/Ubuntu; use yum on Amazon Linux