The dd command's performance heavily depends on the block size (bs) parameter, which determines how much data is read and written in each operation. The default 512-byte size is often suboptimal for modern storage devices.
Several hardware characteristics influence the ideal block size (the commands after this list show how to check them on your system):
- Storage device cache size: Larger blocks better utilize cache buffers
- Disk sector size: Modern HDDs/SSDs typically use 4096-byte sectors
- Filesystem block size: Should align with your storage's native blocks
- System memory: Larger blocks require more RAM for buffering
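Several of these values can be checked directly; a quick look, assuming the device in question is /dev/sda (substitute your own disk):
# Physical and logical sector sizes reported by the kernel, in bytes
blockdev --getpbsz /dev/sda
blockdev --getss /dev/sda
# Available memory, to keep block sizes comfortably below free RAM
free -h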
To find your optimal size, run timing tests with various values:
# Test with 4K block size (common SSD sector size); each test below reads about 1 GiB so the times are directly comparable
time dd if=/dev/sda of=/dev/null bs=4k count=262144
# Test with 1M block size (common for HDDs)
time dd if=/dev/sda of=/dev/null bs=1M count=1024
# Test with 64K block size (good middle ground)
time dd if=/dev/sda of=/dev/null bs=64k count=16384
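To compare several sizes in one pass, here is a minimal benchmark loop. It assumes GNU dd (for iflag=count_bytes), /dev/sda as the source, and root privileges so the page cache can be dropped between runs; otherwise later runs would be served from RAM and look artificially fast:
#!/bin/sh
# Read the same 1 GiB at each block size; dd prints elapsed time and throughput per run
for bs in 4k 64k 256k 1M 4M; do
    sync
    echo 3 > /proc/sys/vm/drop_caches   # clear cached data so every run hits the disk
    echo "--- bs=$bs ---"
    dd if=/dev/sda of=/dev/null bs=$bs count=1G iflag=count_bytes
done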
Recommended starting points based on storage type:
- Traditional HDDs: 64K-1M blocks often perform best
- SSDs/NVMe: 4K-64K blocks typically work well
- Network transfers: Match the remote system's optimal size (see the piped ssh example after this list)
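For network transfers, a common approach is to pipe dd through ssh and use the same block size on both ends. A sketch, assuming SSH access to a host called backuphost and enough space under /backups (both names are placeholders):
# Stream a raw disk image to a remote machine, matching block sizes on both sides
dd if=/dev/sda bs=64k status=progress | ssh user@backuphost 'dd of=/backups/sda.img bs=64k'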
Combine block size with other parameters for maximum throughput:
# Optimal settings for fast SSD cloning
dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=64k conv=noerror,sync status=progress
# Using direct I/O to bypass the page cache (bs should be a multiple of the sector size)
dd if=/dev/sda of=/dev/sdb bs=1M iflag=direct oflag=direct status=progress
Avoid these common mistakes:
- Using blocks larger than available RAM (causes swapping)
- Mismatching block sizes between source and destination
- Ignoring error handling parameters (conv=noerror,sync); see the example after this list
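Putting those points together, a cautious example of a disk-to-disk copy that avoids the pitfalls above (/dev/sdX and /dev/sdY are placeholders; verify them first, since dd overwrites the target without asking):
# Same block size on both sides, well below available RAM, with error handling enabled
dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress
Keep in mind that conv=noerror,sync zero-pads any unreadable block up to the block size, so on a failing source disk a smaller bs (or a dedicated tool such as GNU ddrescue) loses less data per error.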
When performing disk operations with dd, the block size parameter (bs) significantly impacts performance. Let's examine how to determine the ideal value for your hardware.
Several hardware characteristics influence the ideal block size:
- Storage device sector size (typically 4K for modern HDDs/SSDs)
- Controller buffer size
- Filesystem block size
- Available system memory
- DMA capabilities
To find your optimal value, run timed tests with various block sizes:
# Test with 4K blocks (common SSD sector size); each test reads about 1 GiB so the times are comparable
time dd if=/dev/sda of=/dev/null bs=4k count=262144
# Test with 1M blocks (common optimal size)
time dd if=/dev/sda of=/dev/null bs=1M count=1024
# Test with 128K blocks (good balance for many systems)
time dd if=/dev/sda of=/dev/null bs=128k count=8192
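Note that once a region of the disk has been read, repeat tests may be served from the page cache and measure RAM rather than the device. One way around that, assuming GNU dd on Linux, is to read with direct I/O:
# iflag=direct bypasses the page cache so the test reflects the device itself
time dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct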
Based on empirical testing across various systems:
- Modern SSDs: 64K-1M blocks (aligned with flash pages)
- Traditional HDDs: 64K-256K blocks
- NVMe drives: 1M-4M blocks
- Network transfers: Match the MTU size (typically 1500-9000 bytes)
Combine block size with other performance parameters:
# Optimal for fast local SSD copying
dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=2M iflag=direct oflag=direct status=progress
# For systems with limited RAM
dd if=/dev/sda of=/dev/sdb bs=256k conv=noerror,sync status=progress
Performance typically improves with larger blocks until reaching a plateau, then may degrade due to:
- Memory pressure
- Cache limitations
- Alignment issues
Check your storage device's optimal I/O size:
# For block devices
blockdev --getbsz /dev/sda   # block size the kernel uses for this device, in bytes
blockdev --getra /dev/sda    # readahead setting, in 512-byte sectors
# For filesystems (if copying files)
tune2fs -l /dev/sda1 | grep Block
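The kernel also exposes the device's reported I/O size hints through sysfs; for a disk named sda (adjust the path to your device):
# Sector sizes and the optimal I/O size hint, in bytes (0 means the device reports no preference)
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
cat /sys/block/sda/queue/minimum_io_size
cat /sys/block/sda/queue/optimal_io_size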