Modern storage devices exhibit varying performance characteristics based on physical location. On traditional HDDs, the platter spins at a constant angular velocity while zoned recording packs more sectors into the outer tracks, so outer tracks deliver higher sequential throughput. Even SSDs show location-dependent performance due to chip/channel parallelism and wear-leveling algorithms.
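As a rough illustration of why geometry matters on a spinning disk (the sector counts below are hypothetical, though in a realistic range), sequential throughput at a constant RPM scales with the number of sectors per track:

RPM = 7200
revs_per_sec = RPM / 60                            # 120 revolutions per second
for zone, sectors_per_track in (('outer', 2000), ('inner', 1100)):
    mb_per_sec = sectors_per_track * 512 * revs_per_sec / 1e6   # 512-byte sectors
    print(f'{zone}: ~{mb_per_sec:.0f} MB/s sequential ceiling')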
The filefrag utility provides physical extent information for ext2/3/4 filesystems:
# filefrag -v /path/to/file
Filesystem type is: ef53
File size of /path/to/file is 1048576 (256 blocks, blocksize 4096)
 ext logical physical expected length flags
   0       0  1234567              256 eof
For XFS, use xfs_bmap:
xfs_bmap -v /path/to/file
When filesystem tools aren't sufficient, debugfs provides raw access to ext filesystem metadata:
debugfs -R "stat /path/to/file" /dev/sdX
debugfs -R "icheck <inode_number>" /dev/sdX
Here's a Python script that extracts the starting physical block of each extent reported by filefrag:
import subprocess

def get_physical_blocks(file_path):
    """Return the starting physical block of each extent of file_path."""
    result = subprocess.run(['filefrag', '-v', file_path],
                            stdout=subprocess.PIPE, check=True)
    physical_blocks = []
    for line in result.stdout.decode().splitlines():
        parts = line.split()
        # Extent lines begin with the extent index, so this skips the header
        # and summary lines. In the classic "ext logical physical expected
        # length flags" layout, the third column is the physical block.
        if len(parts) >= 4 and parts[0].isdigit():
            physical_blocks.append(int(parts[2]))
    return physical_blocks

print(get_physical_blocks('/var/log/syslog'))
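To turn those block numbers into approximate positions, the sketch below converts them to byte offsets and a percentage of the device size. It assumes a 4096-byte filesystem block and that the blocks are counted from the start of the device you pass in (for a file on a partition you would also need to add the partition's start offset); /dev/sda is a placeholder:

import os

def block_positions(physical_blocks, device='/dev/sda', block_size=4096):
    """Map starting physical blocks to (byte offset, percent of device size)."""
    with open(f'/sys/class/block/{os.path.basename(device)}/size') as f:
        total_bytes = int(f.read()) * 512          # sysfs reports 512-byte sectors
    return [(blk * block_size, 100.0 * blk * block_size / total_bytes)
            for blk in physical_blocks]

for offset, pct in block_positions(get_physical_blocks('/var/log/syslog')):
    print(f'extent at byte {offset} ({pct:.1f}% into the device)')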
When benchmarking, account for these factors:
- Rotational latency and seek time (for HDDs)
- Zone bit recording (ZBR) implementations
- Filesystem journaling overhead
- Controller caching behavior (see the O_DIRECT timing sketch after this list)
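To keep the operating system's page cache out of such measurements, reads can be issued with O_DIRECT. A minimal sketch, assuming Linux, Python 3.7+, and read access to a block device; /dev/sdX, the 1 MiB chunk size, and the 0/50/90% sample points are placeholders, and the drive's own controller cache may still serve the first reads:

import mmap
import os
import time

def direct_read_mbps(device, offset, chunk=1024 * 1024, chunks=32):
    """Sequentially read chunks*chunk bytes at offset with O_DIRECT; return MB/s."""
    fd = os.open(device, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = mmap.mmap(-1, chunk)                 # anonymous mmap is page-aligned,
        start = time.perf_counter()                # which O_DIRECT requires
        total = 0
        for i in range(chunks):
            total += os.preadv(fd, [buf], offset + i * chunk)
        return (total / 1e6) / (time.perf_counter() - start)
    finally:
        os.close(fd)

dev = '/dev/sdX'                                   # placeholder device
with open(f'/sys/class/block/{os.path.basename(dev)}/size') as f:
    dev_bytes = int(f.read()) * 512                # sysfs size is in 512-byte sectors
for pct in (0, 50, 90):                            # outer, middle, inner samples
    off = dev_bytes * pct // 100 // 4096 * 4096    # keep the offset 4 KiB-aligned
    print(f'{pct:3d}%: {direct_read_mbps(dev, off):.1f} MB/s')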
For raw device testing without filesystem interference (skip counts in units of bs, so a 512-byte sector offset is divided by 8; iflag=direct bypasses the page cache):
dd if=/dev/sdX of=/dev/null bs=4k count=1000 skip=$((start_sector/8)) iflag=direct
Remember that modern storage stacks (especially with SSDs) introduce additional abstraction layers that may affect results.
When working with traditional spinning disks (HDDs), the physical location of data significantly impacts I/O performance. Due to constant rotational speed (typically 5400 or 7200 RPM) and near-constant data density, the outer tracks provide higher throughput than inner tracks. Modern SSDs don't have this mechanical limitation, but their performance can still vary based on wear-leveling and flash architecture.
For ext4 filesystems (the default on most Linux distributions), we can use the filefrag utility to determine physical block allocation:
filefrag -v /path/to/your/file
# Example output:
# Filesystem type is: ef53
# File size of testfile is 3145728 (768 blocks, blocksize 4096)
#  ext logical physical expected length flags
#    0       0    34816             256
#    1     256    39424    35072    256
#    2     512    44032    39680    256 eof
The physical column shows the starting block numbers on disk. For XFS, use xfs_bmap:
xfs_bmap -v /path/to/xfs_file
For deeper analysis with ext4, use debugfs:
debugfs /dev/sdX
debugfs: stat /path/to/file                   # first get the inode number
debugfs: bmap <inode_number> <logical_block>  # map a logical block to its physical block
debugfs: icheck <physical_block>              # reverse lookup: which inode uses that block
Once you have physical block numbers, you can calculate the approximate physical position:
# Determine disk geometry (CHS values are largely synthetic on modern drives)
hdparm -g /dev/sdX
# Byte offset of the block (ext4 blocks are usually 4096 bytes;
# check with: tune2fs -l /dev/sdX | grep "Block size")
block_size=4096
physical_position=$((block_number * block_size))
# Percentage from the start of the device
total_bytes=$(blockdev --getsize64 /dev/sdX)
position_percentage=$((100 * physical_position / total_bytes))
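Keep in mind that filefrag reports blocks relative to the filesystem, so a whole-disk position also needs the partition's start offset. A minimal sketch, assuming Linux sysfs, a 4096-byte block size, and 'sda1' as a placeholder partition:

def disk_byte_offset(partition, fs_block, block_size=4096):
    """Byte position of a filesystem block measured from the start of the disk."""
    with open(f'/sys/class/block/{partition}/start') as f:
        part_start = int(f.read())                 # partition start, in 512-byte sectors
    return part_start * 512 + fs_block * block_size

print(disk_byte_offset('sda1', 34816))             # block 34816 from the filefrag example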
To test performance at different disk locations, use fio with explicit offsets (stonewall makes the jobs run one after another so each zone is measured in isolation):
[global]
ioengine=libaio
direct=1
rw=read
bs=1M
runtime=60
filename=/dev/sdX
size=1G

[outer]
offset=0

[middle]
stonewall
offset=50%

[inner]
stonewall
offset=90%
For ZFS, zdb wants the dataset and the object (inode) number, which ls -i reports:
ls -i /path/to/file
zdb -dddd poolname/dataset <inode_number>
For Btrfs, filefrag -v works as well (it reports Btrfs logical addresses); to map a logical address back to the files that reference it:
btrfs inspect-internal logical-resolve -v <logical_address> /path
Modern storage systems introduce several complications:
- RAID configurations may abstract physical locations
- SSD controllers perform wear-leveling and block remapping
- Filesystems may use techniques like ext4's multiblock allocator
- Logical volume managers add another abstraction layer (the sketch after this list shows how to resolve a logical device to its backing disks)
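To see which physical devices actually sit beneath a logical one, sysfs exposes the stacking directly. A minimal sketch, assuming Linux sysfs and using 'dm-0' as a placeholder device name:

import os

def underlying_devices(dev):
    """Recursively resolve a block device to the devices that back it."""
    slaves_dir = f'/sys/class/block/{dev}/slaves'
    if not os.path.isdir(slaves_dir) or not os.listdir(slaves_dir):
        return [dev]                               # nothing below: a physical device or partition
    backing = []
    for slave in sorted(os.listdir(slaves_dir)):
        backing.extend(underlying_devices(slave))
    return backing

print(underlying_devices('dm-0'))                  # e.g. ['sda1', 'sdb1'] for LVM on RAID1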