Filesystem testing is a critical process when migrating between storage systems. The core aspects to validate include:
- Data integrity verification (see the checksum sketch after this list)
- Performance benchmarking
- Concurrency handling
- Failure recovery
- Permission and ACL validation
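For the data integrity item, a lightweight approach is a checksum manifest built on the source tree and replayed against the target; a minimal sketch, with placeholder paths:
# Build a checksum manifest on the source tree
(cd /source/path && find . -type f -print0 | xargs -0 sha256sum) > /tmp/manifest.sha256
# Replay it against the migrated tree; --quiet prints only mismatches
(cd /mnt/efs && sha256sum -c --quiet /tmp/manifest.sha256)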
For Linux-based systems, consider these powerful testing utilities:
# Stress testing with fsstress
./fsstress -d /mnt/test -n 1000 -p 50 -v
# I/O performance measurement
fio --filename=/mnt/test/testfile --size=1G --rw=randrw --bs=4k --direct=1 --ioengine=libaio --runtime=60 --name=testjob
# Filesystem consistency check on the ext3 source (read-only; use xfs_repair -n for XFS volumes)
e2fsck -fn /dev/sdX
When moving from ext3 to EFS, pay special attention to:
# Test symbolic link preservation (-H, -A, and -X make rsync carry over
# hard links, ACLs, and extended attributes, which plain -a does not)
ln -s source.txt link.txt
rsync -aHAXvz /source/path/ user@remote:/efs/mount/
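# Confirm the link arrived as a symlink rather than a copy of its target
# (assuming the EFS share is mounted at /mnt/efs, as in the checks below)
find /mnt/efs -type l -printf '%p -> %l\n'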
# Verify hard link counts
find /mnt/efs -type f -links +1 -exec ls -li {} \;
# Validate extended attributes (NFSv4.1, and therefore EFS, does not carry
# user xattrs, so expect this to come back empty; the check documents the loss)
getfattr -d -m - /mnt/efs/important_file
Consider implementing a Python-based test harness:
import unittest
import os
import subprocess

class TestEFSMigration(unittest.TestCase):
    def setUp(self):
        self.test_dir = "/mnt/efs/test_area"
        os.makedirs(self.test_dir, exist_ok=True)

    def test_large_file_operations(self):
        test_file = os.path.join(self.test_dir, "large.bin")
        with open(test_file, 'wb') as f:
            f.write(os.urandom(1024*1024*100))  # 100MB file
        # md5sum prints "<hash>  <path>", so compare only the hash field;
        # otherwise the differing filenames would make the assertion fail
        md5_original = subprocess.check_output(['md5sum', test_file]).split()[0]
        os.rename(test_file, test_file + '.renamed')
        md5_renamed = subprocess.check_output(['md5sum', test_file + '.renamed']).split()[0]
        self.assertEqual(md5_original, md5_renamed)

if __name__ == '__main__':
    unittest.main()
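The harness above covers single-process operations; the concurrency item from the checklist deserves its own smoke test. A minimal sketch (writer and line counts are arbitrary) that spawns parallel writers and confirms no writes were lost:
#!/bin/bash
# Concurrency smoke test: 20 parallel writers, each appending 100 lines to its own file
DIR="/mnt/efs/concurrency_test"
mkdir -p "$DIR"
for i in $(seq 1 20); do
    (
        for j in $(seq 1 100); do
            echo "writer $i line $j" >> "$DIR/writer_$i.log"
        done
    ) &
done
wait
# Every file should report exactly 100 lines; short counts indicate lost writes
wc -l "$DIR"/writer_*.log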
Create a comparative benchmark script:
#!/bin/bash
# Test parameters
TEST_FILE="/mnt/efs/benchmark.dat"
BLOCK_SIZES="4k 8k 16k 64k 1m"
TEST_SIZE_BYTES=$((1024 * 1024 * 1024))  # 1 GiB

# Convert a dd-style block size (4k, 1m, ...) to bytes; bash arithmetic
# cannot divide a "1G" string by a "4k" string directly
to_bytes() {
    case "$1" in
        *k) echo $(( ${1%k} * 1024 )) ;;
        *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
        *)  echo "$1" ;;
    esac
}

echo "Filesystem Benchmark Results"
echo "---------------------------"
for bs in $BLOCK_SIZES; do
    count=$(( TEST_SIZE_BYTES / $(to_bytes $bs) ))
    echo -n "Block size $bs write: "
    dd if=/dev/zero of=$TEST_FILE bs=$bs count=$count conv=fdatasync 2>&1 | tail -1
    rm -f $TEST_FILE
    echo -n "Block size $bs read: "
    dd if=/dev/zero of=$TEST_FILE bs=$bs count=$count conv=fdatasync > /dev/null 2>&1
    # Note: this read may be served from the page cache; drop caches first
    # (echo 3 > /proc/sys/vm/drop_caches) for cold-cache numbers
    dd if=$TEST_FILE of=/dev/null bs=$bs 2>&1 | tail -1
    rm -f $TEST_FILE
done
Test crash recovery scenarios:
#!/bin/bash
# Simulate an abrupt crash during a write (SIGKILL approximates a process
# crash; it does not discard the page cache the way a real power failure would)
TEST_FILE="/mnt/efs/crash_test.db"
echo "Beginning crash test..."
sqlite3 $TEST_FILE "CREATE TABLE test(id INTEGER PRIMARY KEY, data TEXT);"
# Start a large insert (100,000 rows) in the background
sqlite3 $TEST_FILE "BEGIN; \
WITH RECURSIVE cnt(x) AS (VALUES(1) UNION ALL SELECT x+1 FROM cnt WHERE x<100000) \
INSERT INTO test(data) SELECT randomblob(1000) FROM cnt;" &
# Kill the process abruptly after a delay
sleep 5
pkill -9 sqlite3
# Verify database integrity
sqlite3 $TEST_FILE "PRAGMA integrity_check;" > integrity.log
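PRAGMA integrity_check prints a single "ok" line when the database is sound, so the log can be checked mechanically:
grep -qx ok integrity.log && echo "integrity OK" || echo "INTEGRITY FAILURE - inspect integrity.log"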
Validate permission handling across users:
#!/bin/bash
# Permission test script
TEST_DIR="/mnt/efs/permission_test"
mkdir -p $TEST_DIR
# Test different permission scenarios
for perm in 755 700 644 600; do
touch "$TEST_DIR/test_$perm"
chmod $perm "$TEST_DIR/test_$perm"
sudo -u nobody ls -l "$TEST_DIR/test_$perm" >> permission_test.log 2>&1
sudo -u nobody cat "$TEST_DIR/test_$perm" >> permission_test.log 2>&1
done
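A quick readout of the log, assuming the failures surface as "Permission denied" messages: the 700 and 600 files should deny user nobody, so two denials from the cat attempts means a pass:
grep -c "Permission denied" permission_test.log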
To recap, comprehensive testing for a migration such as ext3 to EFS should cover:
- Performance benchmarks (IOPS, throughput, latency)
- Data integrity verification
- Concurrent access patterns
- Failure recovery scenarios
- Permission and ACL handling
For Linux-based systems, these tools provide a solid starting point:
# Baseline benchmarking of the local block device (EFS is not a block device, so hdparm does not apply to it)
sudo hdparm -Tt /dev/sdX
# Filesystem stress testing (point --directory at the mount under test)
sudo apt-get install fio
fio --name=test --directory=/mnt/efs --ioengine=libaio --rw=randrw --bs=4k --numjobs=16 --size=1G --runtime=300 --group_reporting
When moving from ext3 to EFS (Amazon Elastic File System), consider:
# Mount comparison testing (the EFS options below are the AWS-recommended NFS settings)
sudo mount -t ext3 /dev/sdX /mnt/ext3
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-XXXXXXXX.efs.aws-region.amazonaws.com:/ /mnt/efs
# Permission mapping verification: copy an ACL between mounts with the
# getfacl | setfacl idiom (setfacl --restore reapplies to the recorded source
# path, so it cannot retarget /mnt/efs). Note that EFS does not support
# POSIX ACLs, so expect this step to fail and plan around plain mode bits.
getfacl /mnt/ext3/testfile | setfacl --set-file=- /mnt/efs/testfile
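Beyond single files, a recursive mode/owner diff between the two mounts catches mapping drift in bulk; a sketch assuming both trees contain the same files:
(cd /mnt/ext3 && find . -printf '%m %u %g %p\n' | sort) > /tmp/ext3_perms.txt
(cd /mnt/efs && find . -printf '%m %u %g %p\n' | sort) > /tmp/efs_perms.txt
diff /tmp/ext3_perms.txt /tmp/efs_perms.txt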
Python script for basic filesystem verification:
import os
import hashlib
import unittest

class FilesystemTest(unittest.TestCase):
    TEST_FILE = "/mnt/test_fs/testfile"

    def test_write_read(self):
        test_data = os.urandom(1024)
        with open(self.TEST_FILE, 'wb') as f:
            f.write(test_data)
        with open(self.TEST_FILE, 'rb') as f:
            read_data = f.read()
        self.assertEqual(
            hashlib.md5(test_data).hexdigest(),
            hashlib.md5(read_data).hexdigest()
        )

if __name__ == '__main__':
    unittest.main()
Key metrics to capture during testing:
# Capture block-device latency metrics using iostat (from the sysstat package);
# note that NFS/EFS traffic does not appear here
iostat -xmdz 1
# Network-specific metrics for EFS
sudo apt-get install nfs-common
nfsiostat 1
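Rather than watching live output, the same statistics can be logged for the duration of a run; a sketch with an assumed 60-second interval and log name:
# Append per-minute EFS NFS statistics to a dated log for later analysis
nfsiostat 60 /mnt/efs | tee -a "efs_metrics_$(date +%F).log"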
For enterprise deployments, consider:
- Failover testing with multiple AZs
- Encryption/decryption performance impact
- Mixed workload patterns (small vs large files)
- Long-term stability testing (7+ days; see the sketch below)
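For the last item, a minimal long-run stability sketch (sizes, paths, and cadence are arbitrary): write and checksum a fresh file every hour for 7 days, then re-verify the accumulated log at the end with sha256sum -c stability.log:
#!/bin/bash
# Hourly write-and-checksum loop for 7 days
DIR="/mnt/efs/stability"
mkdir -p "$DIR"
END=$(( $(date +%s) + 7 * 24 * 3600 ))
while [ "$(date +%s)" -lt "$END" ]; do
    F="$DIR/$(date +%s).bin"
    head -c 10M /dev/urandom > "$F"   # GNU head accepts size suffixes
    sha256sum "$F" >> stability.log
    sleep 3600
done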