When performing data recovery from a failing 300GB hard drive (/dev/sdf1), I encountered storage constraints - my available drives had 150GB, 40GB, and 120GB free respectively. The solution? Split the disk image into multiple parts that can fit across these drives.
For the initial 150GB segment, we use dd with precise byte calculations:
sudo dd if=/dev/sdf1 bs=4M count=37500 | gzip > /mnt/drive1/img1.gz
Breaking this down:
- bs=4M: 4MB block size for good performance
- count=37500: 37500 blocks × 4MB ≈ 150GB read from the source (dd's M is really MiB, so this is closer to 157GB of raw data; the compressed output is what actually has to fit on the 150GB drive)
- Compression via gzip helps maximize storage efficiency
For subsequent segments, we use dd's skip parameter to continue from the previous endpoint:
# 40GB segment (next 10000 blocks)
sudo dd if=/dev/sdf1 bs=4M skip=37500 count=10000 | gzip > /mnt/drive2/img2.gz
# Remaining data (~110GB, goes on the 120GB drive)
sudo dd if=/dev/sdf1 bs=4M skip=47500 | gzip > /mnt/drive3/img3.gz
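If you would rather not track the skip offsets by hand, a small shell loop can run the same three commands from a table of output paths and offsets. This is only a sketch, assuming the mount points and segment sizes used above and GNU dd (for status=progress):
#!/bin/bash
# Sketch: image each segment from a table of (output file, skip, count).
# A count of 0 in the table means "no count", i.e. read to the end of the partition.
set -e
segments=(
  "/mnt/drive1/img1.gz 0     37500"
  "/mnt/drive2/img2.gz 37500 10000"
  "/mnt/drive3/img3.gz 47500 0"
)
for seg in "${segments[@]}"; do
  read -r out skip count <<< "$seg"
  if [ "$count" -gt 0 ]; then
    sudo dd if=/dev/sdf1 bs=4M skip="$skip" count="$count" status=progress | gzip > "$out"
  else
    sudo dd if=/dev/sdf1 bs=4M skip="$skip" status=progress | gzip > "$out"
  fi
done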
After creating all segments, verify them with:
# Check segment sizes (gzip -l reports uncompressed sizes modulo 4GB, so treat this as a rough check only)
gzip -l /mnt/drive*/img*.gz
# Compare against the size of the original partition
sudo fdisk -l /dev/sdf1
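It is also worth recording a checksum for each compressed segment as soon as it is written, so corruption on any of the interim drives is caught before reassembly. A minimal sketch, assuming the mount points above:
# Record a checksum per segment right after imaging
sha256sum /mnt/drive1/img1.gz /mnt/drive2/img2.gz /mnt/drive3/img3.gz > image_parts.sha256
# Later, before reassembling, confirm the segments are intact
sha256sum -c image_parts.sha256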
For more flexibility, consider using split together with dd:
sudo dd if=/dev/sdf1 bs=4M | gzip | split -b 150G - /mnt/drive1/img_part_
This automatically creates sequentially named files (img_part_aa, img_part_ab, etc.). Note that split writes every part into the directory you point it at, so move each completed part onto its destination drive as you go (or use a smaller part size).
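A variation worth knowing: piping one gzip stream into split means no single part is a valid gzip file on its own. GNU split's --filter option can instead compress each part separately, so every part can be integrity-checked by itself. A sketch, with a hypothetical 40G part size and the same img_part_ prefix (the parts still land in one directory, so move them off as they complete):
# GNU split only: split the raw stream, compressing each part on its own
sudo dd if=/dev/sdf1 bs=4M | split -b 40G --filter='gzip > $FILE.gz' - /mnt/drive1/img_part_
# Each part is now a standalone gzip file that can be tested individually
gzip -t /mnt/drive1/img_part_aa.gz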
To reconstruct the original image when ready:
cat img1.gz img2.gz img3.gz | gunzip > full_image.img
Or for split files:
cat img_part_* | gunzip > full_image.img
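Once the full image exists, a quick sanity check is to loop-mount it read-only. A sketch, assuming the partition holds a mountable filesystem; /mnt/recovered is just an example mount point:
sudo mkdir -p /mnt/recovered
# Read-only loop mount of the reconstructed partition image
sudo mount -o loop,ro full_image.img /mnt/recovered
ls /mnt/recovered
sudo umount /mnt/recovered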
When performing data recovery from large drives, we often encounter storage limitations. In this scenario, we need to create a disk image from a 300GB partition (/dev/sdf1) but only have multiple smaller drives available (150GB, 40GB, and 120GB). The solution requires splitting the image into manageable chunks.
The initial approach using:
sudo dd if=/dev/sdf1 bs=4096 count=150G | gzip > img1.gz
doesn't do what was intended: dd's count is measured in blocks of bs bytes, not in bytes, so count=150G with bs=4096 does not stop after 150GB. Let me explain the correct implementation.
For accurate splitting, we need to calculate block counts. Here's the correct approach for the first chunk:
sudo dd if=/dev/sdf1 bs=4M count=$((150*1024/4)) | gzip > img1.gz
For subsequent chunks, we use the skip parameter to continue where we left off. Here's how to create the remaining parts:
# Second chunk (40GB)
sudo dd if=/dev/sdf1 bs=4M skip=$((150*1024/4)) count=$((40*1024/4)) | gzip > img2.gz
# Third chunk (remaining 110GB)
sudo dd if=/dev/sdf1 bs=4M skip=$(((150+40)*1024/4)) | gzip > img3.gz
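To keep the skip and count values consistent across the three commands, it can help to compute them once with shell arithmetic. A minimal sketch using the sizes above:
# Block counts for 4 MiB blocks, computed once and reused
bs_mib=4
c1=$(( 150 * 1024 / bs_mib ))   # 38400 blocks for the 150GB chunk
c2=$((  40 * 1024 / bs_mib ))   # 10240 blocks for the 40GB chunk
echo "chunk 1: count=$c1"
echo "chunk 2: skip=$c1 count=$c2"
echo "chunk 3: skip=$(( c1 + c2 ))"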
After transferring all chunks to a drive with sufficient space, verify the image:
zcat img*.gz | sha256sum
sudo sha256sum /dev/sdf1
The checksums should match.
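Since the source drive is failing, reading it a second time just to checksum it may be risky. The checksum of each chunk can instead be computed while imaging, so the drive is read only once. A sketch using bash process substitution; chunk1.sha256 is just an example name:
# Checksum the raw data of the first chunk as it is read, in a single pass
sudo dd if=/dev/sdf1 bs=4M count=$((150*1024/4)) \
  | tee >(sha256sum > chunk1.sha256) \
  | gzip > img1.gz
This records the checksum of the uncompressed data for that chunk; the whole-image comparison above still works as shown.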
For more flexibility, consider using the split command:
sudo dd if=/dev/sdf1 bs=4M | gzip | split -b 150G - img_part.gz
This automatically creates multiple files with sequential suffixes (img_part.gzaa, img_part.gzab, and so on).
To restore the original image to a target partition (here /dev/sdf2):
cat img*.gz | gunzip | sudo dd of=/dev/sdf2 bs=4M
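To double-check the restore, the target partition can be compared byte for byte against the image. A sketch, assuming GNU cmp:
# Compare the restored partition against the image; no output means a match
zcat img*.gz | sudo cmp - /dev/sdf2
# An "EOF on -" message only means the target partition is larger than the
# image, which is fine as long as no differing byte was reported first.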