Python: Detect Mounted Volumes Before Filesystem Operations to Prevent Backup Errors


When dealing with filesystem operations on Linux/Unix systems, a common pitfall is assuming that a directory path automatically corresponds to a specific storage device. The system will happily let you write to /external-backup whether anything is mounted there or not - if it's unmounted, the path is just a regular directory on your root filesystem.

This isn't a bug but a fundamental Unix design principle:

  • Mount points are just directories until something gets mounted on them
  • The OS maintains no persistent "this should be mounted" state
  • All path resolution happens at the moment of access (a quick stdlib check is sketched below)
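
Before reaching for anything heavier, note that Python's standard library already covers the simple case: os.path.ismount() reports whether a path is currently a mount point. A minimal guard along those lines, using the same /external-backup path discussed above, might be:


import os

if not os.path.ismount('/external-backup'):
    raise SystemExit("/external-backup is not mounted - refusing to write "
                     "to the root filesystem")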

If you need more detail than a simple yes/no answer, here are three reliable ways to check mount status in Python:

Method 1: Using psutil


import os
import psutil

def is_mounted(path):
    path = os.path.abspath(path)
    # psutil.disk_partitions() hides pseudo/virtual filesystems by default;
    # pass all=True if your backup target is one of those.
    for partition in psutil.disk_partitions():
        if partition.mountpoint == path:
            return True
    return False

if not is_mounted('/external-backup'):
    raise SystemExit("Backup volume not mounted!")

Method 2: Direct Filesystem Inspection


import os

def is_mounted(path):
    # A mount boundary shows up as a change in device number between a
    # directory and its parent - the same idea os.path.ismount() uses.
    path_stat = os.stat(path)
    parent_stat = os.stat(os.path.dirname(path.rstrip(os.sep)))
    return path_stat.st_dev != parent_stat.st_dev

if not is_mounted('/external-backup'):
    print("Writing to fallback location instead")

Method 3: Checking /proc/mounts


import os

def is_mounted(path):
    path = os.path.abspath(path)
    with open('/proc/mounts') as f:
        for line in f:
            device, mount_point, rest = line.split(' ', 2)
            # /proc/mounts escapes spaces in paths as the literal "\040"
            if os.path.abspath(mount_point.replace('\\040', ' ')) == path:
                return True
    return False

When you later mount the external drive:

  • Existing files in the mount point directory become invisible
  • Newly mounted files take precedence
  • No automatic merging occurs - this is why you must check mount status (see the sketch below)
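
One practical consequence: if an earlier backup ran while the volume was unmounted, its files are sitting on the root filesystem underneath the mount point and will be shadowed as soon as you mount. A small pre-mount sanity check, sketched here with the same /external-backup path, can flag that situation:


import os

def warn_about_stray_files(mount_point='/external-backup'):
    # Only meaningful while the volume is NOT mounted: anything listed here
    # lives on the parent filesystem and will be hidden by the next mount.
    if os.path.ismount(mount_point):
        return
    stray = os.listdir(mount_point)
    if stray:
        print(f"Warning: {len(stray)} stray entries under {mount_point} "
              f"will be hidden by the next mount: {stray[:5]}")

warn_about_stray_files()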

For critical backup systems, consider this comprehensive check:


import psutil
from pathlib import Path

def validate_backup_target(path, expected_fs=None, min_size_gb=10):
    path = Path(path)

    # Check mount status
    mount = next((m for m in psutil.disk_partitions()
                  if m.mountpoint == str(path)), None)
    if mount is None:
        raise RuntimeError(f"{path} is not mounted")

    # Verify filesystem type if specified
    if expected_fs and mount.fstype != expected_fs:
        raise RuntimeError(f"Expected {expected_fs} but got {mount.fstype}")

    # Check available space
    usage = psutil.disk_usage(str(path))
    if usage.free < min_size_gb * (1024 ** 3):
        raise RuntimeError(f"Insufficient space (need {min_size_gb}GB)")

    return True
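
A short usage sketch for validate_backup_target; the ext4 filesystem type and 50GB minimum are illustrative values, not requirements from any particular setup:


try:
    validate_backup_target('/external-backup', expected_fs='ext4', min_size_gb=50)
except RuntimeError as exc:
    raise SystemExit(f"Backup aborted: {exc}")
print("Backup target validated, starting backup...")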

For long-running processes, consider monitoring mount changes:


import select

def watch_mounts(on_change):
    # inotify has no event for new mounts, but the kernel flags
    # /proc/self/mounts with POLLPRI/POLLERR whenever the mount table
    # changes (see proc(5)), so we poll that file instead.
    with open('/proc/self/mounts') as f:
        poller = select.poll()
        poller.register(f, select.POLLPRI | select.POLLERR)
        while True:
            poller.poll()  # blocks until a mount or unmount happens
            f.seek(0)
            f.read()       # re-read so the next poll() waits for a fresh change
            on_change()

watch_mounts(lambda: print("Mount table changed - re-check the backup volume"))

When writing backup scripts in Python, a critical assumption developers often make is that a mount point will always hold the intended storage device. As you've discovered, Linux/Unix systems will happily let you write to a mount-point directory even when nothing is mounted there - the data simply ends up on the parent filesystem.

Here are several reliable methods to check mount status programmatically:


import os
import subprocess

def is_mounted_psutil(mount_point):
    """Method 1: Using psutil (recommended for cross-platform code)"""
    import psutil
    mount_point = os.path.abspath(mount_point)
    for part in psutil.disk_partitions():
        if os.path.abspath(part.mountpoint) == mount_point:
            return True
    return False

def is_mounted_proc(mount_point):
    """Method 2: Parsing /proc/mounts (Linux specific)"""
    mount_point = os.path.abspath(mount_point)
    with open("/proc/mounts") as f:
        for line in f:
            _, mounted_on, _, _, _, _ = line.split()
            # /proc/mounts escapes spaces in paths as "\040"
            mounted_on = mounted_on.replace("\\040", " ")
            if os.path.abspath(mounted_on) == mount_point:
                return True
    return False

def is_mounted_df(mount_point):
    """Method 3: Using the df command (works on most Unix-like systems)"""
    # df succeeds for any existing path, so check its "Mounted on" column
    # against the path itself instead of relying on the exit status alone.
    try:
        output = subprocess.check_output(
            ["df", "-P", mount_point], stderr=subprocess.PIPE, text=True
        )
    except subprocess.CalledProcessError:
        return False
    mounted_on = output.splitlines()[-1].split()[-1]
    return os.path.abspath(mounted_on) == os.path.abspath(mount_point)
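
The three helpers can also be layered. A small wrapper (my own glue code, not part of the original script) that prefers psutil and falls back when it is unavailable might look like this:


def is_mounted(mount_point):
    """Prefer psutil, fall back to /proc/mounts, then to df."""
    try:
        return is_mounted_psutil(mount_point)
    except ImportError:            # psutil is not installed
        pass
    try:
        return is_mounted_proc(mount_point)
    except FileNotFoundError:      # no /proc/mounts (e.g. macOS, BSD)
        return is_mounted_df(mount_point)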

Here's how to integrate mount verification into your backup workflow:


import os
import sys

BACKUP_DIR = "/external-backup"

def verify_mount_point():
    if not is_mounted_psutil(BACKUP_DIR):
        raise RuntimeError(
            f"Backup directory {BACKUP_DIR} is not mounted. "
            "Please connect the external drive and mount it properly."
        )
    # Additional checks
    vfs = os.statvfs(BACKUP_DIR)
    available_gb = (vfs.f_bavail * vfs.f_frsize) / (1024 ** 3)
    if available_gb < 100:  # Example: require at least 100GB free
        raise RuntimeError("Insufficient space on backup device")

def safe_backup_operation():
    try:
        verify_mount_point()
        # Proceed with actual backup operations
        print("Backup location verified, starting backup...")
    except RuntimeError as e:
        print(f"Backup aborted: {e}")
        sys.exit(1)

The reason your original script worked without errors lies in Unix filesystem semantics:

  • Mount points are just regular directories until something is mounted on them
  • The kernel tracks active mounts in a separate mount table; the directory itself carries no record of what should be mounted on it
  • When you write to an unmounted mount point, data goes to the parent filesystem (the sketch below shows which filesystem a path actually resolves to)
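
To see which mounted filesystem a path currently resolves to, you can match it against the mount table. This is a diagnostic sketch using psutil; the printed device names are only examples:


import os
import psutil

def backing_filesystem(path):
    """Return (mountpoint, device) of the filesystem a path currently lives on."""
    path = os.path.abspath(path)
    candidates = [
        p for p in psutil.disk_partitions(all=True)
        if path == p.mountpoint
        or path.startswith(p.mountpoint.rstrip('/') + '/')
    ]
    # The longest matching mountpoint is the filesystem that owns the path
    best = max(candidates, key=lambda p: len(p.mountpoint))
    return best.mountpoint, best.device

print(backing_filesystem('/external-backup'))
# e.g. ('/', '/dev/sda1') when nothing is mounted there - writes go to the root fs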

When you eventually mount the external drive at /external-backup:

  • The existing data on the internal drive becomes hidden (not deleted)
  • The mount operation creates a new view of the filesystem at that point
  • Original data reappears if you unmount the external drive
  • This can lead to confusion about which version of files is "current"

Beyond just checking mount status, consider these enhancements:


def enhanced_mount_check(mount_point, expected_fs=None, min_size_gb=None):
    """Advanced mount verification with filesystem type and size checks"""
    mount_point = os.path.abspath(mount_point)
    if not os.path.isdir(mount_point):
        raise ValueError(f"{mount_point} is not a directory")
    
    # Get mount information (GNU df: --output requires coreutils)
    try:
        df_output = subprocess.check_output(
            ["df", "--output=fstype,size,avail,target", mount_point],
            stderr=subprocess.PIPE
        ).decode().splitlines()
    except subprocess.CalledProcessError:
        return False
    
    if len(df_output) < 2:
        return False
    
    fstype, size_blocks, avail_blocks, target = df_output[1].split(None, 3)
    
    # df reports the filesystem *containing* the path; only accept it if that
    # filesystem is actually mounted at the path itself.
    if target != mount_point:
        return False
    
    if expected_fs and fstype != expected_fs:
        raise ValueError(
            f"Expected filesystem {expected_fs}, found {fstype}"
        )
    
    if min_size_gb:
        block_size = 1024  # GNU df reports 1K blocks by default
        available_gb = (int(avail_blocks) * block_size) / (1024 ** 3)
        if available_gb < min_size_gb:
            raise ValueError(
                f"Only {available_gb:.1f}GB available, "
                f"need at least {min_size_gb}GB"
            )
    
    return True
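
Finally, a hedged usage sketch for enhanced_mount_check; the ext4 filesystem type and 100GB floor are illustrative values:


if __name__ == "__main__":
    try:
        if not enhanced_mount_check("/external-backup",
                                    expected_fs="ext4", min_size_gb=100):
            raise ValueError("/external-backup is not a mounted filesystem")
        print("Backup target looks good - starting backup...")
    except ValueError as exc:
        print(f"Backup aborted: {exc}")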