How to Analyze and Visualize Disk Usage on Synology NAS Using Programming Tools



Synology NAS devices use a specialized Linux-based operating system (DSM) with a custom file system structure. The most common way to check disk usage is through the DSM GUI, but developers often need programmatic access for automation or advanced analysis.

The most straightforward method is using SSH to access your Synology and run standard Linux commands:

# Get overall disk usage
df -h

# Analyze folder sizes in current directory
du -sh *

# Detailed recursive analysis (may take time)
du -ah /volume1 | sort -rh | head -20
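If you would rather stay in Python than shell out, the standard library's shutil.disk_usage reports the same totals as df for the filesystem containing a given path. A minimal sketch (the report_volume helper is illustrative; /volume1 is the typical Synology data volume):

```python
import shutil

def report_volume(path='/volume1'):
    """Print df-style totals for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free), in bytes
    gib = 1024 ** 3
    pct = usage.used / usage.total * 100
    print(f"{path}: {usage.used / gib:.1f} GiB used of "
          f"{usage.total / gib:.1f} GiB ({pct:.1f}%)")

if __name__ == '__main__':
    report_volume('/')  # use '/volume1' on a Synology; '/' works on any Linux box
```

Note that shutil.disk_usage measures the whole filesystem, not a directory subtree, so it answers the df question, not the du one.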

For more sophisticated analysis, we can write a Python script to run on the Synology (Python itself installs via Synology's Package Center; the humanize module comes from pip):

import os
from collections import defaultdict

import humanize  # not in the stock DSM Python -- install with pip

def analyze_disk_usage(path='/volume1', max_depth=3):
    size_dict = defaultdict(int)

    for root, dirs, files in os.walk(path):
        depth = root.count(os.sep) - path.count(os.sep)
        if depth >= max_depth:
            dirs[:] = []  # prune the walk; a bare `continue` would still descend

        for f in files:
            fp = os.path.join(root, f)
            try:
                size_dict[root] += os.path.getsize(fp)
            except OSError:
                continue  # unreadable or vanished file

    # Sort directories by the total size of their direct files, largest first
    sorted_sizes = sorted(size_dict.items(), key=lambda x: x[1], reverse=True)
    for dir_path, size in sorted_sizes[:10]:
        print(f"{humanize.naturalsize(size).rjust(10)} {dir_path}")

if __name__ == '__main__':
    analyze_disk_usage()

For teams that need shared visibility, we can create a simple web interface:

from flask import Flask, render_template
import subprocess

app = Flask(__name__)

@app.route('/diskusage')
def disk_usage():
    # shell=True so the /volume1/* glob expands; in list form du would
    # receive the literal string '*' and find no such file
    result = subprocess.run('du -sh /volume1/*', shell=True,
                            capture_output=True, text=True)
    data = [line for line in result.stdout.split('\n') if line]
    return render_template('usage.html', items=data)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

For enterprise environments, SNMP provides standardized monitoring:

# Enable SNMP in DSM Control Panel
# Then query from monitoring system:
snmpwalk -v 2c -c public your-synology-ip .1.3.6.1.4.1.6574.3
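snmpwalk emits one line per OID in a fixed "OID = TYPE: value" shape, so the output is easy to post-process. A sketch of a parser (the parse_snmp_lines helper and the sample lines are illustrative, not actual output from a specific DSM version):

```python
import re

# Matches snmpwalk's "OID = TYPE: value" output format
LINE_RE = re.compile(r'^(?P<oid>\S+)\s+=\s+(?P<type>\w+):\s*(?P<value>.*)$')

def parse_snmp_lines(text):
    """Turn raw snmpwalk output into an {oid: value} dict."""
    results = {}
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            results[m.group('oid')] = m.group('value').strip('"')
    return results

# Illustrative sample only -- real OIDs and values depend on your model and DSM version
sample = '''\
SNMPv2-SMI::enterprises.6574.3.1.1.2.0 = STRING: "Volume 1"
SNMPv2-SMI::enterprises.6574.3.1.1.3.0 = INTEGER: 1'''

if __name__ == '__main__':
    for oid, value in parse_snmp_lines(sample).items():
        print(oid, '->', value)
```

In practice you would feed this the stdout of a subprocess call to snmpwalk, or skip the text parsing entirely by using a native SNMP library.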

Popular monitoring tools can integrate with Synology:

  • Prometheus with Synology exporter
  • Grafana dashboards
  • Telegraf data collector
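As an illustration, a Prometheus scrape job pointing at a Synology SNMP exporter might look like the fragment below. The job name, target address, and port are placeholders; check your exporter's documentation for its actual defaults:

```yaml
# prometheus.yml fragment -- target is a placeholder, not a real endpoint
scrape_configs:
  - job_name: 'synology'
    static_configs:
      - targets: ['your-synology-ip:9116']  # exporter endpoint, not the DSM UI port
```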

As noted above, DSM is a Linux-based operating system. While it provides a graphical interface for basic storage monitoring, power users often need more granular control. The underlying storage typically uses Btrfs or ext4 filesystems on top of LVM or RAID configurations, so standard Linux tooling applies.

SSH into your Synology device and use these built-in Linux tools:

# Check overall disk usage
df -h

# Analyze directory sizes (human-readable)
du -sh /volume1/*

# Find largest files (top 20); scanning from / also hits system paths,
# so target the data volume instead
find /volume1 -type f -exec du -h {} + | sort -rh | head -n 20

For more sophisticated analysis, create this Python script (save as disk_analyzer.py):

#!/usr/bin/env python3
import os
import sys

def get_dir_size(path='.'):
    """Recursively sum file sizes under path, skipping unreadable entries."""
    total = 0
    try:
        with os.scandir(path) as it:
            for entry in it:
                try:
                    if entry.is_file(follow_symlinks=False):
                        total += entry.stat(follow_symlinks=False).st_size
                    elif entry.is_dir(follow_symlinks=False):
                        total += get_dir_size(entry.path)
                except OSError:
                    continue  # entry vanished or is unreadable
    except OSError:
        pass  # permission denied on the directory itself
    return total

def analyze_storage(path, depth=3):
    """Cumulative size of each directory up to `depth` levels below path."""
    base = path.rstrip(os.sep).count(os.sep)
    sizes = {path: get_dir_size(path)}
    for root, dirs, files in os.walk(path):
        level = root.rstrip(os.sep).count(os.sep) - base
        if level >= depth - 1:
            dirs[:] = []  # prune: deeper directories fold into their parents' totals
        for d in dirs:
            full = os.path.join(root, d)
            sizes[full] = get_dir_size(full)

    # Note: each subtree is rescanned once per listed ancestor, which is
    # acceptable for a few levels of depth
    return sorted(sizes.items(), key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else '/volume1'
    results = analyze_storage(target)
    
    print(f"\nDisk Usage Analysis for {target}:")
    print("-" * 50)
    for path, size in results:
        print(f"{path:60} {size/1024/1024:.2f} MB")

Create a daily report with this bash script:

#!/bin/bash
REPORT_DIR="/volume1/disk_reports"
REPORT_FILE="$REPORT_DIR/$(date +%Y-%m-%d)_disk_report.txt"
mkdir -p "$REPORT_DIR"

echo "=== Synology Disk Usage Report $(date) ===" > "$REPORT_FILE"
echo "" >> "$REPORT_FILE"

echo "Top 20 Space-Consuming Directories:" >> "$REPORT_FILE"
du -h /volume1/* | sort -rh | head -n 20 >> "$REPORT_FILE"

echo "" >> "$REPORT_FILE"
echo "Largest Files (>100MB):" >> "$REPORT_FILE"
find /volume1 -type f -size +100M -exec du -h {} + | sort -rh >> "$REPORT_FILE"

# Email the report (requires a configured mail command, e.g. mailutils)
mail -s "Daily Disk Usage Report" admin@example.com < "$REPORT_FILE"

For those preferring GUI tools, consider these options:

  • Install Synology Storage Analyzer from Package Center
  • Set up NetData through Docker for real-time monitoring
  • Use ncdu (NCurses Disk Usage) via SSH

Keep these practices in mind when analyzing disk usage:

  • Schedule resource-intensive scans during off-peak hours
  • Be cautious when deleting system files or @ directories
  • Consider snapshot impact on reported storage usage
  • Monitor /var/log/ for system logs that might grow unexpectedly
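The last point above is easy to automate. A stdlib-only sketch that flags files over a size threshold under a directory (the find_large_files helper, the 100 MB default, and the /var/log target are illustrative choices):

```python
import os

def find_large_files(path, min_bytes=100 * 1024 * 1024):
    """Return (size, path) pairs for files under `path` at or above min_bytes,
    largest first."""
    hits = []
    for root, dirs, files in os.walk(path):
        for f in files:
            fp = os.path.join(root, f)
            try:
                size = os.path.getsize(fp)
            except OSError:
                continue  # unreadable, or rotated away mid-scan
            if size >= min_bytes:
                hits.append((size, fp))
    return sorted(hits, reverse=True)

if __name__ == '__main__':
    for size, fp in find_large_files('/var/log'):
        print(f"{size / 1024 / 1024:8.1f} MB  {fp}")
```

Run from DSM's Task Scheduler or cron, this gives early warning before a runaway log fills the system partition.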