Recently, after updating my Synology NAS (DSM 6.2.4 to 7.0), my long-working rsync backup script started failing with:
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
ERROR: module is read only
The confusing part? SSH key-based authentication still worked perfectly:
ssh remote_backup 'touch /volume1/backups/test.txt' # Success
rsync test.txt remote_backup:backups/ # Fails with read-only error
After digging through Synology's release notes and testing various scenarios, I discovered DSM 7.0 introduced stricter permissions for rsync daemon modules. Even when using direct SSH rsync (not rsyncd), the NAS now checks for write permissions at multiple levels.
Three critical checks needed:
- User permissions on target directory
- AppArmor/SELinux context (if enabled)
- Parent directory execute bits (for path traversal)
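The third check is the one most people overlook, so before changing anything it's worth listing the target and every parent directory to make sure the execute (x) bit is present the whole way down (same SSH alias and path as in the rest of this post):
ssh remote_backup 'ls -ld / /volume1 /volume1/backups'
# every component needs at least r-x for the backup user (directly, via group, or via ACL)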
1. Verify Remote Directory Permissions
First, check the actual permissions on your NAS:
ssh remote_backup 'ls -ld /volume1/backups'
# Should show: drwxrwxr-x+
If permissions are restrictive, fix them:
ssh remote_backup 'chmod -R 775 /volume1/backups && chown -R backupuser:backupgroup /volume1/backups'
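With the ownership and mode corrected, confirm the backup user can actually create and remove a file there (this assumes remote_backup logs in as that user, as in the earlier touch test):
ssh remote_backup 'touch /volume1/backups/.write_test && rm /volume1/backups/.write_test && echo "write OK"'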
2. Check for Extended Attributes
Synology uses ACLs extensively. View them with:
ssh remote_backup 'getfacl /volume1/backups'
Example output showing proper ACLs:
# file: volume1/backups
# owner: backupuser
# group: backupgroup
user::rwx
user:backupuser:rwx
group::r-x
mask::rwx
other::r-x
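If your backup user is missing from that output, an explicit entry can usually be added with setfacl, assuming the POSIX ACL tools are installed on your DSM version (Synology also manages its own ACLs through File Station and synoacltool, so results can differ):
ssh remote_backup 'setfacl -R -m u:backupuser:rwx /volume1/backups'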
3. Rsync Command Modifications
Add these flags to your rsync command:
rsync -avz --no-o --no-g --chmod=Du=rwx,Dg=rwx,Do=rx,Fu=rw,Fg=rw,Fo=r \
--rsync-path="rsync --fake-super" \
/local/path/ remote_backup:/volume1/backups/
Key parameters:
- --fake-super: Preserves permissions without requiring root
- --chmod: Explicitly sets permissions during transfer
- --no-o / --no-g: Prevents permission conflicts
If using rsync in daemon mode (unlikely in this SSH scenario), ensure your /etc/rsyncd.conf contains:
[backups]
path = /volume1/backups
read only = no
list = yes
uid = backupuser
gid = backupgroup
auth users = backupuser
secrets file = /etc/rsyncd.secrets
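For completeness, the secrets file referenced above is a plain user:password list that must be readable only by root, and daemon-mode transfers address the module name with a double colon rather than a filesystem path (the values below are placeholders):
# /etc/rsyncd.secrets -- one "user:password" pair per line, then: chmod 600 /etc/rsyncd.secrets
backupuser:changeme
# Daemon-mode syntax targets the module, not a path:
rsync -av /local/path/ backupuser@nas.example.com::backups/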
Test with a simple file transfer:
# Create test file
dd if=/dev/urandom of=testfile.bin bs=1M count=10
# Transfer with verbose output
rsync -vvv testfile.bin remote_backup:/volume1/backups/
Successful output should show:
sent 10.45M bytes received 35 bytes 2.09M bytes/sec
total size is 10.00M speedup is 0.96
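To confirm the data arrived intact, compare checksums on both ends (md5sum is usually present on DSM, but any hash tool available on both ends works):
md5sum testfile.bin
ssh remote_backup 'md5sum /volume1/backups/testfile.bin'
# The two hashes should be identical.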
To recap the investigation from the beginning: my long-functioning rsync-over-SSH backup script started failing with the cryptic message "ERROR: module is read only" after the DSM update. Here's how I diagnosed and resolved the problem step by step.
The error manifested when running my standard rsync command:
rsync -ab --recursive \
--files-from="$FILES_FROM" \
--backup-dir=backup_$SUFFIX \
--delete \
--filter='protect backup_*' \
$WDIRECTORY/ \
remote_backup:$REMOTE_BACKUP/
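For context, the variables in that command are set near the top of the script; the real values are site-specific, but illustrative definitions look something like this:
# Illustrative values only -- the actual script points these at local paths.
FILES_FROM="$HOME/backup-filelist.txt"    # one path per line, relative to $WDIRECTORY
SUFFIX=$(date +%Y%m%d_%H%M%S)             # per-run suffix, giving backup-dir names like backup_20240101_120000
WDIRECTORY="$HOME/work"                   # local source tree
REMOTE_BACKUP="/volume1/backups/work"     # destination directory on the NAS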
Key observations:
- SSH key authentication still worked perfectly
- Manual file operations via SSH succeeded
- The issue only occurred during rsync transfers
After extensive testing, I discovered that my NAS update had automatically enabled rsyncd (the rsync daemon) with default configurations. The error occurs when:
- Rsync interprets the destination path (remote_backup:path) as a module reference
- The default rsyncd.conf has read only = yes
- No matching module exists in the configuration
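To confirm this on your own NAS, check whether the daemon is running and what its configuration says (the config location can vary between DSM versions, so treat these as starting points):
ssh remote_backup 'ps aux | grep "[r]syncd"'
ssh remote_backup 'grep -i "read only" /etc/rsyncd.conf'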
Here are the possible approaches I found:
Option 1: Force Direct Filesystem Access
Modify your rsync command to explicitly use SSH transport:
rsync -e ssh -avz /local/path/ username@host:/remote/path/
Or more explicitly:
rsync --rsh='ssh' -avz /local/path/ username@host:/remote/path/
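Throughout this post, remote_backup is the SSH host the backup script talks to. If it is defined as an alias in ~/.ssh/config, the entry looks something like this (hostname, user, and key are placeholders):
Host remote_backup
    HostName nas.example.com
    User backupuser
    IdentityFile ~/.ssh/id_ed25519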
Option 2: Configure Rsyncd Properly
If you want to use rsyncd, create or edit /etc/rsyncd.conf:
[backup]
path = /path/to/backup
read only = no
list = yes
uid = root
gid = root
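Because the rsync daemon normally re-reads rsyncd.conf on each client connection, the change takes effect without a restart, and you can verify it right away (host and user below are placeholders):
rsync username@host::                        # lists the modules the daemon exposes (those with list = yes)
rsync -av test.txt username@host::backup/    # should now succeed instead of failing with "module is read only"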
Option 3: Use Absolute Destination Paths
Spelling out an absolute path (with a leading slash) on the remote side makes rsync treat the destination as a filesystem path rather than a module name:
rsync -avz /local/path/ username@host:/absolute/path/
After implementing Option 1, I tested with:
rsync -e ssh -av test.txt remote_backup:/absolute/path/to/backups/
This successfully transferred the file without module errors.
To avoid similar issues:
- Create a wrapper script that explicitly specifies SSH transport (a minimal example follows this list)
- Document your rsync configuration in version control
- Test backup procedures after any system updates
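A minimal version of such a wrapper (the first bullet above) could look like this; the paths are placeholders, and the host alias is the one used throughout this post:
#!/bin/sh
# backup-to-nas.sh -- always force SSH transport and an absolute destination path.
set -eu
SRC="/local/path/"                        # placeholder source directory
DEST="remote_backup:/volume1/backups/"    # absolute remote path, never a bare module name
exec rsync -e ssh -avz "$SRC" "$DEST"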