When trying to access SSHFS-mounted directories from within Docker containers, developers commonly encounter two frustrating errors:
# With -v flag
docker: Error response from daemon: error while creating mount source path: mkdir /path/sshfs: file exists.
# With --mount flag
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist
Docker's bind-mount machinery and SSHFS's FUSE-based mounts don't interact cleanly. The key issues:
- SSHFS creates FUSE mounts, which by default are visible only to the user who created them; the Docker daemon runs as root, so it either cannot stat the source path ("bind source path does not exist") or tries to create it and collides with the existing mountpoint ("file exists"). A quick check is shown below.
- Docker's bind mounts expect an ordinary host path with standard POSIX filesystem semantics
- Permission and UID/GID mapping between the host, the remote server, and the container adds further complexity
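A quick way to confirm the visibility problem on the host (paths are illustrative, and this assumes the SSHFS mount was created by a regular non-root user):
# Works for the user who created the mount:
ls /path/sshfs
# The Docker daemon runs as root, and by default FUSE hides the mount from other
# users, so the same check as root typically fails with a permission error:
sudo ls /path/sshfs
# Remounting with allow_other makes the path visible to root (and therefore to dockerd);
# non-root users need user_allow_other enabled in /etc/fuse.conf:
fusermount -u /path/sshfs
sshfs -o allow_other user@remote:/path /path/sshfs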
Option 1: Direct SSHFS Mount Inside Container
Install SSHFS (which pulls in the FUSE userspace tools as a dependency) in your container image:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y sshfs && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /target
Then run the container with access to the FUSE device and the capability FUSE mounting needs (on AppArmor-enabled hosts you may also need --security-opt apparmor=unconfined):
docker run --device /dev/fuse --cap-add SYS_ADMIN -it myimage bash
# Inside container:
sshfs -o allow_other user@remote:/path /target
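For unattended use you would normally mount at container start instead of typing the command by hand. A minimal sketch, assuming key-based authentication via the host's ~/.ssh and a placeholder main process called your-app:
# Provide keys/known_hosts read-only; accept-new avoids an interactive host-key prompt
docker run --device /dev/fuse --cap-add SYS_ADMIN \
  -v $HOME/.ssh:/root/.ssh:ro myimage \
  sh -c 'sshfs -o allow_other,StrictHostKeyChecking=accept-new user@remote:/path /target && exec your-app'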
Option 2: Host Proxy Mount
Configure the host to expose SSHFS via NFS:
# On host (allow_other so processes other than the mounting user can read the mount):
sshfs -o allow_other user@remote:/path /mnt/sshfs
sudo apt-get install -y nfs-kernel-server
# fsid is needed because FUSE filesystems have no stable device id to export
echo "/mnt/sshfs *(rw,sync,no_subtree_check,fsid=1)" | sudo tee -a /etc/exports
sudo exportfs -a
Then mount NFS in container:
docker run --privileged -it myimage bash
# Inside container:
apt-get update && apt-get install -y nfs-common
mount -t nfs host-ip:/mnt/sshfs /target
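If you go the NFS route, Docker's built-in local volume driver can also perform the NFS mount itself, which avoids running a privileged container (the address and options below are illustrative):
# Create a volume backed by the NFS export of the host's SSHFS mount
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/mnt/sshfs \
  sshfs-nfs
docker run -it -v sshfs-nfs:/target myimage bash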
Option 3: SSHFS Docker Volume Plugin
For production environments, consider specialized plugins:
docker plugin install --grant-all-permissions vieux/sshfs
docker volume create -d vieux/sshfs -o sshcmd=user@remote:/path -o password=secret sshvolume
docker run -v sshvolume:/target myimage
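Volumes created through the plugin behave like any other Docker volume, so the usual lifecycle commands apply:
# Inspect, remove, and disable when no longer needed
docker volume inspect sshvolume
docker volume rm sshvolume
docker plugin disable vieux/sshfs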
A few general debugging tips:
- Always check dmesg for FUSE-related errors
- Use strace -f sshfs to debug mounting issues
- Consider sshfs -o debug for verbose output
- For permission issues, experiment with the -o uid= and -o gid= options
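Putting a few of these together, a typical debug invocation looks something like this (remote path and mountpoint are illustrative):
# Run sshfs in the foreground with FUSE and SSHFS debug output, then check kernel messages
sshfs -f -o debug,sshfs_debug user@remote:/path /target
dmesg | grep -i fuse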
Option 4: Host-Side Bind Mount
If the host's SSHFS mount is made visible to the Docker daemon, a plain bind mount also works:
# On host; allow_other lets other users, including the Docker daemon, traverse the mount
# (non-root users need user_allow_other enabled in /etc/fuse.conf)
mkdir -p /host_mount
sshfs -o allow_other user@remote:/remote/path /host_mount
docker run -it -v /host_mount:/target:ro myimage bash
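One caveat: Docker bind mounts default to private mount propagation, so if the SSHFS connection drops and you remount it on the host, a running container keeps seeing the stale, empty directory. Binding the parent directory with slave propagation avoids this; a sketch assuming the share is mounted under a dedicated parent such as /mnt/sshfs rather than /host_mount:
# Host-side remounts under /mnt now propagate into the running container
docker run -it --mount type=bind,source=/mnt,target=/mnt,readonly,bind-propagation=rslave myimage bash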
When implementing SSHFS-Docker integration:
- Prefer SSH key authentication over passwords
- Use read-only mounts where possible (the :ro flag)
- Limit container capabilities to the minimum required
- Consider network segmentation for sensitive data
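Combining these points, a more locked-down variant of Option 1 might look like this (the exact flag set is a suggestion, not a requirement):
# Drop all capabilities except the one FUSE mounting needs, and forbid privilege escalation
docker run -it --rm --cap-drop ALL --cap-add SYS_ADMIN --device /dev/fuse \
  --security-opt no-new-privileges myimage bash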
Common issues and fixes:
# Debugging mount failures (trace sshfs in a throwaway container)
docker run -it --rm --cap-add SYS_ADMIN --device /dev/fuse alpine sh -c \
  "apk add sshfs strace && mkdir -p /remote && strace -f sshfs user@remote:/path /remote"
# Permission denied errors
sshfs -o allow_other,uid=$(id -u),gid=$(id -g) user@remote:/path /mountpoint
# Connection timeouts
sshfs -o ServerAliveInterval=15,ServerAliveCountMax=3 user@remote:/path /mountpoint
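Another frequent symptom is a stale mount that reports "Transport endpoint is not connected" after the SSH connection dies; detaching and remounting usually clears it:
# Stale FUSE mounts: unmount (lazily if needed), then mount again with auto-reconnect
fusermount -u /mountpoint || sudo umount -l /mountpoint
sshfs -o reconnect,ServerAliveInterval=15 user@remote:/path /mountpoint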