How to Mount Kubernetes Volumes with Specific UID for Nexus 3 Pod



When deploying Sonatype Nexus 3 in Kubernetes, you might encounter permission errors like:

mkdir: cannot create directory '../sonatype-work/nexus3/log': Permission denied
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log

This occurs because the Nexus container runs with UID 200, but the mounted volume doesn't have the correct permissions.

The official Nexus 3 Docker image specifies:

A persistent directory, /nexus-data, needs to be writable by UID 200

Kubernetes does not change a volume's ownership to match the container's user during mounting (only the fsGroup mechanism does, and only for some volume types), which leads to these permission conflicts.

Here are several effective approaches to resolve this:

1. Using initContainer to Set Permissions

This is the most reliable method:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nexus
spec:
  selector:
    matchLabels:
      app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      initContainers:
      - name: volume-permission-fix
        image: busybox
        command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
        volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
      containers:
      - name: nexus
        image: sonatype/nexus3
        volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
      volumes:
      - name: nexus-data
        persistentVolumeClaim:
          claimName: nexus-pvc
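One refinement worth considering, assuming the same volume layout as above: a recursive chown on a large blob store can take minutes on every pod start, so you can guard it to run only when the volume root is not yet owned by UID 200. A minimal sketch:

```yaml
initContainers:
- name: volume-permission-fix
  image: busybox
  # Skip the recursive chown when ownership already matches; chown -R on a
  # large Nexus blob store can add minutes to every restart.
  command:
  - sh
  - -c
  - '[ "$(stat -c %u /nexus-data)" = "200" ] || chown -R 200:200 /nexus-data'
  volumeMounts:
  - name: nexus-data
    mountPath: /nexus-data
```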

2. Using fsGroup in SecurityContext

For certain storage providers (like AWS EBS):

securityContext:
  fsGroup: 200

Note: This doesn't work with all storage backends (NFS typically doesn't support it).
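On Kubernetes 1.20+ you can additionally set fsGroupChangePolicy, so the recursive ownership change runs only when the volume root does not already match, which avoids slow pod startup on large volumes:

```yaml
securityContext:
  fsGroup: 200
  # Only relabel the volume when its root ownership differs from fsGroup
  fsGroupChangePolicy: "OnRootMismatch"
```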

3. Pre-configuring Host Path Permissions

If you are using hostPath volumes, prepare the directory on the node where the pod will run:

mkdir -p /mnt/nexus-data
chown -R 200:200 /mnt/nexus-data

Then in your deployment:

volumes:
- name: nexus-data
  hostPath:
    path: /mnt/nexus-data
    type: Directory
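To verify that the ownership actually took effect, compare the numeric UID with stat. A minimal local sketch, using a temporary directory as a stand-in for /mnt/nexus-data (chown to 200:200 requires root on the node, so the sketch only demonstrates the check itself):

```shell
# Stand-in for /mnt/nexus-data: on the real node you would run
#   stat -c '%u:%g' /mnt/nexus-data
# after the chown above and expect it to print 200:200.
dir=$(mktemp -d)
mkdir -p "$dir/nexus-data"
owner_uid=$(stat -c '%u' "$dir/nexus-data")   # numeric UID owning the directory
echo "$owner_uid"
```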

When using dynamic provisioning, some CSI drivers can set ownership at provision time. The knob lives in the StorageClass parameters, not on the PVC (a PVC spec has no volumeAttributes field). The provisioner name and parameter keys below are placeholders; for example, the NFS CSI driver uses a mountPermissions parameter, while other drivers expose uid/gid parameters:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexus-storage
provisioner: example.csi.vendor.com   # placeholder: your CSI driver
parameters:
  uid: "200"    # parameter names vary by driver
  gid: "200"

Then reference the class from the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-pvc
spec:
  storageClassName: nexus-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Check your CSI driver's documentation for the exact parameter names.

  • Verify permissions with kubectl exec deploy/nexus -- ls -ldn /nexus-data (the numeric owner should be 200)
  • Check storage backend capabilities
  • Consider using securityContext.runAsUser: 200 for consistency
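Putting the last bullet into practice, a pod-level security context that keeps the runtime UID/GID and group ownership aligned might look like this (the values simply mirror the Nexus image's UID 200):

```yaml
securityContext:
  runAsUser: 200    # match the nexus user baked into the image
  runAsGroup: 200
  fsGroup: 200      # group ownership applied to supported volume types
```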

For complete control, you could build a custom image:

FROM sonatype/nexus3
USER root
RUN mkdir -p /nexus-data && \
    chown -R 200:200 /nexus-data
USER nexus

Note, however, that this only covers paths baked into the image: when a volume is mounted at /nexus-data, the mount hides the image's directory and the volume's own ownership applies, so for persistent data you still need one of the approaches above.


Full fsGroup Deployment Example

For certain storage classes (like AWS EBS), setting fsGroup in the pod's security context is sufficient on its own; a complete deployment looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nexus
spec:
  selector:
    matchLabels:
      app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      securityContext:
        fsGroup: 200
      containers:
      - name: nexus
        image: sonatype/nexus3
        volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
      volumes:
      - name: nexus-data
        persistentVolumeClaim:
          claimName: nexus-pvc

Pre-configuring the Persistent Volume

If you have control over the underlying storage (for example an NFS export), you can pre-configure the permissions on the backend:

# NFS example. NFS does not honor uid=/gid= mount options (those exist for
# filesystems such as CIFS or vfat), so set ownership on the export itself,
# on the NFS server: chown -R 200:200 /mnt/nexus
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexus-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /mnt/nexus
    server: nfs-server.example.com
  mountOptions:
    - nfsvers=4.1
    - noatime
    - nodev
    - noexec
    - nosuid
The best solution depends on your infrastructure:

  • For cloud providers: the fsGroup approach is simplest
  • For on-prem NFS: set ownership on the export itself (chown -R 200:200 on the server)
  • For most general cases: an initContainer is the most reliable

If you still face issues:

  1. Check that your storage class supports fsGroup (AWS EBS, GCE PD, and Azure Disk do)
  2. Verify your PVC isn't bound to a PV with existing data
  3. Test with emptyDir first to isolate permission issues
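For the emptyDir isolation test in step 3, a throwaway pod like this (the name is illustrative) shows whether the problem comes from the storage backend or from the image itself, since kubelet creates emptyDir directories world-writable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nexus-emptydir-test   # illustrative name
spec:
  containers:
  - name: nexus
    image: sonatype/nexus3
    volumeMounts:
    - name: nexus-data
      mountPath: /nexus-data
  volumes:
  - name: nexus-data
    emptyDir: {}   # if Nexus starts cleanly here, the PVC/backend is the culprit
```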