How to Convert Docker Run Commands to Kubernetes kubectl: Mapping Parameters Like -notebook Flag



When migrating from Docker to Kubernetes, one common pain point is translating Docker's run command parameters to their Kubernetes equivalents. The example command:

docker run -p 8080:8080 sagemath/sagemath sage -notebook

contains several elements that need proper mapping: port forwarding, image selection, and the tricky -notebook argument passed to the container's entrypoint.

Let's analyze each part:

docker run -p 8080:8080  # Port mapping (host:container)
sagemath/sagemath        # Image name
sage                     # Command run inside the container
-notebook                # Argument to that command
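
These pieces carry over directly to a Kubernetes container spec: the program corresponds to `command` (which overrides the image's ENTRYPOINT), the trailing arguments correspond to `args` (which override CMD), and the container side of `-p` becomes a `containerPort`. A small Python sketch of that mapping (the helper function is illustrative, not a real docker run parser):

```python
# Illustrative sketch (not a real docker run parser): how the pieces of
#   docker run -p 8080:8080 sagemath/sagemath sage -notebook
# map onto the fields of a Kubernetes container spec.

def docker_run_to_container(image, host_port, container_port, cmd):
    """Map docker run elements onto Kubernetes container-spec fields."""
    return {
        "image": image,                 # sagemath/sagemath
        "command": [cmd[0]],            # "sage": overrides ENTRYPOINT
        "args": list(cmd[1:]),          # "-notebook": overrides CMD
        # Only the container side of -p maps here; the host side (8080)
        # is handled by a Service, not by the container spec.
        "ports": [{"containerPort": container_port}],
    }

spec = docker_run_to_container("sagemath/sagemath", 8080, 8080,
                               ["sage", "-notebook"])
print(spec)
```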

The initial attempt:

kubectl run --image=sagemath/sagemath sage --port=8080 --type=LoadBalancer -notebook

fails because kubectl run parses -notebook as one of its own flags rather than passing it to the container (and --type=LoadBalancer belongs to kubectl expose, not kubectl run). This is a fundamental difference in how docker run and kubectl run handle trailing command-line parameters.

The most reliable approach is to create a Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sagemath
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sagemath
  template:
    metadata:
      labels:
        app: sagemath
    spec:
      containers:
      - name: sage
        image: sagemath/sagemath
        command: ["sage"]
        args: ["-notebook"]
        ports:
        - containerPort: 8080

For quick testing, you can use:

kubectl run sagemath --image=sagemath/sagemath --port=8080 --command -- sage -notebook

Everything after the double dash (--) is handed to the container instead of being parsed by kubectl. The --command flag additionally makes those values the container's command (overriding the image's ENTRYPOINT) rather than its args; append --dry-run=client -o yaml to preview the generated spec without creating anything.

To properly expose the notebook interface, create a Service:

apiVersion: v1
kind: Service
metadata:
  name: sagemath-service
spec:
  selector:
    app: sagemath
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
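
A LoadBalancer Service only receives an external IP on clusters that provide one (cloud providers, or Minikube via minikube tunnel). On a bare local cluster, a NodePort Service is a common fallback. A sketch, where the name sagemath-nodeport and the nodePort value 30080 are arbitrary choices (the value must fall in the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sagemath-nodeport   # hypothetical name
spec:
  selector:
    app: sagemath
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080       # arbitrary pick from the default 30000-32767 range
```

The notebook is then reachable on port 30080 of any node's IP.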

After applying these configurations with kubectl apply -f, verify with:

kubectl get pods
kubectl logs [pod-name]
kubectl get svc

You should see the SageMath notebook starting up in the pod logs and get an external IP from the service.

A few pitfalls to watch for:

1. Port conflicts: ensure no other Service or host process is already using port 8080.
2. Entrypoint overrides: setting command in a manifest replaces the image's ENTRYPOINT, which some images rely on for setup; override it only when necessary.
3. Resource limits: SageMath can be resource-intensive, so set requests and limits:

resources:
  requests:
    memory: "2Gi"
    cpu: "1"
  limits:
    memory: "4Gi"
    cpu: "2"
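
This resources block is not freestanding; it belongs on the container entry of the Deployment, as a sibling of image and args. A sketch of the placement, abbreviated to the relevant fields:

```yaml
spec:
  template:
    spec:
      containers:
      - name: sage
        image: sagemath/sagemath
        command: ["sage"]
        args: ["-notebook"]
        resources:          # sibling of image/args on the container entry
          requests:
            memory: "2Gi"
            cpu: "1"
          limits:
            memory: "4Gi"
            cpu: "2"
```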

An alternative to --command is to keep the image's entrypoint and supply only the container arguments. A naive translation:

kubectl run sage --image=sagemath/sagemath --port=8080

starts the container but drops the crucial -notebook parameter that makes SageMath launch in notebook mode. In kubectl run, container arguments must follow a double dash:

kubectl run sage --image=sagemath/sagemath --port=8080 -- sage -notebook

The double dash (--) separates kubectl's own flags from the command and arguments destined for the container. Without --command, everything after -- becomes the container's args (replacing the image's CMD) while the ENTRYPOINT is preserved; with --command, it replaces the ENTRYPOINT as well.
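
kubectl is not unusual here: most command-line parsers treat a bare -- as the end of their own options and everything after it as plain data. Python's standard argparse module shows the same behavior:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--port")           # a flag the parser knows about
parser.add_argument("rest", nargs="*")  # positional catch-all

# Before "--", an unknown token starting with "-" is treated as a flag
# and rejected (argparse exits with an error, much as kubectl complains
# about -notebook).
try:
    parser.parse_args(["--port", "8080", "-notebook"])
except SystemExit:
    print("rejected: -notebook looks like a parser flag")

# After "--", the same token is ordinary data for the program.
ns = parser.parse_args(["--port", "8080", "--", "-notebook"])
print(ns.rest)  # ['-notebook']
```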


Both approaches properly handle the container's command-line arguments while following Kubernetes conventions for deployment and service exposure.