When updating Docker containers running web servers like Apache or Nginx, the fundamental issue is a port binding conflict: ports 80/443 can only be bound by one container at a time, so the standard docker stop → docker rm → docker run sequence inevitably creates downtime.
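The conflict is easy to reproduce; a minimal sketch (the error text is paraphrased and its exact wording varies by Docker version):

```bash
# First container claims host port 80
docker run -d --name apache_old -p 80:80 apache:latest

# A second container cannot publish the same host port; Docker fails with
# something like "Bind for 0.0.0.0:80 failed: port is already allocated"
docker run -d --name apache_new -p 80:80 apache:latest
```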
1. Load Balancer with Blue-Green Deployment
The most robust approach uses a reverse proxy/load balancer:
```bash
# Step 1: Deploy the new container alongside the old one, on a temporary host port
docker run -d -p 8080:80 --name apache_new apache:latest
```

```nginx
# Step 2: Update the load balancer config to include both containers (Nginx example)
upstream backend {
    server 172.17.0.2:80;   # Old container (bridge IP, container port 80)
    server 172.17.0.3:80;   # New container
}
```

```nginx
# Step 3: Drain connections from the old container by marking it down
upstream backend {
    server 172.17.0.2:80 down;   # Old container no longer receives new requests
    server 172.17.0.3:80;
}
```
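Each upstream change only takes effect after Nginx reloads its configuration; a reload lets existing workers finish in-flight requests, which keeps the cutover graceful. A minimal sketch, assuming Nginx runs on the host with its default configuration path:

```bash
# Validate the edited configuration, then reload without dropping in-flight requests
nginx -t && nginx -s reload
```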
```bash
# Step 4: Remove the old container
docker stop apache_old && docker rm apache_old
```
2. Docker Swarm/Kubernetes Rolling Updates
For orchestrated environments:
```bash
# Docker Swarm example: replace tasks one at a time, starting each new task before stopping the old one
docker service update \
  --image apache:new-version \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-order start-first \
  apache_service
```
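For Kubernetes, the equivalent rolling update is handled natively by a Deployment; a minimal sketch, assuming a Deployment named apache-deployment with a container named apache (both names are placeholders):

```bash
# Kubernetes example: update the image and wait for the rollout to complete
kubectl set image deployment/apache-deployment apache=apache:new-version
kubectl rollout status deployment/apache-deployment
```

The Deployment's RollingUpdate strategy (maxSurge/maxUnavailable) controls how many replacement pods start before old ones are terminated.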
3. Connection Draining with HAProxy
```
backend apache
    balance roundrobin
    # weight 0 drains the old server: existing sessions finish, no new ones are assigned
    server old 172.17.0.2:80 check weight 0
    server new 172.17.0.3:80 check
```
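The drain state can also be applied at runtime through HAProxy's Runtime API, avoiding a configuration edit and reload; this sketch assumes the stats socket is enabled in your config and lives at the path shown:

```bash
# Stop routing new connections to the old server while existing sessions finish
echo "set server apache/old state drain" | socat stdio /var/run/haproxy.sock
```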
4. DNS-Based Weighted Routing
For cloud environments, use weighted DNS records (AWS Route 53 example):
```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com",
      "Type": "A",
      "SetIdentifier": "NewContainer",
      "Weight": 10,
      "AliasTarget": {
        "HostedZoneId": "Z2ABCD...",
        "DNSName": "new-elb.example.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
```
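To apply the change batch, save it to a file and submit it with the AWS CLI (the zone ID and file name below are placeholders); re-submitting with a higher Weight value progressively shifts more traffic to the new target:

```bash
# Submit the weighted-record change to Route 53
aws route53 change-resource-record-sets \
  --hosted-zone-id Z2ABCD... \
  --change-batch file://weighted-record.json
```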
Essential commands for verification:
```bash
# Check container health (requires a health check, see below)
docker inspect --format '{{.State.Health.Status}}' apache_new

# Validate the HAProxy configuration before reloading
haproxy -c -f /etc/haproxy/haproxy.cfg

# Load-test during the cutover to confirm no requests are dropped (using Siege)
siege -c100 -t1M http://example.com
```
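Note that the health-status check above only returns a value if the container defines a health check; if the image lacks one, it can be attached at run time. A sketch, assuming curl is available inside the image and the app answers on port 80:

```bash
# Start the new container with an explicit health check
docker run -d -p 8080:80 --name apache_new \
  --health-cmd "curl -fsS http://localhost/ || exit 1" \
  --health-interval 10s --health-retries 3 \
  apache:latest
```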
To put this into practice on a single host, the blue-green deployment pattern sidesteps the port binding constraint by maintaining two identical environments on separate host ports and switching traffic between them:
```bash
# Blue environment (current production)
docker run -d --name web_blue -p 8080:80 apache_image:v1

# Green environment (new version)
docker run -d --name web_green -p 8081:80 apache_image:v2
```
Use a lightweight reverse proxy as the traffic controller:
```nginx
# Nginx configuration example
upstream blue {
    server 172.17.0.1:8080;   # Blue container's published port, reached via the Docker bridge gateway
}
upstream green {
    server 172.17.0.1:8081;   # Green container's published port
}

server {
    listen 80;

    location / {
        proxy_pass http://blue;          # Active environment
        # Fail over to the green environment if blue returns a bad gateway error
        proxy_intercept_errors on;
        error_page 502 = @green_backup;
    }

    location @green_backup {
        proxy_pass http://green;
    }
}
```
Beyond a single-host reverse proxy, consider these approaches:
- DNS-level switching: Update DNS records with low TTL values
- Service mesh routing: Use Istio or Linkerd for traffic shifting
- Kubernetes rolling updates: Native support through Deployment objects
Automate verification before the final cutover:

```bash
#!/bin/bash
# Probe the green environment's health endpoint before switching traffic
response=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8081/health)

if [ "$response" -eq 200 ]; then
    # Point the proxy at the green upstream and reload Nginx
    sed -i 's/proxy_pass http:\/\/blue/proxy_pass http:\/\/green/' /etc/nginx/conf.d/proxy.conf
    nginx -s reload
    # Optionally stop the old container (kept around for quick rollback)
    docker stop web_blue
else
    echo "Deployment verification failed"
    exit 1
fi
```
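Because the script only stops web_blue rather than removing it, rollback stays cheap: switch proxy_pass back to the blue upstream, reload Nginx, and bring the old container back with docker start web_blue.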