Debugging Docker Container Connectivity Issues: Why Can’t Containers Reach sts.nih.gov When Host Can?

I recently encountered a perplexing networking scenario where a Docker container couldn't connect to sts.nih.gov while the host machine had no issues. Here's my deep dive into troubleshooting this specific connectivity problem that only manifests for certain external IPs.

The core symptom appears when running:

# On host (works)
curl -vv -o /tmp/test https://sts.nih.gov

# In container (fails)
docker run -ti ubuntu:18.04 /bin/bash
# ubuntu:18.04 does not ship these tools, so install them first
apt-get update && apt-get install -y curl dnsutils netcat traceroute
curl -vv --ipv4 -o /tmp/test https://sts.nih.gov
* connect to 128.231.243.251 port 443 failed: Connection timed out
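A useful first check is whether the container's SYN packets actually leave the host and whether anything comes back. A minimal sketch, run on the host and assuming the default docker0 bridge and an uplink named eth0 (adjust to your interfaces):

# Terminal 1 (host): watch the Docker bridge
tcpdump -ni docker0 host 128.231.243.251
# Terminal 2 (host): watch the physical uplink
tcpdump -ni eth0 host 128.231.243.251
# Re-run the failing curl in the container; SYNs leaving eth0 with no
# SYN/ACK coming back point at filtering somewhere beyond the host.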

DNS Resolution Analysis

First, we verify DNS resolution works in the container:

nslookup sts.nih.gov
Server:     67.207.67.3
Address:    67.207.67.3#53

Non-authoritative answer:
sts.nih.gov canonical name = sts.ha.nih.gov.
Name:   sts.ha.nih.gov
Address: 128.231.243.251
Name:   sts.ha.nih.gov
Address: 2607:f220:404:9124:128:231:243:251
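Since the name also resolves to an AAAA record, it is worth confirming whether curl's initial failure is on the IPv6 path (which is why --ipv4 is needed above). A quick sketch from inside the container:

# Show every address the resolver returns (A and AAAA)
getent ahosts sts.nih.gov
# Test the IPv6 path explicitly; on a default bridge network this typically
# fails because Docker does not give the container IPv6 connectivity
curl -6 -sv https://sts.nih.gov -o /dev/null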

TCP Connectivity Testing

Basic TCP connectivity fails even when bypassing DNS:

# Using the direct IP still fails (the TLS certificate won't match the bare IP,
# but the failure here happens earlier, at the TCP connect stage)
curl -vv -o /tmp/test https://128.231.243.251

# Raw TCP test also fails
netcat -zvn 128.231.243.251 443
(UNKNOWN) [128.231.243.251] 443 (?) : Connection timed out

Traceroute shows packets reaching the destination network:

traceroute --tcp 128.231.243.251
...
17  128.231.243.251 (128.231.243.251)  77.268 ms  77.215 ms  76.815 ms
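It also helps to run the same TCP traceroute from inside the container, to see where container-sourced packets stop; a sketch (the tool has to be installed in the container first):

# Inside the container
apt-get update && apt-get install -y traceroute
# TCP traceroute to port 443, the same path the failing connection takes
traceroute -T -p 443 128.231.243.251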

The key observation is that only certain destinations are affected. The next step is to compare iptables rules on the host and in the container to rule out local filtering:

# On host
iptables -L -n -v

# In container (requires --privileged and the iptables package)
docker run --privileged -ti ubuntu:18.04 /bin/bash
apt-get update && apt-get install -y iptables
iptables -L -n -v
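On the host, the rules most relevant to container traffic are Docker's NAT entries; a sketch for confirming that the bridge subnet is being masqueraded as expected:

# Show the POSTROUTING NAT rules Docker manages
iptables -t nat -L POSTROUTING -n -v
# Confirm which subnet the default bridge uses
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'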

With DNS, routing, and local firewall rules all looking reasonable, the next step was to work through the usual container-networking suspects.

1. MTU Configuration

Try adjusting MTU settings for Docker:

# Create or modify /etc/docker/daemon.json
{
  "mtu": 1400
}

# Then restart the Docker daemon so the default bridge picks up the new MTU
systemctl restart docker
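To check whether MTU is actually the problem, probe the path from inside the container with the don't-fragment bit set; a small sketch (1372 bytes of payload corresponds to a 1400-byte packet once the 28 bytes of IP and ICMP headers are added):

# Inside the container (ping is not in the base image)
apt-get update && apt-get install -y iputils-ping
# Send non-fragmentable probes sized for a 1400-byte MTU
ping -M do -s 1372 -c 3 sts.nih.gov
# "Message too long" or silent loss at larger sizes points at a path MTU issue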

2. Network Driver Options

Experiment with different network drivers:

docker network create --driver=bridge --subnet=192.168.100.0/24 custom_net
docker run --network=custom_net -ti ubuntu:18.04 /bin/bash
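After creating the network, it is worth confirming the subnet it actually got before retesting; a short sketch:

# On the host: confirm the subnet assigned to the new network
docker network inspect custom_net -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# Inside the container started above: install curl and retest
apt-get update && apt-get install -y curl
curl -sv --ipv4 https://sts.nih.gov -o /dev/null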

3. Source IP Verification

The target server might be blocking Docker's NAT IP ranges. Try using host networking:

docker run --network=host -ti ubuntu:18.04 /bin/bash
curl https://sts.nih.gov

4. TCP Stack Tweaks

Adjust TCP parameters in the container:

sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_sack=1
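Note that in an unprivileged container most of /proc/sys is mounted read-only, so sysctl -w will usually fail there; the supported way to set these per container is at startup with --sysctl, which Docker allows for namespaced net.ipv4 keys:

docker run \
  --sysctl net.ipv4.tcp_window_scaling=1 \
  --sysctl net.ipv4.tcp_timestamps=1 \
  --sysctl net.ipv4.tcp_sack=1 \
  -ti ubuntu:18.04 /bin/bash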

In my case, the root cause was the target server's firewall blocking traffic from Docker's default bridge network IP range. The permanent solution was to give the container a dedicated, routable address with the macvlan driver:

# Use macvlan driver for dedicated IP
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 pub_net

docker run --network=pub_net --ip=192.168.1.42 -ti ubuntu:18.04 /bin/bash

This gave the container a routable IP address that wasn't blocked by the destination server's firewall rules.
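To confirm the macvlan container really came up with the dedicated address, inspect it from the host (sts_test is just an illustrative container name):

# Start a named container on the macvlan network
docker run -d --name sts_test --network=pub_net --ip=192.168.1.42 ubuntu:18.04 sleep infinity
# Verify the address Docker assigned
docker inspect -f '{{.NetworkSettings.Networks.pub_net.IPAddress}}' sts_test

One macvlan caveat: the host itself generally cannot reach containers over the parent interface, which is fine here since only outbound connectivity to sts.nih.gov matters.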


For completeness, here is the fuller picture of the diagnosis: the container couldn't connect to sts.nih.gov (128.231.243.251) while the host machine could, even though other external connections worked perfectly.

Host machine access works flawlessly:

curl -vv -o /tmp/test https://sts.nih.gov

But the same request fails inside a fresh container:

docker run -ti ubuntu:18.04 /bin/bash
curl -vv --ipv4 -o /tmp/test https://sts.nih.gov

Key observations:
  • DNS resolution works in container
  • IPv6 attempts fail without --ipv4 flag
  • Direct IP access fails too
  • TCP connectivity issues (not HTTPS-specific)
  • Other sites like serverfault.com work fine

Traceroute shows packets reaching the NIH network:

traceroute --tcp 128.231.243.251

But TCP connection attempts time out:

netcat -zvn 128.231.243.251 443

A few hypotheses seemed worth checking:

1. Source Address Verification

The NIH network might be implementing strict source address validation. Docker's NAT masquerading can sometimes trigger this.
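To see what the NAT layer is doing with the container's connections, watch the conntrack table on the host while retrying the curl (this assumes the conntrack CLI is installed):

# Show tracked connections to the problem host, including the rewritten source
conntrack -L -d 128.231.243.251
# An entry stuck in SYN_SENT with the source rewritten to the host address
# means masquerading is working but no reply is coming back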

2. MTU/MSS Issues

Path MTU discovery problems are common in container networking:

docker run --sysctl net.ipv4.ip_no_pmtu_disc=1 -ti ubuntu:18.04 /bin/bash
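A related, widely used mitigation is clamping the TCP MSS to the path MTU on the host's egress interface, so the remote end never sends segments that won't fit (eth0 here is an assumption):

# On the host: clamp MSS on outgoing SYNs to the discovered path MTU
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu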

3. TCP Stack Differences

Try syncing host and container TCP settings:

sysctl -w net.ipv4.tcp_timestamps=1
# tcp_tw_recycle was removed in Linux 4.12, so this only applies on older kernels
sysctl -w net.ipv4.tcp_tw_recycle=0
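To see whether host and container actually differ, read the same keys in both network namespaces and compare; a quick sketch:

# On the host
grep -H . /proc/sys/net/ipv4/tcp_timestamps /proc/sys/net/ipv4/tcp_sack /proc/sys/net/ipv4/tcp_window_scaling
# Same keys inside a fresh container; differing values point at a per-namespace mismatch
docker run --rm ubuntu:18.04 \
  grep -H . /proc/sys/net/ipv4/tcp_timestamps /proc/sys/net/ipv4/tcp_sack /proc/sys/net/ipv4/tcp_window_scaling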

Solution A: Use Host Networking

For testing purposes:

docker run --network host -ti ubuntu:18.04 /bin/bash

Solution B: Adjust iptables Rules

Make sure traffic from the container subnet is explicitly masqueraded on its way out; scope the rule to Docker's default bridge range and the host's egress interface (eth0 here is an assumption) rather than masquerading everything:

iptables -t nat -A POSTROUTING -s 172.17.0.0/16 -o eth0 -j MASQUERADE

Solution C: Bridge Network Tweaks

Create a custom bridge with adjusted parameters:

docker network create --driver bridge \
--subnet 172.25.0.0/16 \
--gateway 172.25.0.1 \
--opt com.docker.network.bridge.name=mybridge \
my-custom-network
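A quick sanity check that the bridge came up with the requested name and that containers on it can reach the target:

# Confirm the Linux bridge exists under the name set via the bridge.name option
ip link show mybridge
# Retest from a container attached to the new network
docker run --rm --network my-custom-network -ti ubuntu:18.04 /bin/bash
apt-get update && apt-get install -y curl
curl -sv --ipv4 https://sts.nih.gov -o /dev/null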

In my case, a combination of several of these approaches also restored connectivity:

  1. Created custom Docker network
  2. Disabled PMTU discovery
  3. Added explicit MASQUERADE rule

The key was recognizing that the NIH network had specific anti-spoofing measures that interacted poorly with Docker's default networking configuration.