When you see "ip_conntrack: table full, dropping packet" in your kernel logs, it means your system's connection tracking table has hit its limit. This commonly occurs on servers handling numerous concurrent connections, particularly web servers or NAT gateways. The default value of 65536 (64K) may be insufficient for modern workloads.
Each connection tracking entry consumes approximately 300-400 bytes of memory. For a 4GB system:
- 64K entries: ~25MB memory
- 128K entries: ~50MB memory
- 256K entries: ~100MB memory
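These are rough estimates; to see what the conntrack slab cache is actually consuming on a running system, you can pull the numbers from /proc/slabinfo. A minimal sketch, assuming the slabinfo 2.x column layout (column 3 = allocated objects, column 4 = object size) and that the cache is named ip_conntrack on RHEL5 or nf_conntrack on newer kernels:

# awk '/conntrack/ { printf "%s: %d objs x %d bytes = ~%.1f MB\n", $1, $3, $4, $3*$4/1048576 }' /proc/slabinfo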
Given your 4GB RAM, increasing to 128K or even 256K would be safe while leaving ample memory for caching static content. Monitor with:
# watch -n1 "grep conntrack /proc/slabinfo"
The two paths correspond to different kernel generations:

/proc/sys/net/ipv4/ip_conntrack_max             # Older (2.4) kernels
/proc/sys/net/ipv4/netfilter/ip_conntrack_max   # Newer (2.6) kernels

On modern systems (including RHEL5), use the netfilter path. Check which one exists on your system.
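For example, a small loop that prints whichever path is present (the nf_conntrack path is included only for completeness on newer kernels):

for f in /proc/sys/net/ipv4/netfilter/ip_conntrack_max \
         /proc/sys/net/ipv4/ip_conntrack_max \
         /proc/sys/net/netfilter/nf_conntrack_max; do
    [ -f "$f" ] && echo "$f = $(cat $f)"
done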
To temporarily increase (until reboot):
# echo 131072 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
For permanent change (RHEL5):
# Add to /etc/sysctl.conf:
net.ipv4.netfilter.ip_conntrack_max = 131072

# Then apply:
# sysctl -p
Consider adjusting timeouts for better table turnover:
# Reduce established connection timeout (default 5 days)
net.netfilter.nf_conntrack_tcp_timeout_established = 86400   # 1 day
# For idle connections
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60

(On RHEL5's ip_conntrack-based kernel the equivalent keys are net.ipv4.netfilter.ip_conntrack_tcp_timeout_established, ip_conntrack_tcp_timeout_time_wait and ip_conntrack_tcp_timeout_close_wait.)
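Before editing these, it's worth checking which timeout keys your running kernel actually exposes and what they are currently set to:

# sysctl -a 2>/dev/null | grep conntrack_tcp_timeout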
Create a monitoring script at /usr/local/bin/conntrack_monitor:
#!/bin/bash
# Log a warning via syslog when the conntrack table passes 80% of its limit
MAX=$(cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max)
USED=$(wc -l < /proc/net/ip_conntrack)
PERCENT=$((100 * USED / MAX))

echo "Connections: $USED/$MAX ($PERCENT%)"

if [ "$PERCENT" -gt 80 ]; then
    logger -t conntrack "Warning: Connection table at ${PERCENT}% capacity"
fi
Make executable and add to cron:
# chmod +x /usr/local/bin/conntrack_monitor

# Add to root's crontab (crontab -e):
*/5 * * * * /usr/local/bin/conntrack_monitor
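Run it once by hand to confirm it behaves as expected; the output below is purely illustrative:

# /usr/local/bin/conntrack_monitor
Connections: 41230/131072 (31%)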
When you see the kernel message ip_conntrack: table full, dropping packet in your logs, it means your system's connection tracking table has reached its maximum capacity. This is particularly common on servers handling high volumes of network traffic, such as static content servers.
The default value for ip_conntrack_max in RHEL5 is 65536. On a system with 4GB RAM that both tracks a large number of connections and serves static content, this default may be insufficient, while you still need to preserve memory for disk caching.
For a 4GB system, you can safely increase ip_conntrack_max to 262144 (256K) entries. Each connection typically consumes about 300 bytes of memory, so this would use approximately:

262,144 entries × 300 bytes ≈ 78MB
This represents only about 1.9% of your total 4GB RAM, leaving ample memory for caching.
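If you want to redo the arithmetic for a different table size or per-entry cost, a trivial sketch (300 bytes is an approximation; the real object size shows up in /proc/slabinfo):

ENTRIES=262144
BYTES_PER_ENTRY=300
echo "~$((ENTRIES * BYTES_PER_ENTRY / 1000000)) MB"   # prints ~78 MB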
There are two locations where this setting can be configured:
/proc/sys/net/ipv4/netfilter/ip_conntrack_max
/proc/sys/net/ipv4/ip_conntrack_max
The correct location depends on your kernel version. For modern kernels (2.6+), use the netfilter path. To make the change persistent across reboots, add to /etc/sysctl.conf:
net.ipv4.netfilter.ip_conntrack_max = 262144
Then apply with:
sysctl -p
To verify your current connection count and confirm the new limit is working:
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
Consider adjusting the connection tracking timeout values to help manage table usage:
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait=30
These values (in seconds) help ensure connections don't linger in the table unnecessarily. As with ip_conntrack_max, add the same key = value lines to /etc/sysctl.conf if you want them to survive a reboot.
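To see which TCP states are actually filling the table (and therefore which timeout is worth tuning first), a rough one-liner; it assumes the /proc/net/ip_conntrack line format where the fourth field of a tcp entry is the state:

awk '/^tcp/ { print $4 }' /proc/net/ip_conntrack | sort | uniq -c | sort -rn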
If you're primarily serving static content, consider using NOTRACK rules so that connection tracking only applies to traffic that actually needs stateful inspection:
iptables -t raw -A PREROUTING -p tcp -m multiport ! --dports 80,443 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp -m multiport ! --sports 80,443 -j NOTRACK
This preserves tracking for your web services while reducing tracking overhead for other traffic. Keep in mind that untracked packets will not match stateful rules (-m state / -m conntrack), so make sure the rest of your firewall policy doesn't rely on tracking for that traffic.
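To confirm the rules are in place and that the table actually shrinks afterwards, you can watch the packet counters on the raw table and the live entry count:

iptables -t raw -L -n -v
watch -n5 cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count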