When running large-scale network operations like web crawlers that resolve 100k+ domains, traditional UDP-based DNS queries can trigger false positive DDoS detections. Most Linux distributions default to UDP for DNS due to its lower overhead, but this becomes problematic when:
- Query volume exceeds typical user thresholds
- ISP traffic monitoring systems lack proper DNS protocol whitelisting
- glibc's resolver only falls back to TCP when a response arrives truncated, not when UDP queries are silently dropped or rate-limited
The most reliable approach modifies the system's resolver configuration to prefer TCP. Create or edit /etc/resolv.conf:

```
options use-vc
options single-request
nameserver 8.8.8.8
nameserver 1.1.1.1
```
Key directives:
- `use-vc`: forces TCP ("virtual circuit") mode for all queries
- `single-request`: sends A and AAAA lookups sequentially instead of in parallel, reducing the burst traffic that can trip rate limits
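A hedged aside: glibc also reads the `RES_OPTIONS` environment variable when the resolver initializes, so you can trial these flags per-process before touching /etc/resolv.conf. A minimal sketch (the variable must be set before the process performs its first lookup):

```python
import os
import socket

# glibc parses RES_OPTIONS at resolver initialization, so this must run
# before the first DNS lookup in this process.
os.environ["RES_OPTIONS"] = "use-vc single-request"

# Later lookups in this process should now use TCP transport.
print(socket.gethostbyname("localhost"))
```

Note that names served from /etc/hosts (like `localhost` here) never hit the network at all; use a real hostname plus tcpdump to confirm TCP on the wire.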
For more advanced control, consider dnscrypt-proxy:

```toml
listen_addresses = ['127.0.0.1:53']
force_tcp = true
cache = true
cache_min_ttl = 600
```
Test with `dig +tcp example.com` or inspect traffic:

```shell
sudo tcpdump -i any -nn 'port 53 and tcp'
```
For application-level verification (this exercises the glibc resolver; pair it with tcpdump to confirm the transport, since the result itself doesn't reveal it):

```python
import socket

print(socket.getaddrinfo('example.com', 80, proto=socket.IPPROTO_TCP))
```
While TCP adds roughly 30 ms of overhead per lookup, these optimizations help:

```
# In /etc/sysctl.conf
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_fastopen = 3
```
Implement DNS caching with dnsmasq when possible:

```
cache-size=10000
min-cache-ttl=300
```
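The same idea can be approximated inside the crawler itself with a small TTL cache in front of `getaddrinfo`. This is an illustrative sketch (the `cached_getaddrinfo` helper and its parameters are ours, not part of dnsmasq or the stdlib), mirroring `min-cache-ttl` by holding entries for `ttl` seconds:

```python
import socket
import time

_cache = {}  # host -> (timestamp, getaddrinfo result)

def cached_getaddrinfo(host, port=80, ttl=300.0):
    """Return a cached getaddrinfo() result if it is younger than ttl."""
    now = time.monotonic()
    hit = _cache.get(host)
    if hit is not None and now - hit[0] < ttl:
        return hit[1]  # cache hit: no resolver traffic at all
    result = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    _cache[host] = (now, result)
    return result
```

Repeated lookups of the same host within the TTL then generate no DNS traffic, TCP or otherwise.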
For crawlers resolving >1M domains/day:

```yaml
# Kubernetes dnsConfig example
dnsConfig:
  options:
    - name: use-vc
    - name: single-request
    - name: timeout
      value: "5"
```
Consider anycast DNS services like Cloudflare or AWS Route53 that support TCP-based queries by default.
During large-scale domain crawling operations (like processing 100,000+ domains), DNS queries typically default to UDP transport. Many ISPs implement UDP rate-limiting or outright blocking when detecting high-volume UDP DNS traffic, often mistaking it for DDoS activity. The challenge is implementing a solution that:
- Forces TCP fallback without application changes
- Operates at the glibc level
- Maintains system-wide compatibility
The key lies in the /etc/resolv.conf options that control glibc resolver behavior:

```
# Force TCP for all DNS queries
options use-vc
options single-request
options timeout:1
options attempts:1
```
The critical directive is `use-vc` (use virtual circuit), which forces TCP connections for DNS queries. The additional options tighten timeouts and retry counts for TCP-based lookups.
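For intuition about what `use-vc` changes on the wire: DNS over TCP frames each message with a two-byte length prefix (RFC 1035, section 4.2.2), unlike UDP, where the datagram boundary delimits the message. A sketch that builds such a TCP-framed A query by hand (the `build_tcp_dns_query` helper is ours, for illustration only):

```python
import struct

def build_tcp_dns_query(hostname, qid=0x1234):
    """Build a DNS A/IN query framed for TCP: the standard message
    preceded by a two-byte big-endian length (RFC 1035 s4.2.2)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    message = header + qname + struct.pack(">HH", 1, 1)
    return struct.pack(">H", len(message)) + message  # TCP length prefix

query = build_tcp_dns_query("example.com")
print(len(query))  # 31 bytes: 2-byte prefix + 29-byte message
```

This framing, plus the TCP handshake around it, is exactly the per-query overhead that TCP mode introduces over UDP.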
For permanent configuration, modify the NetworkManager or dhclient configuration:

```ini
# For NetworkManager (/etc/NetworkManager/conf.d/dns.conf)
[main]
dns=default
rc-manager=resolvconf

[connection]
connection.mdns=0
ipv4.dns-options=use-vc,single-request
ipv6.dns-options=use-vc,single-request
```
Use tcpdump to confirm TCP usage:

```shell
sudo tcpdump -nn -i any 'port 53 and tcp'
```
For programmatic verification:

```c
#include <netdb.h>
#include <stdio.h>

int main(void) {
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;  /* Filters results; doesn't force TCP DNS */

    int rc = getaddrinfo("example.com", NULL, &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }
    printf("socktype: %d\n", res->ai_socktype);  /* Examine resolver behavior */
    freeaddrinfo(res);
    return 0;
}
```
TCP DNS has higher overhead. Consider these optimizations:

```
# /etc/sysctl.conf adjustments
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_tw_reuse = 1
```
For advanced cases, consider a local DNS proxy:

```shell
# Using dns-over-https
docker run -d --name doh-proxy \
  -p 53:53/udp -p 53:53/tcp \
  satishweb/doh-proxy \
  -listen 0.0.0.0:53 \
  -upstream https://dns.google/dns-query
```