When examining how CDNs deliver location-specific IP addresses, the process begins with DNS resolution architecture. The key misunderstanding lies in assuming all DNS queries follow the same resolution path. In reality, authoritative DNS servers can observe the resolver's IP address and make routing decisions accordingly.
```
// Simplified DNS resolution flow with geo-awareness
1. Client -> local DNS cache (the rest of the chain runs only when the cached entry has expired)
2. -> ISP recursive resolver
3. -> Root server -> TLD server
4. -> Authoritative NS (your dns1.example.com)
```
The critical insight is that authoritative nameservers see the recursive resolver's IP (your ISP's DNS server) rather than the end client's IP. CDNs maintain IP-to-location mappings for these resolvers.
Major CDNs use these technical approaches:
- EDNS Client Subnet (RFC 7871): the resolver forwards a truncated prefix of the client's IP to the authoritative server
- Resolver geo-database: map known resolver IPs to approximate locations
- Anycast routing: the same IP is announced from many PoPs, so queries reach the nearest one
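The resolver geo-database approach boils down to a prefix lookup. A minimal sketch, assuming a hand-built map (real CDNs use commercial GeoIP feeds; the prefixes and regions below are purely illustrative, using documentation address ranges):

```python
import ipaddress

# Illustrative resolver-prefix-to-region map -- not real CDN data
RESOLVER_REGIONS = {
    ipaddress.ip_network('203.0.113.0/24'): 'EU',   # pretend EU ISP resolver range
    ipaddress.ip_network('198.51.100.0/24'): 'NA',  # pretend NA ISP resolver range
}

def region_for_resolver(resolver_ip: str) -> str:
    """Map a recursive resolver's IP to a coarse region."""
    addr = ipaddress.ip_address(resolver_ip)
    for net, region in RESOLVER_REGIONS.items():
        if addr in net:
            return region
    return 'default'
```

For example, `region_for_resolver('203.0.113.7')` returns `'EU'`, while any unmapped resolver falls through to `'default'`.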
```python
# Python pseudo-code for a geo-aware DNS response
# (geoip_db stands in for a GeoIP lookup library)
def handle_dns_query(query, resolver_ip):
    resolver_location = geoip_db.lookup(resolver_ip)
    if resolver_location.continent == 'EU':
        return '31.13.92.36'   # Frankfurt edge node
    elif resolver_location.continent == 'NA':
        return '31.13.76.68'   # Seattle edge node
    else:
        return '157.240.2.35'  # Default node
```
Examining Facebook's edge network reveals actual implementation patterns:
| Resolved IP | Reverse DNS | Location Hint |
|---|---|---|
| 31.13.92.36 | frt3.facebook.com | Frankfurt |
| 31.13.76.68 | sea1.facebook.com | Seattle |
| 31.13.69.228 | iad3.facebook.com | Virginia |
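The location hints above follow the common airport-code naming convention for edge hostnames. A hedged sketch of decoding them (the mapping table is an assumption for illustration, not Facebook's actual scheme):

```python
# Assumed airport-code-to-city map; real edge naming schemes vary per CDN
AIRPORT_HINTS = {'frt': 'Frankfurt', 'sea': 'Seattle', 'iad': 'Virginia'}

def location_hint(ptr_name: str) -> str:
    label = ptr_name.split('.')[0]       # 'frt3.facebook.com' -> 'frt3'
    code = label.rstrip('0123456789')    # strip the node index -> 'frt'
    return AIRPORT_HINTS.get(code, 'unknown')
```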
While low TTL values (e.g. 300s) let geo-routing changes propagate quickly, they increase query load on both resolvers and authoritative servers. Modern CDNs balance this with:
- Smart TTL reduction during maintenance
- DNS pre-fetching by browsers
- Anycast failover mechanisms
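The TTL trade-off is ultimately about how long resolvers hold an answer before re-querying. A minimal TTL-respecting cache sketch, of the kind an LDNS keeps per record:

```python
import time

class DnsCache:
    """Minimal TTL-respecting cache, as an LDNS would keep per record."""

    def __init__(self):
        self._store = {}

    def put(self, name, ip, ttl_seconds):
        # Store the answer together with its expiry deadline
        self._store[name] = (ip, time.monotonic() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry and time.monotonic() < entry[1]:
            return entry[0]
        return None  # expired or missing: the resolver must re-query upstream
```

With a 300s TTL, every client behind that resolver is served the cached IP for up to five minutes before the authoritative server is consulted again.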
To verify your CDN's geo-routing:
```shell
# Using dig with different resolvers
dig @8.8.8.8 example.com
dig @1.1.1.1 example.com
dig @resolver1.opendns.com example.com
```
Caching adds another layer to this picture: DNS responses are cached at the LDNS level, not at the end client. When your ISP's DNS server caches a response, every client behind that LDNS receives the same IP, while LDNS servers in other regions cache different, region-appropriate answers.
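This shared-cache behavior can be modeled in a few lines. The regions and documentation-range IPs below are illustrative:

```python
# Toy model: the authoritative NS answers based on the *resolver's* region,
# and that answer is then cached at the resolver for all of its clients.
AUTHORITATIVE_ANSWERS = {'EU': '192.0.2.10', 'NA': '203.0.113.5'}

class LDNS:
    def __init__(self, region):
        self.region = region
        self.cache = {}

    def resolve(self, qname):
        if qname not in self.cache:  # cache miss: ask the authoritative NS
            self.cache[qname] = AUTHORITATIVE_ANSWERS[self.region]
        return self.cache[qname]

eu_isp = LDNS('EU')
na_isp = LDNS('NA')
# Every client behind eu_isp sees '192.0.2.10';
# every client behind na_isp sees '203.0.113.5'.
```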
Example of geo-routing logic in a DNS server:
```javascript
function getGeoIP(request) {
  const ldnsIP = request.remoteAddress;  // the recursive resolver's IP, not the client's
  const region = geoLookup(ldnsIP);      // geoLookup() stands in for a GeoIP library
  return region === 'EU' ? '192.0.2.10'
       : region === 'NA' ? '203.0.113.5'
       :                   '198.51.100.1'; // Default
}
```
Example showing Cloudflare's approach:
```javascript
// Cloudflare-style edge routing (illustrative address ranges)
const edgeNodes = {
  lax: '104.16.0.0/12',   // Los Angeles
  fra: '2a06:98c0::/29',  // Frankfurt
  hkg: '2400:cb00::/32'   // Hong Kong
};

function selectEdgeNode(ldnsIP) {
  const prefix = getPrefix(ldnsIP);  // aggregate the resolver IP to a routing prefix
  return findClosestNode(prefix);    // consults measured network-latency maps
}
```
To verify geo-routing is working:
```shell
# Check from public resolvers located in different regions
dig +short @8.8.8.8 example.com
dig +short @1.1.1.1 example.com
dig +short @ns1.orange.fr example.com
# Test EDNS Client Subnet handling
# (+subnet=0.0.0.0/0 asks the resolver NOT to forward your subnet)
dig +subnet=0.0.0.0/0 example.com
```
CDNs use these TTL approaches:
| Strategy | TTL Value | Use Case |
|---|---|---|
| Rapid Failover | 60s | Critical infrastructure |
| Regional Balance | 300s | Most CDN configurations |
| Global Cache | 86400s | Static global assets |
This explains why you'll sometimes see very low TTLs (even 0) for CDN-hosted domains during infrastructure changes.
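In zone-file terms, those strategies map directly to per-record TTLs. A hypothetical snippet using documentation addresses:

```
; Rapid failover: resolvers must re-query every 60 seconds
www.example.com.     60    IN  A  203.0.113.5
; Regional balance: the common CDN default
cdn.example.com.     300   IN  A  192.0.2.10
; Global static assets: cached for a full day
static.example.com.  86400 IN  A  198.51.100.1
```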