When choosing between West Coast and East Coast hosting from New York, we must consider the speed of light in fiber optics (approximately 200,000 km/s). The straight-line distance between NYC and:
- East Coast DC (e.g., Ashburn, VA): ~350km → ~1.75ms one-way
- West Coast DC (e.g., San Jose, CA): ~4,100km → ~20.5ms one-way
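These one-way figures are just distance divided by propagation speed. A minimal sketch of the arithmetic (distances are the approximations above):

```javascript
// Propagation speed of light in fiber is roughly 2/3 of c in vacuum:
// ~200,000 km/s, i.e. 200 km per millisecond
const FIBER_KM_PER_MS = 200;

// One-way propagation delay in ms for a straight-line distance in km
function oneWayDelayMs(distanceKm) {
  return distanceKm / FIBER_KM_PER_MS;
}

console.log(oneWayDelayMs(350));  // Ashburn: 1.75
console.log(oneWayDelayMs(4100)); // San Jose: 20.5
```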
Actual latency is higher due to routing overhead, switching and queuing delays, and fiber paths that are longer than the straight-line distance:

```
// Sample traceroute results (ms)
NYC → Ashburn:
  Min: 8.2   Avg: 10.5   Max: 15.1
NYC → San Jose:
  Min: 48.7  Avg: 52.3   Max: 68.9
```
The TCP three-way handshake demonstrates the latency multiplier effect. Each leg of the handshake costs half an RTT, so the client can send its first byte of application data only after one full RTT:

```
// East Coast (10ms RTT)
SYN → (5ms) → SYN-ACK → (5ms) → ACK   = 10ms before first data
// West Coast (52ms RTT)
SYN → (26ms) → SYN-ACK → (26ms) → ACK = 52ms before first data
```
For a simple query exchange requiring 10 round trips:

```
East Coast: 10 × 10ms = 100ms
West Coast: 10 × 52ms = 520ms (5.2x slower)
```
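The multiplier is simply round trips times RTT; a small helper makes the scaling explicit (RTT values taken from the traceroute averages above, rounded):

```javascript
// Total wire time for an exchange that needs `roundTrips` sequential RTTs
function sequentialLatencyMs(rttMs, roundTrips) {
  return rttMs * roundTrips;
}

const east = sequentialLatencyMs(10, 10); // 100ms
const west = sequentialLatencyMs(52, 10); // 520ms
console.log(west / east);                 // 5.2
```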
Modern protocols help but don't eliminate geography:
```javascript
// HTTP/2 multiplexing helps, but...
const latencyImpact = (baseRTT * requiredRoundTrips) / multiplexFactor;
// Real-world tests still show a 2-3x difference
```
For static content, consider this Cloudflare Workers example:

```javascript
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Serve from the nearest edge location
  const asset = await fetch(request)
  // Headers on a fetched Response are immutable; clone before modifying
  const response = new Response(asset.body, asset)
  response.headers.set('x-edge-location', request.cf.colo)
  return response
}
```
Based on Amazon's oft-cited finding that every 100ms of added latency cost roughly 1% of sales:

```javascript
const revenueImpact = baseRevenue * 0.01 * (additionalLatency / 100);
// 420ms of extra latency → ~4.2% potential revenue impact
```
Here's how to measure actual round-trip times using common tools:
```shell
# Ping test to East Coast DC
ping -c 10 eastcoast.example.com
--- eastcoast.example.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 9.123/11.456/14.789/1.234 ms

# Ping test to West Coast DC
ping -c 10 westcoast.example.com
--- westcoast.example.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 48.456/52.123/55.789/2.345 ms
```
The latency difference becomes critical for database-intensive applications. Consider this MySQL benchmark with 100 sequential queries:
```
# East Coast (10ms latency)
Total execution time: 1.2 seconds

# West Coast (50ms latency)
Total execution time: 5.8 seconds

# Optimized batch query (both locations)
SELECT * FROM users WHERE id IN (1,2,3...100)
Execution time: 0.8s (East), 1.1s (West)
```
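The benchmark's lesson generalizes: N sequential queries pay N round trips, while one batched IN query pays roughly a single round trip plus the server-side work. A back-of-the-envelope model (the ~2ms per-query server time is an assumed constant for illustration, not a number from the benchmark):

```javascript
// Rough latency model: each sequential query pays one full RTT plus
// server work; a batched query pays a single RTT plus all server work.
function sequentialMs(rttMs, queries, serverMsPerQuery) {
  return queries * (rttMs + serverMsPerQuery);
}
function batchedMs(rttMs, queries, serverMsPerQuery) {
  return rttMs + queries * serverMsPerQuery;
}

// 100 queries at 50ms RTT, assuming ~2ms of server work each
console.log(sequentialMs(50, 100, 2)); // 5200 (~5.2s, near the 5.8s measured)
console.log(batchedMs(50, 100, 2));    // 250  (~0.25s)
```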
The West Coast location will experience:
- A longer TCP three-way handshake: ~50ms vs ~10ms before the first byte of data can be sent
- Slower TLS negotiation (2 RTTs for TLS 1.2): ~100ms vs ~20ms
- Reduced TCP window scaling effectiveness on the higher bandwidth-delay-product path
If you must use West Coast hosting, implement these mitigations:
```javascript
// Node.js connection pooling example (mysql package)
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 50,
  host: 'westcoast.example.com',
  connectTimeout: 30000,  // allow for the longer cross-country handshake
  acquireTimeout: 30000
});
```
```nginx
# HTTP/2 Server Push configuration (Nginx)
http2_push_preload on;
location / {
    http2_push /critical.css;
}
```
While East Coast hosting might cost 15-20% more for equivalent specs, the latency savings often justify the premium for latency-sensitive applications. For batch processing or asynchronous workloads, the West Coast option may be more cost-effective.
Consider placing your database on the East Coast while serving static assets from a West Coast origin behind a CDN:
```hcl
# CDN configuration example (AWS CloudFront, Terraform)
resource "aws_cloudfront_distribution" "static_assets" {
  enabled = true
  origin {
    domain_name = "westcoast-s3-bucket.s3.amazonaws.com"
    origin_id   = "west-coast-origin"
  }
  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "west-coast-origin"
    viewer_protocol_policy = "redirect-to-https"
    forwarded_values {
      query_string = false
      cookies { forward = "none" }
    }
  }
  restrictions {
    geo_restriction { restriction_type = "none" }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```