When you need to distribute traffic across multiple servers under the same domain, a single A record pointing at one machine won't suffice. Here's how to achieve this with modern infrastructure patterns.
; Example DNS A record configuration for round-robin load balancing
example.com.    IN    A    192.0.2.1
example.com.    IN    A    192.0.2.2
example.com.    IN    A    192.0.2.3
For more advanced routing, consider weighted records:
; Weighted DNS routing -- note that weights are not standard zone-file
; syntax; providers such as AWS Route 53 attach them per record set
www.example.com.    IN    A    192.0.2.1    ; weight: 60
www.example.com.    IN    A    192.0.2.2    ; weight: 40
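Conceptually, weighted routing picks each record with probability proportional to its weight. A minimal Python sketch of that selection (the IPs and the 60/40 weights mirror the example above; the pool itself is hypothetical):

```python
import random

# Hypothetical backend pool mirroring the weighted records above:
# roughly 60% of lookups resolve to .1, 40% to .2.
backends = ["192.0.2.1", "192.0.2.2"]
weights = [60, 40]

def resolve():
    """Pick one backend, weighted like a 60/40 routing policy."""
    return random.choices(backends, weights=weights, k=1)[0]

# Over many lookups the split approaches the configured ratio.
counts = {ip: 0 for ip in backends}
for _ in range(10_000):
    counts[resolve()] += 1
print(counts)
```

Note this is a per-lookup probabilistic choice; real DNS providers also factor in TTLs and resolver caching, so observed traffic splits are coarser than the configured weights.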
For path-based routing between a shared host and a dedicated server, a reverse proxy can split requests by URL:
# Nginx configuration example ('shared-host-server' and
# 'dedicated-server' are placeholder upstream hostnames)
server {
    listen 80;
    server_name example.com;

    # Static assets stay on the shared host
    location /static/ {
        proxy_pass http://shared-host-server;
    }

    # API traffic goes to the dedicated server
    location /api/ {
        proxy_pass http://dedicated-server;
    }
}
For serverless routing solutions:
// Cloudflare Worker script (Service Worker syntax)
addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
    const url = new URL(request.url)
    // Preserve both the path and the query string when forwarding
    if (url.pathname.startsWith('/app/')) {
        return fetch('https://dedicated-server.example.com' + url.pathname + url.search, request)
    }
    return fetch('https://shared-host.example.com' + url.pathname + url.search, request)
}
When using multiple backends, they need a way to keep their data in sync. MySQL primary-replica replication is one common pattern:

-- MySQL replica setup (MySQL 8.0.23+ prefers CHANGE REPLICATION SOURCE TO
-- with SOURCE_* options; the legacy syntax below still works)
CHANGE MASTER TO
    MASTER_HOST='primary-server',
    MASTER_USER='replica_user',
    MASTER_PASSWORD='password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=107;
When you need to distribute traffic for a single domain across multiple servers (like shared hosting and a semi-dedicated server), you're essentially dealing with two fundamental approaches:
- DNS-based routing
- Reverse proxy configuration
The simplest method is DNS round robin: assigning multiple A records to the same name, exactly as in the zone-file example shown earlier. Resolvers rotate through the listed IPs, so requests are spread roughly evenly -- though record ordering is resolver-dependent, so the split is only approximate. This method also lacks health checks and session persistence.
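The limitation is easy to see in a sketch: plain round robin just cycles through the record set with no awareness of server health. A minimal Python illustration, reusing the three example IPs from the A records shown earlier:

```python
from itertools import cycle

# The same three example A records from the zone file above.
records = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
rotation = cycle(records)

def next_server():
    """Return the next IP in rotation -- no health check, no stickiness."""
    return next(rotation)

print([next_server() for _ in range(6)])
# → ['192.0.2.1', '192.0.2.2', '192.0.2.3', '192.0.2.1', '192.0.2.2', '192.0.2.3']
# If 192.0.2.2 goes down, it stays in rotation: 1 request in 3 still hits it.
```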
For more control, set up a load balancer using Nginx:
http {
    upstream backend {
        server shared-server.example.com;
        server semi-dedicated.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
Major cloud providers offer managed solutions:
- AWS: Route 53 weighted routing + ELB
- Google Cloud: Cloud Load Balancing
- Azure: Traffic Manager
Example AWS Route 53 configuration:
{
  "Comment": "Weighted routing for multiple servers",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com",
      "Type": "A",
      "SetIdentifier": "Shared Hosting",
      "Weight": 70,
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "shared-elb-1234567890.us-west-2.elb.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
When dealing with stateful applications, implement sticky sessions:
upstream backend {
    ip_hash;  # hash the client IP so repeat visitors hit the same backend
    server shared-server.example.com;
    server semi-dedicated.example.com;
}
Or use application-based session tracking with cookies.
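The idea behind ip_hash can be sketched in a few lines: hash the client address and use it to pick a backend deterministically, so every request from that address lands on the same server. This is only an illustration of the deterministic-mapping concept -- nginx's actual ip_hash uses its own hash over the first three octets of an IPv4 address, and the backend names below are the placeholders from the config above:

```python
import hashlib

# Placeholder backend pool, mirroring the upstream block above.
backends = ["shared-server.example.com", "semi-dedicated.example.com"]

def pick_backend(client_ip: str) -> str:
    """Map a client IP to a backend deterministically, so repeat
    requests from the same address always reach the same server."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return backends[digest[0] % len(backends)]

print(pick_backend("203.0.113.7"))  # same input -> same backend, every time
```

The trade-off is the same as with nginx's ip_hash: clients behind a large NAT all map to one backend, which can skew the load.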
Implement health checks to automatically stop sending traffic to failing servers. Stock open-source nginx only supports passive checks via max_fails/fail_timeout; active probing requires a third-party module such as nginx_upstream_check_module (NGINX Plus ships its own health_check directive instead):

upstream backend {
    # Passive checks: after 3 failures, skip the server for 30s
    server shared-server.example.com max_fails=3 fail_timeout=30s;
    server semi-dedicated.example.com max_fails=3 fail_timeout=30s;

    # Active probing (requires nginx_upstream_check_module):
    # check interval=5000 rise=2 fall=3 timeout=1000;
}
For global traffic, consider geographic DNS routing:
{
  "Comment": "Geo routing configuration",
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "example.com",
      "Type": "A",
      "SetIdentifier": "US-East",
      "GeoLocation": {
        "CountryCode": "US",
        "SubdivisionCode": "VA"
      },
      "TTL": 60,
      "ResourceRecords": [{
        "Value": "192.0.2.1"
      }]
    }
  }]
}
Always verify your setup with tools like:
- dig example.com
- nslookup example.com
- curl -v http://example.com
Monitor traffic distribution using server logs or analytics tools.
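One low-tech way to check the actual split is to tally requests per upstream from your access logs. A minimal sketch, assuming each log line ends with the upstream address (for nginx, that would mean adding $upstream_addr to a custom log_format; the sample lines here are hypothetical):

```python
from collections import Counter

# Hypothetical access-log lines whose last field is the upstream address.
log_lines = [
    '203.0.113.5 - - "GET / HTTP/1.1" 200 192.0.2.1:80',
    '203.0.113.6 - - "GET / HTTP/1.1" 200 192.0.2.2:80',
    '203.0.113.5 - - "GET /api HTTP/1.1" 200 192.0.2.1:80',
]

def upstream_counts(lines):
    """Tally how many requests each upstream served."""
    return Counter(line.rsplit(None, 1)[-1] for line in lines)

print(upstream_counts(log_lines))  # here: 192.0.2.1:80 served 2 of 3 requests
```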