How to Configure Multiple IP Fallback for a Single Hostname in /etc/hosts File


When dealing with high-availability database setups, applications often need to connect to either a primary or secondary database server. The traditional /etc/hosts approach has limitations since it doesn't natively support IP failover mechanisms.

The example configuration:

141.131.286.1   abc.efg.datastore.com   #primary
141.131.286.237 abc.efg.datastore.com   #secondary

won't work as expected because:

  • Most systems only use the first matching entry
  • There's no built-in health checking or failover
  • Hosts-file lookups don't rotate through multiple IPs the way DNS answers can
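
To see why, here is a simplified Python model of first-match resolution against a hosts file (a sketch of the behavior, not the actual resolver code):

```python
def first_match(hosts_text, name):
    """Return the first IP mapped to `name`, mimicking first-match resolution."""
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()  # drop comments, tokenize
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]  # first match wins; later entries are ignored
    return None

hosts = """\
141.131.286.1   abc.efg.datastore.com   #primary
141.131.286.237 abc.efg.datastore.com   #secondary
"""
print(first_match(hosts, "abc.efg.datastore.com"))  # → 141.131.286.1
```

The secondary entry is never consulted, no matter what state the primary is in.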

Option 1: DNS Round Robin

Configure your DNS server to return both IPs:

abc.efg.datastore.com. IN A 141.131.286.1
abc.efg.datastore.com. IN A 141.131.286.237

Many client libraries will try the returned addresses in order and fall back to the next if a connection fails. Verify this for your client, though: not every library implements address fallback.
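
The fallback those clients implement looks roughly like the following sketch (not any particular library's code). The `connect` parameter defaults to `socket.create_connection` but is injectable so the logic can be exercised without a live server:

```python
import socket

def connect_with_fallback(addresses, port, timeout=2.0,
                          connect=socket.create_connection):
    """Try each address in order and return the first successful connection."""
    last_err = None
    for addr in addresses:
        try:
            return connect((addr, port), timeout)
        except OSError as err:
            last_err = err  # remember the failure and try the next address
    if last_err is None:
        last_err = OSError("no addresses to try")
    raise last_err
```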

Option 2: Use HAProxy

For more control, set up HAProxy as a local forwarder:

frontend db_frontend
    bind 127.0.0.1:3306
    default_backend db_backend

backend db_backend
    server primary 141.131.286.1:3306 check
    server secondary 141.131.286.237:3306 backup

Then point your application at 127.0.0.1:3306. The check option enables health checks on the primary, and HAProxy sends traffic to the backup server only while the primary is down.

Option 3: Custom Resolver Script

Create a wrapper script that implements your failover logic:

#!/bin/bash
# Print the address of the first database server that accepts connections
PRIMARY="141.131.286.1"
SECONDARY="141.131.286.237"

if nc -z -w 2 "$PRIMARY" 3306; then
    echo "$PRIMARY"
else
    echo "$SECONDARY"
fi

Use these commands to verify:

# Test DNS resolution
dig +short abc.efg.datastore.com

# Test connectivity
timeout 2 bash -c "</dev/tcp/141.131.286.1/3306" && echo "Primary OK" || echo "Primary DOWN"

Implement monitoring to track failovers:

#!/bin/bash
# Count log lines showing the app fell back to the secondary
grep -c "Connected to secondary" /var/log/app.log
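
That count can feed a simple alerting check. Here is a Python sketch that assumes the application writes a line containing "Connected to secondary" on each failover (the log format is an assumption):

```python
def count_failovers(log_text, marker="Connected to secondary"):
    """Count log lines that indicate a fallback to the secondary server."""
    return sum(1 for line in log_text.splitlines() if marker in line)
```

A cron job could run this over recent log output and alert when the count exceeds a threshold.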

The standard /etc/hosts file doesn't support automatic failover between multiple IP addresses assigned to the same hostname. When you list multiple entries like this:

141.131.286.1   abc.efg.datastore.com   #primary
141.131.286.237 abc.efg.datastore.com   #secondary

The system will only use the first entry it finds, completely ignoring subsequent entries for the same hostname. This creates a single point of failure in your architecture.

Option 1: DNS Round Robin with Low TTL

The most robust solution would be to configure DNS round-robin with a short TTL:

datastore.example.com.  300  IN  A  141.131.286.1
datastore.example.com.  300  IN  A  141.131.286.237

With this approach, clients that receive both records can try the next IP when the first fails. The 300-second TTL keeps cached answers fresh, so a record change propagates within five minutes, though actual failover speed still depends on client behavior.

Option 2: Using Local DNS Resolver with Fallback

For systems where you can't modify public DNS, configure a local resolver such as dnsmasq to answer for the name itself. Note that dnsmasq's server=/domain/ip directive only forwards queries for that domain to an upstream DNS server at that IP; to return the addresses directly, use address= entries instead (recent dnsmasq versions return all matching records):

# /etc/dnsmasq.conf
address=/datastore.example.com/141.131.286.1
address=/datastore.example.com/141.131.286.237

Option 3: Application-Level Retry Logic

Implement connection retries in your application code:

import socket

def connect_to_database(port=3306, timeout=2):
    endpoints = ["141.131.286.1", "141.131.286.237"]
    for endpoint in endpoints:
        try:
            # Plain TCP here; substitute your database driver's connect call
            return socket.create_connection((endpoint, port), timeout=timeout)
        except OSError:
            continue  # endpoint unreachable, try the next one
    raise ConnectionError("All database endpoints failed")

The hosts file was designed for simple static mappings, not for high-availability scenarios, and modern systems need more sophisticated approaches. Its key limitations:

  • No built-in health checking
  • No automatic failover mechanism
  • Changes require manual intervention
  • No load balancing capabilities

If you're constrained to using /etc/hosts, you can implement a cron job to switch entries:

#!/bin/bash
# Ping is a weak proxy for database health; prefer a port check in practice
if ! ping -c 1 141.131.286.1 &> /dev/null; then
    sed -i 's/^141\.131\.286\.1/141.131.286.237/' /etc/hosts
fi

But this is not recommended for production environments.
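
If you must go this route, a slightly safer variant rewrites the entry in either direction, so the hostname points back at the primary once it recovers. A minimal Python sketch, using the hostname and IPs from above; the rewrite_hosts helper and the idea of feeding it a health-check result are illustrative, and the real check should probe the database port rather than ping:

```python
PRIMARY = "141.131.286.1"
SECONDARY = "141.131.286.237"
HOSTNAME = "abc.efg.datastore.com"

def rewrite_hosts(hosts_text, primary_up):
    """Point HOSTNAME at the primary when it is healthy, else at the secondary."""
    target = PRIMARY if primary_up else SECONDARY
    out = []
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()
        if len(fields) >= 2 and HOSTNAME in fields[1:]:
            line = f"{target}\t{HOSTNAME}"  # rewrite only our hostname's entry
        out.append(line)
    return "\n".join(out) + "\n"
```

Because the function is idempotent and works in both directions, running it from cron will not accumulate broken entries, but the approach still shares all the drawbacks listed above.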