Combining multiple physical servers to function as a single logical unit is achieved through server clustering technologies. This approach aggregates compute resources (CPU, RAM, storage) into a unified pool that can host many virtual machines with greater capacity and fault tolerance than any individual server provides.
Several mature solutions exist for creating such cluster environments:
- VMware vSphere with vMotion and DRS: vMotion live-migrates running VMs between hosts, and DRS balances load across the cluster automatically
- Microsoft Failover Clustering: Provides high availability for Windows Server environments
- Proxmox VE: Open-source solution with built-in clustering capabilities
- Red Hat Cluster Suite: Enterprise-grade Linux clustering
Here's how to set up a basic 3-node Proxmox cluster:
# On the first node:
pvecm create mycluster
# On each node joining the cluster:
pvecm add IP_OF_FIRST_NODE
# Verify cluster status:
pvecm status
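Before proceeding, it is worth confirming that every node has joined and that the cluster is quorate; a quick check (the expected counts assume the three-node example above):
# All three nodes should appear in the membership list
pvecm nodes
# pvecm status should report "Quorate: Yes"; with 3 nodes,
# quorum needs 2 votes, so the cluster survives one node failure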
Once nodes are clustered, you can define storage and allocate resources. Note that a ZFS pool is local to the node it is created on, not shared; for the cluster to use it everywhere, create an identically named pool on each node (pairing it with storage replication), or choose genuinely shared storage such as NFS, iSCSI, or Ceph:
# Create a ZFS storage pool (repeat with the same name on every node)
zpool create clusterpool mirror /dev/sdb /dev/sdc
# Configure in Proxmox GUI:
Datacenter -> Storage -> Add -> ZFS
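The same storage definition can also be registered from the CLI with pvesm; a sketch in which the storage ID zfs-cluster and the node names are illustrative assumptions:
# Register the pool as cluster-wide VM/container storage, limited to nodes that have it
pvesm add zfspool zfs-cluster --pool clusterpool --content images,rootdir --nodes node1,node2,node3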
When launching new VMs you choose the target node yourself; Proxmox has no DRS-style automatic placement, but the HA manager can restart or relocate VMs when a node fails:
# Create a VM with an 8 GB disk on ZFS-backed storage
# (zfspool storage holds disks as raw zvols, so format=qcow2 does not apply)
qm create 100 --name vm1 --memory 4096 --cores 2 \
--net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
--scsi0 local-zfs:8
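To have the cluster restart this VM elsewhere when its host fails, register it with the Proxmox HA manager; a minimal sketch:
# Put VM 100 under HA control and keep it in the started state
ha-manager add vm:100 --state started
# Check placement and HA state
ha-manager status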
Key factors affecting clustered VM performance:
- Network latency and bandwidth between nodes (10 Gbps+ recommended; see the quick check after this list)
- Storage synchronization overhead
- CPU architecture compatibility
- Memory ballooning configuration
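The first factor is easy to verify empirically; latency and throughput on the cluster network can be measured with standard tools (node2 is a placeholder hostname):
# Round-trip latency between nodes; sub-millisecond is typical on a healthy 10 Gbps LAN
ping -c 10 node2
# Throughput: start "iperf3 -s" on node2, then run from node1
iperf3 -c node2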
Example of a manual live migration between nodes (--online keeps the VM running during the move; --with-local-disks also migrates disks that sit on node-local storage):
qm migrate 100 node2 --online --with-local-disks
At its core, this is about creating a unified computing resource pool from multiple physical servers, a concept that goes by several names in the infrastructure world:
- Server clustering
- High-availability virtualization hosts
- Distributed resource scheduling
- Compute fabric
Beyond the hypervisor-specific stacks above, general-purpose cluster managers can coordinate resources on Linux; Pacemaker is the most common choice:
# Example of using Pacemaker for cluster management (pcs 0.9 syntax;
# pcs 0.10+ drops the --name flag and requires "pcs host auth" first)
pcs cluster setup --name MY_CLUSTER node1 node2 node3
pcs cluster start --all
# Disabling fencing is acceptable only for a lab; production clusters need STONITH
pcs property set stonith-enabled=false
# Floating IP that fails over with the cluster
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s
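Pacemaker can also manage the virtual machines themselves through the ocf:heartbeat:VirtualDomain resource agent; a sketch, assuming a libvirt/KVM guest defined at the XML path shown:
# Run a KVM guest as a cluster resource; allow-migrate enables live migration
pcs resource create vm1 ocf:heartbeat:VirtualDomain \
  hypervisor="qemu:///system" config="/etc/libvirt/qemu/vm1.xml" \
  migration_transport=ssh \
  op monitor interval=30s \
  meta allow-migrate=true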
For VMware environments, vSphere's High Availability (HA) and Distributed Resource Scheduler (DRS) provide this functionality:
# Sample PowerCLI commands to enable HA and DRS
Connect-VIServer -Server vcenter.example.com
Get-Cluster "Production" | Set-Cluster -HAEnabled $true -HAAdmissionControlEnabled $true -Confirm:$false
Get-Cluster "Production" | Set-Cluster -DRSEnabled $true -DRSAutomationLevel FullyAutomated -Confirm:$false
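With admission control enabled, vSphere reserves enough spare capacity in the cluster to restart the VMs of a failed host, so DRS will not pack hosts to the point where failover becomes impossible.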
Proxmox VE offers comparable functionality through the pvecm clustering and HA manager commands shown earlier.
For modern deployments, consider solutions like:
- Nutanix AHV
- Microsoft Storage Spaces Direct
- VMware vSAN
- Ceph with KVM (see the Proxmox-integrated sketch below)
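Since Ceph is integrated into Proxmox VE, the same three nodes can also provide hyper-converged shared storage. A minimal sketch, in which the 10.10.10.0/24 storage network, the spare disk /dev/sdd, and the pool name vmpool are illustrative assumptions (a production deployment needs more planning around monitors, managers, and failure domains):
# On every node: install the Ceph packages
pveceph install
# On the first node: initialize Ceph on the dedicated storage network
pveceph init --network 10.10.10.0/24
# On each node: create a monitor, then turn a spare disk into an OSD
pveceph mon create
pveceph osd create /dev/sdd
# Create a pool, then add it cluster-wide via Datacenter -> Storage -> Add -> RBD
pveceph pool create vmpool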