When working with virtual machines (VMs) under hypervisors like KVM or Xen, time management presents unique challenges. The core question is whether running individual NTP (Network Time Protocol) services in every guest VM is necessary, or if inheriting the host's system time would suffice.
During boot, VMs do inherit the host's system time through the hypervisor. However, this initial synchronization isn't maintained automatically. The virtualized hardware clock in each guest will drift independently due to:
- CPU scheduling variations between host and guests
- Different clock interrupt handling
- Potential VM pauses during live migration
In our KVM testing environment, we observed the following drift pattern:
# Sample drift measurement over 24 hours
Host System: 0.000002s drift
VM1 (no NTP): 3.784s drift
VM2 (no NTP): 2.956s drift
VM3 (with NTP): 0.000005s drift
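One way to reproduce such a measurement is to query a reference server without adjusting the local clock; both commands below are query-only sketches (assuming the deprecated ntpdate utility or chrony is installed):
# Print the current offset from a public pool without stepping the clock
ntpdate -q pool.ntp.org
# Equivalent check with chrony (if chronyd is not already steering the clock)
chronyd -Q 'server pool.ntp.org iburst'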
For most production environments, we recommend:
- Host-level NTP: Configure accurate time on the physical host
- Guest NTP services: Run lightweight NTP clients in guests
- Hypervisor time sources: Enable appropriate time sync features
For KVM environments, consider these qemu-kvm parameters:
-rtc base=utc,clock=host,driftfix=slew
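In a hand-launched guest this flag might be combined with the rest of the command line as follows (memory size and disk path are illustrative):
# Start a KVM guest whose RTC tracks the host clock, with drift corrected by slewing
qemu-system-x86_64 -enable-kvm -m 2048 \
    -rtc base=utc,clock=host,driftfix=slew \
    -drive file=/var/lib/libvirt/images/guest.qcow2,format=qcow2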
And in the guest VM, use chrony (lightweight NTP client):
# /etc/chrony.conf
pool pool.ntp.org iburst
makestep 1.0 3
For Xen domains, set the guest clock source explicitly in the domU configuration:
# In domU configuration
extra = "clocksource=tsc nohpet"
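As a sketch, the relevant fragment of a domU configuration file might look like this (file path and guest name are illustrative):
# /etc/xen/guest1.cfg (fragment)
name  = "guest1"
extra = "clocksource=tsc nohpet"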
Combine with a minimal NTP client configuration:
# /etc/ntp.conf (minimal)
server host-gateway iburst
restrict default nomodify notrap noquery
Running NTP in each VM has minimal overhead:
- Memory: ~5MB per client
- CPU: <0.1% average load
- Network: 1 packet every 64-1024 seconds
For containerized environments or high-density VM deployments:
- Consider using chrony's shared memory driver
- Evaluate PTP (Precision Time Protocol) for low-latency needs
- Use host time injection for short-lived VMs (see the sketch below)
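On recent KVM setups the last two points can be combined: the guest's ptp_kvm driver exposes the host clock as a PTP device that chrony can use directly as a reference. A minimal sketch, assuming the module is available and appears as /dev/ptp0 in the guest:
# Expose the host clock to the guest as a PTP device
modprobe ptp_kvm
# /etc/chrony.conf: use the host clock as the reference
refclock PHC /dev/ptp0 poll 2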
While it might seem logical for guest VMs to simply inherit the host's system time, the reality is more nuanced due to the virtualized hardware clock abstraction layer.
The virtualization stack introduces several factors that can cause time drift between host and guests:
- CPU scheduling delays in the hypervisor
- Virtual interrupt delivery that is not perfectly aligned with physical interrupts
- Clock source differences (TSC vs. paravirtualized clocks)
For KVM/QEMU environments:
# Enable both kvm-clock and NTP for best results
virsh edit [vm-name]
<clock offset='utc'>
  <timer name='kvmclock' present='yes'/>
</clock>
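Changes made through virsh edit generally take effect only after the domain is fully restarted, for example (domain name is illustrative):
# Restart the domain so the new <clock> settings apply
virsh shutdown vm-name
virsh start vm-name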
For Xen domains:
# In domU configuration
extra = "clocksource=tsc xen.independent_wallclock=1"
Running NTP in each VM does consume additional resources, but modern implementations like chrony or systemd-timesyncd have minimal overhead:
# Example chrony minimal configuration (chrony.conf)
pool 2.debian.pool.ntp.org iburst
makestep 1.0 3
driftfile /var/lib/chrony/drift
rtcsync
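If systemd-timesyncd is preferred over chrony, a comparable minimal setup might look like this (a sketch; the pool name mirrors the Debian example above):
# /etc/systemd/timesyncd.conf
[Time]
NTP=2.debian.pool.ntp.org
# Enable the service and confirm synchronization
timedatectl set-ntp true
timedatectl status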
For high-performance environments where NTP overhead is a concern:
- Use PTP (Precision Time Protocol) with hardware support
- Implement time injection from the hypervisor (KVM's kvm-clock)
- Consider using chrony in broadcast mode for large VM deployments
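A related option for large deployments is to let the host serve NTP to its guests over the virtual network, so each guest synchronizes locally instead of reaching public pools. A sketch with chrony, assuming the default libvirt NAT network on 192.168.122.0/24:
# Host /etc/chrony.conf: allow guests on the virtual network to query this server
allow 192.168.122.0/24
# Guest /etc/chrony.conf: use the host (bridge address) as the only time source
server 192.168.122.1 iburst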
Key commands to verify time synchronization:
# Check current offset
chronyc tracking
ntpq -p
# Verify clock sources
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
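On a KVM guest using the paravirtualized clock, the last command typically reports kvm-clock, and chronyc tracking shows the remaining offset from the selected source in its "System time" line.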