The most reliable way to verify a bare-metal server is through direct hardware inspection. Here's how to check using Linux commands:
# Check CPU model and cores
lscpu | grep -E "Model name|Core\(s\) per socket|Socket\(s\)"
# Check physical memory configuration
sudo dmidecode -t memory | grep -E "Size:|Locator:"
# Verify disk devices (should show physical drives)
lsblk -d -o NAME,ROTA,SIZE,MODEL
# Check for a hypervisor (prints "none" and exits non-zero if bare-metal)
systemd-detect-virt
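The quick checks above can be wrapped in a small Python helper. This is a minimal sketch around systemd-detect-virt only: the tool prints a hypervisor name (e.g. kvm) when virtualized, and prints "none" with a non-zero exit status on bare metal; the helper names are my own.

```python
import subprocess

def detect_virt() -> str:
    """Run systemd-detect-virt and normalize its answer.

    On bare metal the tool prints 'none' and exits non-zero,
    so check=True must NOT be used here.
    """
    try:
        result = subprocess.run(
            ["systemd-detect-virt"],
            capture_output=True,
            text=True,
        )
        return result.stdout.strip() or "none"
    except FileNotFoundError:
        return "unknown"  # systemd tools not installed

def is_bare_metal(answer: str) -> bool:
    # 'none' is the documented bare-metal answer
    return answer == "none"

print(is_bare_metal(detect_virt()))
```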
Virtual environments leave detectable traces in the system. This Python script checks multiple indicators:
import subprocess

def check_virtualization():
    # Command/pattern pairs whose output betrays a hypervisor
    indicators = {
        'dmesg': 'hypervisor',
        'lspci': 'virtio',
        'lsmod': 'kvm',
        'cat /proc/cpuinfo': 'hypervisor',
    }
    for cmd, pattern in indicators.items():
        try:
            output = subprocess.check_output(
                f'{cmd} | grep -i {pattern}',
                shell=True,
                stderr=subprocess.DEVNULL
            )
            if output:
                return f"Virtualization detected via {cmd} ({pattern})"
        except subprocess.CalledProcessError:
            # grep exits non-zero when the pattern is absent
            continue
    return "No virtualization indicators found"

print(check_virtualization())
Virtualized environments typically share network bandwidth. Run this iPerf3 test between your server and a known physical host:
# On server (run as root):
iperf3 -s
# On client machine:
iperf3 -c server-ip -t 60 -P 10
Compare results with your contract's promised bandwidth. Consistent drops during peak hours may indicate sharing.
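With iperf3's --json flag, that comparison against the contracted rate can be scripted. A sketch under the assumption that the contract is stated in Gbps; SAMPLE imitates only the two fields the code reads from the real (much larger) JSON output:

```python
import json

# Hypothetical, truncated stand-in for `iperf3 -c server-ip --json` output
SAMPLE = '{"end": {"sum_received": {"bits_per_second": 9.4e9}}}'

def throughput_gbps(iperf_json: str) -> float:
    """Extract received throughput (Gbps) from iperf3 JSON output."""
    data = json.loads(iperf_json)
    return data["end"]["sum_received"]["bits_per_second"] / 1e9

def meets_contract(measured_gbps: float, promised_gbps: float,
                   tolerance: float = 0.9) -> bool:
    # Allow ~10% headroom for TCP/IP and framing overhead
    return measured_gbps >= promised_gbps * tolerance

print(meets_contract(throughput_gbps(SAMPLE), 10.0))  # prints True
```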
Physical disks show different latency patterns than virtualized storage. Use fio for comprehensive testing:
# Sequential read test
fio --name=seqread --ioengine=libaio --rw=read --bs=128k \
--numjobs=1 --size=4G --runtime=60 --time_based \
--direct=1 --group_reporting
Look for consistent latency values throughout the test. Virtualized storage often shows periodic spikes.
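As a rough cross-check of fio's latency numbers, the sketch below times repeated reads of a file and counts outliers. Without O_DIRECT these reads largely hit the page cache, so treat it as a spike detector rather than a storage benchmark; the 5x-median threshold is an arbitrary choice of mine:

```python
import os
import statistics
import time

def read_latencies(path: str, block_size: int = 4096,
                   samples: int = 200) -> list:
    """Time repeated reads of the first block of `path`."""
    times = []
    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(samples):
            os.lseek(fd, 0, os.SEEK_SET)
            start = time.perf_counter()
            os.read(fd, block_size)
            times.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return times

def spike_count(latencies, factor: float = 5.0) -> int:
    """Count reads slower than `factor` times the median latency."""
    median = statistics.median(latencies)
    return sum(1 for t in latencies if t > factor * median)
```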
For deeper verification, check kernel-level indicators that differ between physical and virtual machines:
# Check for paravirtualization drivers
ls /sys/bus/virtio/drivers/
# Examine interrupt sources (physical servers list real hardware IRQ lines
# such as IO-APIC/PCI-MSI; virtio entries indicate paravirtual devices)
grep -E "PCI-MSI|IO-APIC|virtio" /proc/interrupts
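The IRQ inspection can also be automated with a small parser that flags virtio entries in /proc/interrupts. The SAMPLE text is a hypothetical, truncated excerpt for illustration; on a live system pass open('/proc/interrupts').read() instead:

```python
def virtio_irq_lines(interrupts_text: str) -> list:
    """Return the /proc/interrupts lines that mention virtio devices."""
    return [line for line in interrupts_text.splitlines()
            if "virtio" in line.lower()]

# Hypothetical excerpt; real files list one column per CPU
SAMPLE = """\
           CPU0       CPU1
  0:         33          0   IO-APIC   2-edge      timer
 24:          5          0   PCI-MSI 49152-edge    virtio0-config
"""

print(virtio_irq_lines(SAMPLE))
```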
When you SSH into your server, run these diagnostic commands to check for virtualization artifacts:
# Check the system manufacturer (VMs report QEMU, VMware, Inc., etc.;
# bare metal shows the real vendor such as Dell or Supermicro)
sudo dmidecode -s system-manufacturer
# Examine CPU flags (the 'hypervisor' flag means the kernel runs in a VM)
grep -m1 flags /proc/cpuinfo
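The flags check is easy to script as well. A sketch that parses cpuinfo-style text; the sample strings below are abbreviated illustrations, not real /proc/cpuinfo contents:

```python
def has_hypervisor_flag(cpuinfo_text: str) -> bool:
    """True if any 'flags' line in the cpuinfo text lists 'hypervisor'."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "hypervisor" in flags:
                return True
    return False

# Abbreviated, made-up flag line for illustration
print(has_hypervisor_flag("flags\t\t: fpu vme hypervisor tsc"))  # prints True
```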
True dedicated servers will show unique hardware signatures:
# List all PCI devices (virtual servers often show fewer devices)
lspci -tv
# Check disk by-id (virtual disks show different naming patterns)
ls -l /dev/disk/by-id/
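A heuristic classifier over those by-id names can separate obviously virtual disks from physical ones. The marker list and the example entries are my own illustrations, not an exhaustive rule:

```python
def looks_virtual(disk_ids) -> list:
    """Return the by-id entries that carry common virtual-disk markers.

    Physical drives typically embed vendor, model, and serial
    (e.g. ata-<model>_<serial>), while QEMU/KVM, VirtualBox, and
    VMware disks expose telltale prefixes.
    """
    markers = ("virtio", "QEMU", "VBOX", "VMware")
    return [d for d in disk_ids if any(m in d for m in markers)]

# Hypothetical example entries
ids = ["virtio-abc123", "ata-Samsung_SSD_870_EVO_1TB_serial"]
print(looks_virtual(ids))  # prints ['virtio-abc123']
```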
Write a simple Python script to measure hardware consistency:
import time
import multiprocessing

def stress_test(_):
    # CPU-intensive operation; the unused argument lets pool.map
    # dispatch one task per worker
    start = time.time()
    [x**2 for x in range(10**7)]
    return time.time() - start

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:
        results = pool.map(stress_test, range(8))
    print(f"Execution times: {results}")
    variance = max(results) - min(results)
    print(f"Time variance: {variance:.4f}s")
    # Dedicated servers typically show <0.5s variance
Virtualized environments often share network interfaces:
# Check the NIC driver (virtio_net or vmxnet3 means a paravirtual interface)
ethtool -i eth0 | grep "^driver"   # substitute your interface name for eth0
# Analyze network performance
iperf3 -c speedtest.server -p 5201 -t 20 -P 8
Create a custom kernel module to detect virtualization:
// vdetect.c
#include <linux/module.h>
#include <linux/kernel.h>
static int __init vdetect_init(void) {
    unsigned int eax = 1, ebx, ecx, edx;
    /* CPUID leaf 1 reports the hypervisor-present bit in ECX bit 31;
       cpuid writes all four registers, so all must be declared */
    asm volatile("cpuid"
                 : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
    printk(KERN_INFO "Hypervisor bit: %s\n",
           (ecx & (1U << 31)) ? "set" : "clear");
    return 0;
}
static void __exit vdetect_exit(void) {
printk(KERN_INFO "Module unloaded\n");
}
module_init(vdetect_init);
module_exit(vdetect_exit);
MODULE_LICENSE("GPL");
Compile with a minimal Makefile containing the single line obj-m += vdetect.o, then run:
make -C /lib/modules/$(uname -r)/build M=$PWD modules
sudo insmod vdetect.ko
dmesg | tail -n 2
Consider these professional tools for comprehensive analysis:
- Phoronix Test Suite (hardware benchmarking)
- Memtest86+ (memory subsystem verification)
- HD Tune (storage performance analysis)