Memory Management in ESXi: Understanding Hypervisor Allocation and VM Overcommitment in vSphere 4.1


In ESXi 4.1, memory management operates at the hypervisor level without requiring explicit allocation to the host itself. The VMkernel owns all physical memory and carves it up roughly as follows:

Physical RAM: 48GB (hardware)
VMkernel: ~300-500MB, reserved automatically at boot (ESXi has no Service Console; that belongs to classic ESX)
Per-VM overhead: dynamic, sized by the VMkernel for each powered-on VM
VMs: 4GB each (configurable)

ESXi employs several advanced techniques to enable safe memory overcommitment; each shows up as a counter in esxtop, as sketched after this list:

  • Transparent Page Sharing (TPS) - eliminates duplicate memory pages
  • Balloon Driver - reclaims idle VM memory via vmware-tools
  • Memory Compression - compresses least-used pages
  • Host Swap - last resort using .vswp files
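
All four techniques report live counters on esxtop's memory screen; a quick interactive way to watch them (standard esxtop keystrokes):

# Launch esxtop and switch to the memory view
esxtop            # press 'm' for memory, 'f' to toggle fields
# Summary lines to watch: MEMCTL/MB (balloon), PSHARE/MB (TPS savings),
# ZIP/MB (compression), SWAP/MB (host swap)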

For your 48GB host running 13x4GB VMs (52GB total):

# Note: "esxcli system memory get" is not a valid command; 4.1's esxcli
# has no memory namespace at all (5.0 added "esxcli hardware memory get").
# Read host totals from esxtop's memory screen instead (values illustrative):
esxtop            # press 'm' for the memory screen
PMEM  /MB: 49139 total:   512 vmk,  1024 other, 47603 free

# One-shot batch capture of all memory counters for scripted checks
esxtop -b -n 1 > /tmp/mem-snapshot.csv

Critical metrics to monitor in your scenario (a batch-logging sketch follows the list):

MEMCTL: Balloon driver activity
PSHARE: Page sharing efficiency
SWAP: Host swap usage
ZIP: Compressed memory and resulting savings
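
To trend these counters over time rather than eyeballing them, esxtop's batch mode can log periodic samples (output path arbitrary):

# 10 samples at 5-second intervals, in perfmon-style CSV
esxtop -b -d 5 -n 10 > /tmp/memtrend.csv
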
Recommended actions:

  1. Check reservations on critical VMs: vim-cmd vmsvc/get.config [vmid] | grep -A 3 memoryAllocation (setting one is done in the vSphere Client, or via sched.mem.min in the .vmx)
  2. Monitor active memory (not just granted): watch the %ACTV and TCHD columns on esxtop's memory screen
  3. Right-size VM memory based on actual usage: edit memsize in the powered-off VM's .vmx, then vim-cmd vmsvc/reload [vmid] (a sketch follows)
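
A minimal sketch of step 3, assuming a hypothetical VM with ID 8 and its .vmx under /vmfs/volumes/datastore1/appvm01/ (memsize only changes while the VM is powered off):

# Power off, shrink the VM from 4096MB to 3072MB, reload, power on
vim-cmd vmsvc/power.off 8
sed -i 's/^memsize = .*/memsize = "3072"/' /vmfs/volumes/datastore1/appvm01/appvm01.vmx
vim-cmd vmsvc/reload 8
vim-cmd vmsvc/power.on 8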

Essential commands for memory analysis:

# Check current memory stats (filter the large esxcfg-info dump)
esxcfg-info | grep -i memory

# Verify balloon driver operation (run inside the guest; requires VMware Tools)
vmware-toolbox-cmd stat balloon

# Browse low-level memory nodes (Tech Support Mode; exact paths vary by build)
vsish -e ls /memory/

In VMware ESXi 4.1, memory management operates fundamentally differently from traditional operating systems. The hypervisor itself consumes approximately 300-500MB of RAM for its core operations, reserved automatically during boot without explicit configuration.


# Sample memory statistics via CLI (note: "esxcli hardware memory get"
# only exists from ESXi 5.0 on; on 4.1, read the equivalent totals from
# esxtop's memory screen - values illustrative)
~ # esxtop        # press 'm' for the memory screen
PMEM  /MB: 49139 total:   512 vmk,  1024 other, 47603 free

When you assign 4GB to each VM (13×4GB = 52GB configured against 48GB of hardware), you're promising roughly 8% more memory than physically exists; ESXi's memory overcommitment makes this workable. Key techniques enable it (see the .vswp note after the list):

  • Transparent Page Sharing (TPS): Eliminates duplicate memory pages across VMs
  • Balloon Driver (vmmemctl): vmware-tools component reclaims unused guest memory
  • Memory Compression: Compresses less-active pages (introduced in ESX/ESXi 4.1, so available here)
  • Hypervisor Swap: Last resort; pages go to each VM's .vswp file on VMFS when physical RAM is exhausted
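
Because each .vswp file is sized at configured memory minus reservation, reservations also shrink the swap-file footprint on the datastore. A quick check (datastore path hypothetical):

# Per-VM swap files; a 4GB VM with no reservation gets a 4GB .vswp
~ # ls -lh /vmfs/volumes/datastore1/*/*.vswp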

Use these CLI commands to verify real consumption:


# Batch mode emits perfmon-style CSV; list the available memory counters:
~ # esxtop -b -n 1 | head -1 | tr ',' '\n' | grep -i mem
# Configured memory per VM (NR>1 skips getallvms' header row):
~ # vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}' | while read id; do vim-cmd vmsvc/get.config $id | grep -i memoryMB; done
memoryMB = 4096,
...

For your 48GB host running 13×4GB VMs:

  1. Set memory reservations for critical VMs: add sched.mem.min = "2048" to the VM's .vmx, or use Edit Settings > Resources in the vSphere Client (sketch below)
  2. Review shares and limits: vim-cmd vmsvc/get.config [vmid] | grep -A 3 memoryAllocation
  3. Monitor swap usage: watch the SWCUR/SWTGT columns on esxtop's memory screen
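
A minimal sketch of step 1, assuming a hypothetical VM directory /vmfs/volumes/datastore1/dbvm01 (power the VM off before editing its .vmx):

~ # echo 'sched.mem.min = "2048"' >> /vmfs/volumes/datastore1/dbvm01/dbvm01.vmx
~ # echo 'sched.mem.shares = "high"' >> /vmfs/volumes/datastore1/dbvm01/dbvm01.vmx
~ # vim-cmd vmsvc/getallvms | grep dbvm01    # look up the vmid
~ # vim-cmd vmsvc/reload [vmid]              # re-read the edited .vmx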

Sample memory metrics interpretation:


MEMORY (illustrative values)
--------
PSHARE/MB: 1204    (savings from page sharing)
SWAP/MB:   512     (indicates memory pressure)
ZIP/MB:    256     (compressed memory)
VMKMEM/MB: 384     (hypervisor overhead, in line with the 300-500MB above)

When swap usage exceeds 10% of allocated memory, consider redistributing VM workloads or adding physical RAM; a quick threshold check is sketched below.
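
A rule-of-thumb check with hypothetical numbers (read the real SWCUR and MEMSZ values from esxtop; BusyBox shell arithmetic on the host handles the math):

~ # SWAP_MB=512; MEMSZ_MB=4096                     # hypothetical per-VM sample
~ # [ $((SWAP_MB * 100 / MEMSZ_MB)) -gt 10 ] && echo "swap above 10%: act"
swap above 10%: act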