VMFS5 Datastore Architecture: Optimal LUN Sizing Strategy for VMware Environments


With VMFS5 removing the 2TB volume limitation, VMware administrators face a fundamental architectural decision when provisioning storage on modern arrays. Let's examine both approaches through the lens of real-world operational requirements.

The single large datastore approach (7TB in this case) offers:

  • Simplified storage management with fewer objects to track
  • Better space utilization without artificial partitioning
  • Metadata locking that is far less of a concern than it once was, provided the array supports VAAI ATS (hardware-assisted locking)

Multiple smaller datastores (e.g., 7x1TB) provide:

  • Granular control for performance isolation
  • Reduced impact during storage maintenance
  • Flexibility for different backup/replication policies

For your specific IBM DS3524 configuration with 4x8Gbit FC ports and ESXi hosts using 4Gbit HBAs, consider these metrics:

// Sample storage performance calculation
RAID10_write_IOPS = (number_of_data_disks / 2) * disk_IOPS  // each write lands on both mirror halves
                  = (22 / 2) * 150  // Assuming 150 IOPS per 15k RPM disk; 22 data disks implies 2 hot spares
                  = 1650 IOPS       // Reads can hit all 22 spindles, so the read ceiling is roughly double

Host_queue_slots = (HBA_count * per_LUN_queue_depth_per_HBA)
                 = (2 * 32)  // Typical per-LUN queue depth on a 4Gbit FC HBA
                 = 64

With 60 VMs across 3 hosts, here's a balanced approach:

# Example PowerCLI for datastore provisioning. VMFS5 always uses a unified 1MB block,
# so no -BlockSizeMB is needed; set $datastoreSizeGB = 2048 for the ~2TB layout below.
$datastoreSizeGB = 1024
$numberOfDatastores = [math]::Ceiling(7200 / $datastoreSizeGB)
$vmHost = Get-VMHost | Select-Object -First 1
# $lunCanonicalNames: one naa.* canonical name per LUN presented to the host

1..$numberOfDatastores | ForEach-Object {
    New-Datastore -VMHost $vmHost -Name "DS${_}" -Vmfs `
                  -FileSystemVersion 5 `
                  -Path $lunCanonicalNames[$_ - 1]
}

Based on your workload profile (mostly light I/O VMs):

  • Create 3-4 datastores of ~2TB each
  • Distribute VMs evenly while keeping related systems together (see the sketch after this list)
  • Reserve 10-15% capacity per datastore for snapshots
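
A minimal PowerCLI sketch of that even distribution, assuming the new datastores are named DS1-DS4 (placeholder names) and that keeping related systems together is handled by how the VM list is built. Each Move-VM here is a Storage vMotion, so run it during a quiet window:

# Round-robin existing VMs across the new datastores to even out the count
$datastores = Get-Datastore -Name "DS1","DS2","DS3","DS4"
$vms = Get-VM | Sort-Object UsedSpaceGB -Descending   # place the largest VMs first
$i = 0
foreach ($vm in $vms) {
    $target = $datastores[$i % $datastores.Count]
    Move-VM -VM $vm -Datastore $target -Confirm:$false | Out-Null
    $i++
}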

For storage-aware VM placement:

// Sample vSphere SDK (VI Java) snippet for Storage DRS placement; Storage DRS
// requires vCenter, so the manager comes from the vCenter ServiceInstance
StorageResourceManager storageMgr = serviceInstance.getStorageResourceManager();

StoragePlacementSpec spec = new StoragePlacementSpec();
spec.setType("create");                          // initial placement of a new VM
spec.setVm(vmToPlace.getMOR());
StorageDrsPodSelectionSpec podSelection = new StorageDrsPodSelectionSpec();
podSelection.setStoragePod(storagePod.getMOR());
spec.setPodSelectionSpec(podSelection);

StoragePlacementResult result = storageMgr.recommendDatastores(spec);
if (result.getRecommendations() != null && result.getRecommendations().length > 0) {
    // Apply the top recommendation by key
    storageMgr.applyStorageDrsRecommendation_Task(
        new String[] { result.getRecommendations()[0].getKey() });
}

When architecting storage for VMware environments, the VMFS5 datastore sizing strategy significantly impacts performance, manageability, and operational efficiency. The same trade-off is worth a second look from the queue-depth and I/O-path angle.

The single 7TB datastore approach presents these technical attributes:

  • Unified queue depth management (ESXi default queue depth of 32 per LUN, tunable at the HBA driver; see the sketch after this list)
  • Simplified storage vMotion operations
  • Potential contention during peak I/O periods
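
If the single-datastore route is taken, the per-LUN queue depth can be raised at the HBA driver. A sketch assuming QLogic FC HBAs on ESXi 5.x (Emulex uses the lpfc module and its own lun_queue_depth parameter); a host reboot is required for the module parameter to take effect:

# Raise the per-LUN queue depth from the default 32 to 64 (QLogic example)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=64
# Verify the parameter after the reboot
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth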

For multiple 1TB datastores:

  • Distributed queue depth (32 per LUN × 7 LUNs = 224 total queue slots)
  • Isolated I/O streams for different VM groups
  • Lower per-LUN SCSI reservation contention during metadata operations (fewer VMs per LUN)

Here's how to check current queue depth settings via PowerCLI:

Get-VMHost | Get-ScsiLun -LunType disk |
    Select-Object -Property VMHost, CanonicalName,
        @{N="QueueDepth";E={$_.ExtensionData.QueueDepth}}   # depth reported by the underlying API object

For automating datastore creation from the ESXi shell (esxcli, partedUtil and vmkfstools):

# Each of the 7 LUNs has its own naa.* device; substitute the right one on each pass
esxcli storage core adapter rescan --all
for i in 1 2 3 4 5 6 7; do
  DEV=/vmfs/devices/disks/naa.60050768018axxxxxx    # device backing LUN $i
  END=$(partedUtil getUsableSectors ${DEV} | awk '{print $2}')
  partedUtil mklabel ${DEV} gpt
  partedUtil setptbl ${DEV} gpt "1 2048 ${END} AA31E02A400F11DB9590000C2911D1B8 0"
  vmkfstools -C vmfs5 -b 1m -S DS_${i} ${DEV}:1
done

For your specific environment with 60 VMs and 3 hosts:

  • Workload isolation becomes less critical with non-I/O intensive VMs
  • Storage DRS becomes more effective with larger datastores (see the sketch after this list)
  • Snapshot consolidation operations benefit from consolidated storage
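
A minimal sketch of that Storage DRS setup in PowerCLI (datacenter and datastore names are placeholders; datastore clusters and Storage DRS need vCenter 5.0+ with Enterprise Plus licensing):

# Group the datastores into a datastore cluster and enable Storage DRS
$dc  = Get-Datacenter -Name "Datacenter"                        # placeholder name
$pod = New-DatastoreCluster -Name "DS-Cluster-01" -Location $dc
Get-Datastore -Name "DS1","DS2","DS3","DS4" | Move-Datastore -Destination $pod
Set-DatastoreCluster -DatastoreCluster $pod -SdrsAutomationLevel FullyAutomated

Once the pod exists, the recommendDatastores() call shown earlier can target it for initial placement.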

Given your array's configuration:

  • Controller cache: 2GB per controller (consider 70/30 read/write split)
  • FC connectivity: 4×8Gbps ports per controller (far more fabric bandwidth than 60 light-I/O VMs will drive)
  • Disk layout: 24×600GB in RAID10, optimal for VMware workloads (quick capacity check below)
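
A quick capacity check on that layout (simple arithmetic, ignoring hot spares and VMFS overhead):

// RAID10 usable capacity for 24 x 600GB
usable_capacity = (24 / 2) * 600GB = 7200GB  // mirroring halves the raw 14.4TB
// Matches the ~7.2TB figure used in the provisioning examples above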

Recommended multipathing policy for this configuration:

esxcli storage nmp device set --device naa.60050768018axxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.60050768018axxxxxx --type iops --iops 1000
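
To confirm the policy and round-robin settings took effect on a device:

esxcli storage nmp device list --device naa.60050768018axxxxxx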

Sample I/O pattern from similar web/app server workloads:

Device:     rrqm/s   wrqm/s      r/s      w/s    rMB/s    wMB/s avgrq-sz avgqu-sz    await    svctm    %util
naa.xxxx      0.00     3.40    45.20    12.50     0.71     0.06    24.33     0.48     8.36     1.23     7.10
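
Reading those columns directly (simple arithmetic on the sample above):

// Per-device load in the sample
IOPS       = r/s + w/s     = 45.20 + 12.50 ≈ 58      // at only ~7% device utilization
Throughput = rMB/s + wMB/s = 0.71 + 0.06   ≈ 0.8 MB/s
// A small fraction of the RAID10 estimates worked out earlier, which is why
// the recommendations below lean toward consolidation rather than isolation.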

This indicates your workload would benefit more from:

  • Consolidated datastores (reduced management overhead)
  • The unified 1MB VMFS5 block size (no VMFS3-style block-size tuning needed for this I/O profile)
  • Minimal need for I/O isolation