Direct SAN-to-HBA Fibre Channel Connections: Bypassing FC Switches for VMware HA and vMotion


When implementing enterprise storage with Fibre Channel (FC), the traditional architecture involves FC switches to create a Storage Area Network (SAN) fabric. However, in smaller deployments like your 3-node VMware cluster with an HP MSA 2040, direct attachment might be technically feasible.

The HP MSA 2040's 8 FC ports support point-to-point connections. Each ESXi host with dual-port HBAs can establish two independent paths to the SAN:

# Typical HBA configuration in ESXi (example)
esxcli storage core adapter list
# Should show your QLogic/Emulex HBA ports
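
To collect the WWPNs you'll later register on the MSA, and to confirm each link is up, the FC namespace of esxcli is useful; a minimal check:

# Record each HBA's Port Name (WWPN) and link state; these WWPNs are what
# you register as initiators when mapping volumes on the MSA
esxcli storage san fc list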

vMotion and HA will function with direct FC connections, but with important considerations:

  • No fabric services, so path failover is limited to the direct links each host's own HBA ports provide (unlike a switched fabric)
  • Each LUN must be manually mapped to all hosts so every host sees identical storage (see the visibility check below)
  • Volume-to-host mapping (the MSA's equivalent of zoning) must be configured on the array for every host's WWPNs
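
A quick way to confirm identical LUN visibility is to compare device identifiers on every host; a minimal sketch (the naa ID below is just an example):

# Run on each ESXi host; the MSA-backed naa.* identifiers should match on
# all three hosts (local disks will differ) before relying on vMotion/HA
esxcli storage core device list | grep -E "^naa\."
# Inspect the paths one host has to a single device
esxcli storage core path list -d naa.600508b4000156d70000000000000000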

For a 3-host cluster, you'd configure host mapping on the MSA along these lines. The MSA has no fabric zoning; access control is handled through initiator entries and volume-to-host mappings, and the exact CLI syntax depends on firmware (see the MSA 2040 CLI Reference Guide), so treat this as an outline:

# MSA 2040 host-mapping outline for 3 hosts
# 1. Register both HBA WWPNs of esxi01 as initiators on the array,
#    e.g. 21:00:00:24:ff:31:6e:50 (port 0) and 21:00:00:24:ff:31:6e:51 (port 1)
# 2. Map each volume to those initiators, using the same LUN number on every host
# Repeat for the other two hosts

Without FC switches:

  • No fabric services or fabric-wide multipathing; each path is a fixed point-to-point link
  • Each server has dedicated bandwidth on its array ports but no alternative routes through a fabric
  • Maintenance becomes more complex during array/controller updates (check path health first, as shown below)
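
Before taking one MSA controller down (for example, during a firmware update), it is worth checking that each host still has a healthy path on the surviving controller; a minimal sketch using the example naa ID from above:

# Expect at least one active path per controller before starting maintenance
esxcli storage core path list -d naa.600508b4000156d70000000000000000 | grep -E "Runtime Name|State:"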

Before committing to direct connection, consider:

  1. Used enterprise FC switches (Brocade 300/6505)
  2. iSCSI with 10Gbps networking
  3. SAS-attached storage for small clusters

If proceeding with direct FC:

# ESXi host storage configuration checklist
# 1. Verify HBA driver/firmware versions against the VMware/HPE compatibility lists
# 2. Set the path selection policy (PSP) for each MSA device (example naa ID):
esxcli storage nmp device set --device naa.600508b4000156d70000000000000000 --psp VMW_PSP_FIXED
# 3. Add a SATP claim rule so the device is handled by the ALUA plugin:
esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --device naa.600508b4000156d70000000000000000
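
To confirm the PSP and SATP settings took effect, list the device's NMP configuration (same example naa ID as above):

# Should report the expected Storage Array Type and Path Selection Policy
esxcli storage nmp device list -d naa.600508b4000156d70000000000000000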

While direct FC works for small clusters, scalability limitations emerge when:

  • Adding more than 4-5 hosts
  • Implementing stretched clusters
  • Needing advanced SAN services (like NPIV)

Connecting Fibre Channel HBAs directly to an HP MSA 2040 SAN without switches is technically possible through point-to-point (P2P) FC topology. The MSA 2040's 8 FC host ports can be set to switched-fabric or point-to-point connection mode; for direct attachment to host HBAs, point-to-point is the mode to use. When directly connected:


# Example ESXi host SAN configuration (direct connection)
esxcli storage core adapter list                 # identify the FC HBAs (vmhbaX)
esxcli storage core device list | grep -i "MSA"  # MSA LUNs visible? (matches the Model field)
# Claim MSA 2040 LUNs with the ALUA SATP and Round Robin PSP
# (vendor/model strings should match what the device list above reports)
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "HP" -M "MSA 2040 SAN" -c tpgs_on -P VMW_PSP_RR
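
After adding the rule, rescan the HBAs so newly mapped LUNs are picked up, and confirm the rule is registered:

# Rescan all storage adapters and list SATP claim rules mentioning the MSA
esxcli storage core adapter rescan --all
esxcli storage nmp satp rule list | grep -i "MSA"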

For a 3-node ESXi cluster with direct FC connections:

  • Each host needs dual-port FC HBAs (QLogic or Emulex)
  • MSA 2040 volumes must be explicitly mapped to every host's initiators (the array does host mapping; there is no zoning without switches; see the rescan/claim check below)
  • ALUA (Asymmetric Logical Unit Access) must be enabled
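
Once volumes are mapped and the hosts rescanned, each MSA LUN should report the ALUA plugin in its NMP configuration; a quick check:

# Every MSA device should show VMW_SATP_ALUA as its Storage Array Type
esxcli storage nmp device list | grep -E "Device Display Name|Storage Array Type:"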

Direct FC connections impact cluster operations differently:


# Sample PowerCLI to verify HA configuration
Get-Cluster "YourCluster" | Select-Object -Property HAEnabled,HAAdmissionControlEnabled
Get-VMHost | Get-VMHostHba -Type FibreChannel | Format-Table Device,Status,Model

Key observations from production deployments:

  • vMotion works but requires identical storage visibility across hosts (compare the datastore listing below)
  • HA failover may experience longer detection times (adjust das.failureDetectionTime)
  • Storage DRS becomes less effective without shared switch visibility
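
A simple cross-host check of that visibility is to compare the mounted VMFS datastores; the shared MSA-backed volumes should appear with the same names and UUIDs everywhere:

# Run on each host and compare the output
esxcli storage filesystem list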

For smaller budgets, consider this mixed approach:


# Using one FC switch for critical paths + direct connections
# Primary path: FC switch (for HA heartbeat)
# Secondary path: Direct SAN connection (for data)
esxcli storage nmp device set --device naa.600508b4000c4d3d0000000000000000 --psp VMW_PSP_RR
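
To confirm both the switched and the direct path are claimed and active for that device, list its NMP paths (same example naa ID as above):

# Shows per-path group state (active/standby) for the device's paths
esxcli storage nmp path list -d naa.600508b4000c4d3d0000000000000000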