When implementing ZFS volumes as iSCSI targets (particularly in FreeNAS/TrueNAS environments), administrators often wonder whether advanced ZFS features like:
- Inline block-level deduplication
- Checksum verification and auto-healing
- Compression
- Snapshot capabilities
remain functional when the storage is presented via iSCSI to VMware ESXi hosts.
In FreeNAS/TrueNAS, when you create a zvol (ZFS volume) for iSCSI:
zfs create -V 1T -o volblocksize=8K tank/vmware_vol
zfs set dedup=on tank/vmware_vol
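Before exporting the LUN, it's worth a quick sanity check that the properties took effect (property list here is just a suggested subset):
# Confirm the zvol and its properties before presenting it over iSCSI:
zfs get volsize,volblocksize,dedup,compression tank/vmware_vol
zfs list -t volume -o name,volsize,used tank/vmware_vol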
The key technical aspects:
- Block-level operations preserve ZFS features: Unlike file-based iSCSI (where you'd use a file image), zvols maintain direct block mapping
- Deduplication works at 8K granularity (or whatever volblocksize you configure)
- Checksums still validate all blocks during read operations
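You don't have to take the checksum path on faith: a scrub pushes every block in the pool through the same verification that iSCSI reads get (pool name from the example above):
# Verify every block's checksum pool-wide; mismatches are repaired from redundancy:
zpool scrub tank
# Review progress and any repaired/corrupted counts:
zpool status tank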
For optimal ESXi performance:
# Recommended zvol settings for VMware:
zfs set primarycache=metadata tank/vmware_vol
zfs set sync=always tank/vmware_vol
zfs set redundant_metadata=all tank/vmware_vol   # valid values are all (default) and most, not on/off
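One caveat worth flagging: sync=always forces every write through the ZFS intent log, so without a fast dedicated SLOG it will crater write latency. A sketch of adding one (the nvd0/nvd1 device names are illustrative):
# Mirrored SLOG so sync=always latency lands on fast devices, not the data vdevs:
zpool add tank log mirror nvd0 nvd1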
Important notes:
- Enable the spacemap_histogram pool feature for better thin-provisioning visibility (see the command after this list)
- Monitor ARC hit rates when using deduplication
- ESXi 7.0+ supports larger block sizes (align with your volblocksize)
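Pool feature flags are enabled with zpool set; note this is a one-way switch, as ZFS feature flags cannot be disabled once active (pool name from the examples above):
# Enable the space map histogram feature flag:
zpool set feature@spacemap_histogram=enabled tank
# Confirm its state:
zpool get feature@spacemap_histogram tank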
To validate ZFS features are active:
# Check space savings (the dedup ratio is a pool-level statistic, not a zvol property):
zfs get used,logicalused,compressratio tank/vmware_vol
zpool get dedupratio tank
# Inject checksum errors on one disk to exercise detection and self-healing (da0 is illustrative):
zinject -d da0 -e checksum tank
zinject -c all   # clear injected faults afterwards
Use this iSCSI target configuration snippet:
# /usr/local/etc/istgt/istgt.conf fragment
[LogicalUnit1]
TargetName iqn.2020-06.com.example:vmware
Mapping PortalGroup1 InitiatorGroup1
AuthMethod Auto
UseDigest Auto
UnitType Disk
LUN0 Storage /dev/zvol/tank/vmware_vol Auto
QueueDepth 32
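Note that FreeNAS 9.3 and later (and all of TrueNAS) replaced istgt with the kernel-mode CTL target. If you are on ctld instead, a roughly equivalent /etc/ctl.conf fragment looks like this (the portal-group and auth-group names are illustrative):
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0:3260
}

target iqn.2020-06.com.example:vmware {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/vmware_vol
    }
}
On current FreeNAS/TrueNAS builds the web UI generates this configuration for you; hand-editing is only needed on plain FreeBSD.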
In our lab environment (32 GB RAM, 8x SSD RAIDZ2 pool):
| Feature | 4K Random Read | 4K Random Write |
|---|---|---|
| Dedupe OFF | 32,000 IOPS | 28,500 IOPS |
| Dedupe ON | 29,500 IOPS | 25,000 IOPS |
| Compression=lz4 | 31,200 IOPS | 27,800 IOPS |
The ~10% performance overhead from deduplication matches expected behavior for this workload.
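The write-up doesn't say which tool produced these numbers; figures in this range are reproducible with fio run inside a guest VM on the iSCSI-backed datastore (a sketch; the file path and parameters are illustrative):
# 4K random read against a preallocated test file on the datastore-backed disk:
fio --name=randread --filename=/data/testfile --size=4g \
    --bs=4k --rw=randread --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
Swap --rw=randread for randwrite to get the write-side numbers.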
When configuring a ZFS volume as an iSCSI target in FreeNAS/TrueNAS, the storage stack operates through several abstraction layers:
VMware ESXi (iSCSI initiator) → network → FreeNAS iSCSI target → zvol → ZFS pool → physical disks
The critical question revolves around whether ZFS features like:
- Block-level deduplication
- Checksum verification
- Automatic repair
- Compression
remain effective when accessed through the iSCSI protocol.
FreeNAS offers two iSCSI target types:
# zvol-based (recommended for VMware)
zfs create -V 1T -o volblocksize=8k tank/vmware_lun
# File-based
truncate -s 1T /mnt/tank/iscsi_file.raw
For VMware ESXi deployments, zvols (ZFS volumes) are preferred because:
- They maintain 4K/8K block alignment automatically
- They inherit ZFS's copy-on-write transactional (atomic) write semantics
- They support TRIM/UNMAP for space reclamation (verified from the ESXi side below)
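You can confirm from the ESXi side that UNMAP (the VAAI Delete primitive) was actually negotiated for the LUN; the naa identifier below is illustrative:
# "Delete Status: supported" means UNMAP from the datastore reaches the zvol:
esxcli storage core device vaai status get -d naa.6589cfc000000abc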
| ZFS Feature | zvol iSCSI | File-based iSCSI |
|---|---|---|
| Deduplication | ✓ (volblocksize granularity) | ✓ (recordsize granularity) |
| Compression | ✓ (lz4 recommended) | ✓ |
| Checksums | ✓ (per volblocksize block) | ✓ (per record) |
| Auto-repair | ✓ | ✓ |
| Snapshots | ✓ (crash-consistent) | ✓ |
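Zvol snapshots are crash-consistent from the guests' point of view (equivalent to a power pull); quiesce the VMs first if you need application consistency. A minimal example, snapshot and clone names illustrative:
# Point-in-time snapshot of the whole datastore LUN:
zfs snapshot tank/esxi_datastore@pre-upgrade
# Clone it to present a writable copy as a second LUN for testing:
zfs clone tank/esxi_datastore@pre-upgrade tank/esxi_datastore_test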
Sample zvol creation with ESXi-optimized parameters:
zfs create -V 2T -o volblocksize=8k \
-o primarycache=metadata \
-o sync=standard \
-o compression=lz4 \
-o dedup=on \
tank/esxi_datastore
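If you also want thin provisioning on the ZFS side, the same zvol can be created sparse with -s, which skips the upfront space reservation (watch pool free space carefully if you do this):
# Sparse variant of the zvol above (would replace that command, not run in addition):
zfs create -s -V 2T -o volblocksize=8k -o compression=lz4 tank/esxi_datastore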
Key recommendations:
- Use volblocksize=8k to match VMware's default block size
- Set primarycache=metadata to avoid double-caching
- Keep sync=standard (the default) so guest sync-write semantics are honored
- Consider logbias=throughput for all-flash pools (see the example after this list)
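Properties can be changed after creation, so the logbias suggestion is easy to apply and review in one place (dataset name from the creation example above):
# Apply the all-flash tuning and confirm the full set of relevant properties:
zfs set logbias=throughput tank/esxi_datastore
zfs get volblocksize,primarycache,sync,compression,dedup,logbias tank/esxi_datastore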
Essential ZFS commands for iSCSI performance analysis:
# Check space savings (the dedup ratio is reported at the pool level):
zfs get used,logicalused,compressratio tank/esxi_datastore
zpool get dedupratio tank
# Monitor ARC cache efficiency
arcstat.py 1
# Verify checksum errors
zpool status -v tank
For VMware-side validation:
esxcli storage core device capacity list   # reports logical/physical block sizes per device
esxcli storage nmp device list | grep -i "iSCSI"
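To close the loop on space reclamation: on VMFS5, or where automatic unmap is disabled, you can trigger reclamation manually and watch the zvol shrink on the FreeNAS side (the datastore label is illustrative; VMFS6 issues UNMAP automatically):
# Issue UNMAP for dead space on the datastore:
esxcli storage vmfs unmap -l esxi_datastore
# On the FreeNAS side, watch referenced space drop:
zfs get used,referenced tank/esxi_datastore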