Optimizing ZFS on FreeBSD for NAS: Bootability, Drive Expansion, and Heterogeneous Storage Management


While early ZFS-era releases (FreeBSD 7.x) required a separate UFS boot partition, modern releases boot directly from a ZFS root. Here's a current-generation approach:

# Partitioning example for a ZFS root (legacy/BIOS boot)
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512K -l boot0 ada0   # stage-2 bootcode partition
gpart add -t freebsd-zfs -l disk0 ada0            # rest of the disk for ZFS
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
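
For UEFI firmware, an EFI system partition (ESP) takes the place of the freebsd-boot/gptzfsboot pair. A sketch, assuming a fresh disk where the ESP ends up as partition index 1:

# UEFI variant: ESP holding loader.efi instead of pmbr/gptzfsboot
gpart add -t efi -s 260M -l efi0 ada0
newfs_msdos -F 32 /dev/ada0p1
mount_msdosfs /dev/ada0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi   # default fallback boot path
umount /mnt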

ZFS's pool expansion capabilities shine in NAS environments. For adding a single drive to an existing mirror:

# Attaching a new 4TB drive as a mirror of an existing device
zpool attach storage /dev/ada1 /dev/ada2
# Replacing a failed drive
zpool replace storage /dev/ada1 /dev/ada2
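
Mirrors can also be grown in place by replacing each member with a larger drive in turn. A sketch, assuming the new, larger drives show up as ada3 and ada4:

# Replace members one at a time; wait for each resilver to complete
zpool set autoexpand=on storage
zpool replace storage /dev/ada1 /dev/ada3
zpool replace storage /dev/ada2 /dev/ada4
zpool online -e storage /dev/ada4   # force expansion if autoexpand was off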

ZFS handles mixed drive sizes through its allocation algorithms, but redundancy is per vdev: in a pool of 2x4TB (mirrored) plus a lone 6TB drive, data is striped across both vdevs, so losing the unprotected 6TB drive loses the pool. zpool create refuses such mismatched redundancy unless forced:

# Mixed-redundancy configuration (the single 6TB drive has no redundancy)
zpool create -f -o ashift=12 storage \
  mirror /dev/ada0 /dev/ada1 \
  /dev/ada2
# Verify allocation
zpool list -v storage

Most operations don't require reboots:

  • Hot-add works for JBOD enclosures with proper HBA
  • Drive replacements can use zpool offline/online (sketched after this list)
  • Cache/L2ARC devices can be added/removed live
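
A sketch of that replacement flow, assuming a hot-swap capable HBA and that the new drive lands on the same device node:

# Take the failing drive out of service, swap it, resilver in place
zpool offline storage /dev/ada1
# ...physically swap the drive...
zpool replace storage /dev/ada1   # same device node, so no new name needed
zpool status storage              # watch the resilver; no reboot required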

When building versus buying, consider these ZFS-specific advantages:

# Enterprise features available in FreeBSD
zfs set compression=lz4 storage
zfs set atime=off storage
zfs set dedup=on storage # Requires careful RAM planning
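
To see what these settings actually buy (and what dedup costs), check the ratios and the dedup table:

# Compression effectiveness per dataset
zfs get compressratio storage
# Dedup table (DDT) statistics; the DDT must fit in RAM to stay fast
zpool status -D storage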

Critical parameters for NAS workloads:

# /boot/loader.conf optimizations
vfs.zfs.arc_max="8G"             # cap the ARC; sensible on 16GB+ systems
vfs.zfs.vdev.cache.size="512M"   # legacy tunable; the vdev cache is disabled by default in OpenZFS
vfs.zfs.prefetch_disable=0       # keep prefetch enabled for sequential NAS reads
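
Whether the cap took effect is visible at runtime through the arcstats sysctls:

# Configured ceiling vs. current ARC footprint
sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size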

Essential cron jobs for production systems:

# Weekly scrub (Sunday 03:00)
0 3 * * 0 /sbin/zpool scrub storage
# Daily short SMART self-test; reserve long tests for a weekly slot
0 2 * * * /usr/local/sbin/smartctl -t short /dev/ada0
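
FreeBSD's periodic(8) can replace hand-rolled scrub cron jobs and folds the results into the daily status mail; these knobs go in /etc/periodic.conf:

# /etc/periodic.conf
daily_status_zfs_enable="YES"           # include zpool status in the daily report
daily_scrub_zfs_enable="YES"            # scrub pools automatically
daily_scrub_zfs_default_threshold="7"   # days between scrubs (default is 35)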

Having deployed multiple ZFS-based storage solutions on FreeBSD, I'll share hands-on insights beyond the official documentation. The current FreeBSD 13.x series offers significantly improved ZFS support compared to earlier versions.

Since FreeBSD 9.0, booting from ZFS has been fully supported. Here's a sample root-on-ZFS layout (the zroot/ROOT/<name> structure keeps boot environments working):

zpool create -f -o altroot=/mnt -O compress=lz4 -O atime=off \
    -m none zroot /dev/ada0
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
zfs create -o mountpoint=/usr zroot/usr
zfs create -o mountpoint=/var zroot/var
zpool set bootfs=zroot/ROOT/default zroot   # the loader needs bootfs to find the root

Key advantages include snapshot-based boot environments and seamless rollbacks. However, ensure your bootloader supports ZFS (GPT + UEFI recommended).
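
Those boot environments are driven by bectl(8), in base since FreeBSD 12, and assume the zroot/ROOT/<name> layout shown above:

# Snapshot the running system before an upgrade
bectl create pre-upgrade
bectl list
# If the upgrade misbehaves, activate the old BE and reboot into it
bectl activate pre-upgrade
shutdown -r now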

ZFS offers flexible drive management:

# Adding a single disk as a mirror partner for an existing one
zpool attach tank /dev/ada1 /dev/ada2

# Creating a new vdev (RAIDZ-style expansion)
zpool add tank raidz2 /dev/ada3 /dev/ada4 /dev/ada5
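
Since top-level vdevs cannot be removed from a pool that contains raidz, a dry run before committing is cheap insurance:

# -n prints the resulting layout without touching the pool
zpool add -n tank raidz2 /dev/ada3 /dev/ada4 /dev/ada5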

For mixed drive sizes, consider:

  • Allocation classes (special vdevs) to steer metadata and small blocks onto SSDs
  • Separate vdevs built from same-sized drives, so no capacity sits unused
  • autotrim=on so SSD vdevs reclaim freed space promptly (sketched below)
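
A sketch of the first and third points; the SSD device names ada6/ada7 are placeholders:

# A special vdev is pool-critical, so always mirror it
zpool add tank special mirror /dev/ada6 /dev/ada7
zfs set special_small_blocks=32K tank   # records <=32K also land on the SSDs
zpool set autotrim=on tank              # reclaim freed blocks on SSD vdevs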

Essential sysctl.conf parameters:

vfs.zfs.arc_max=4294967296        # bytes; sysctl(8) does not parse the "4G" shorthand
vfs.zfs.vdev.min_auto_ashift=12   # force 4K alignment on newly added vdevs
vfs.zfs.prefetch_disable=1        # disable only for mostly random I/O; leave at 0 for streaming
vfs.zfs.txg.timeout=5             # seconds between transaction group commits
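
These are live sysctls on FreeBSD 13, so they can be tested before being persisted:

# Apply immediately; move to /etc/sysctl.conf once proven
sysctl vfs.zfs.txg.timeout=5
sysctl vfs.zfs.prefetch_disable=1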

For SMB/NFS sharing, add these to rc.conf:

zfs_enable="YES"
samba_enable="YES"
nfs_server_enable="YES"
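
NFS exports can also be managed per dataset via the sharenfs property, which on FreeBSD takes exports(5) options; the dataset name and subnet below are placeholders:

# Export a dataset to the local subnet; mountd picks this up automatically
zfs set sharenfs="-network 192.168.1.0 -mask 255.255.255.0" tank/media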

Create a cron job for regular scrubs:

0 3 * * 0 /sbin/zpool scrub tank

For proactive monitoring, FreeBSD ships zfsd(8) (enable with zfsd_enable="YES" in rc.conf) for automatic fault handling and hot-spare activation; for email alerts on pool errors or degraded states, pair it with ZED (ZFS Event Daemon) or a small script like the one below.
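
A minimal alert sketch that keys off the fixed output of zpool status -x; the install path and schedule are up to you:

#!/bin/sh
# Mail root whenever any pool is not healthy (run hourly from cron)
out=$(zpool status -x)
if [ "$out" != "all pools are healthy" ]; then
    echo "$out" | mail -s "ZFS alert on $(hostname)" root
fi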

While TrueNAS Core (FreeBSD-based) provides a polished interface, the raw FreeBSD approach offers:

  • Finer control over ZFS parameters
  • Easier integration with jails/bhyve
  • More flexible update cycles

The choice depends on whether you prioritize convenience or customization.