When deploying web applications across multiple servers (Node1-Node3 in our case), we need shared storage that maintains data consistency. While iSCSI provides block-level storage access, it lacks built-in locking mechanisms at the protocol level.
Mounting the same iSCSI LUN on multiple servers without proper filesystem support leads to:
- Metadata corruption from concurrent writes
- Data corruption when processes overwrite each other
- No guarantee of atomic operations
For Ubuntu 16.04/18.04 LTS, these solutions work well:
Option 1: OCFS2 (Oracle Cluster File System)
The simplest option for small clusters:
# On all nodes:
sudo apt-get install ocfs2-tools
sudo mkfs.ocfs2 -N 3 -L "webcluster" /dev/sdX
sudo mkdir /shared
echo "/dev/sdX /shared ocfs2 _netdev 0 0" | sudo tee -a /etc/fstab
Option 2: GFS2 with Pacemaker/Corosync
For larger deployments requiring fencing:
# Install prerequisites
sudo apt-get install pacemaker corosync fence-agents gfs2-utils
# Create the clustered LV (example; assumes a volume group vg_shared that is visible to every node, e.g. via clvmd/lvmlockd)
sudo lvcreate -L 100G -n webdata vg_shared
sudo mkfs.gfs2 -p lock_dlm -t webcluster:webdata -j 3 /dev/vg_shared/webdata
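Before GFS2 can be mounted anywhere, the DLM must be running under cluster control. A rough sketch with pcs follows (resource names and the /shared mount point are assumptions; Ubuntu also ships crmsh, where the equivalent clone/primitive definitions apply):
# Clone the DLM control daemon across all nodes
sudo pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# Mount the GFS2 filesystem on every node, but only where DLM is running
sudo pcs resource create webdata_fs ocf:heartbeat:Filesystem \
    device=/dev/vg_shared/webdata directory=/shared fstype=gfs2 \
    op monitor interval=10s on-fail=fence clone interleave=true
sudo pcs constraint order start dlm-clone then webdata_fs-clone
sudo pcs constraint colocation add webdata_fs-clone with dlm-clone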
Option 3: SCSI-3 Persistent Reservations
For applications that can implement their own locking on top of the raw block device:
# Check reservation support
sudo sg_persist --in --report-capabilities /dev/sdX
# Register initiator
sudo sg_persist --out --register --param-sark=0x1234 /dev/sdX
# Reserve the device
sudo sg_persist --out --reserve --param-rk=0x1234 --prout-type=3 /dev/sdX
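To inspect or hand off the reservation later, the matching read and release calls use the same key (0x1234 here is the example value from above):
# List registered keys and show who currently holds the reservation
sudo sg_persist --in --read-keys /dev/sdX
sudo sg_persist --in --read-reservation /dev/sdX
# Release the reservation (same key and type it was taken with)
sudo sg_persist --out --release --param-rk=0x1234 --prout-type=3 /dev/sdX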
Cluster filesystems add overhead:
- OCFS2: 5-15% latency increase for metadata operations
- GFS2: Higher memory usage for lock management
- For NFS-like semantics, consider CephFS or GlusterFS instead
With OCFS2 configured, modify php.ini:
session.save_handler = files
session.save_path = "/shared/php_sessions"
Ensure directory permissions allow web server user access across all nodes.
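For example, on stock Ubuntu where Apache/PHP-FPM run as www-data (run once from any node, since the directory lives on the shared filesystem):
sudo mkdir -p /shared/php_sessions
sudo chown www-data:www-data /shared/php_sessions
sudo chmod 0770 /shared/php_sessions   # only the web server user/group can read or write sessions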
When configuring multiple Linux servers to access the same iSCSI LUN simultaneously, you're essentially dealing with a shared block device scenario. The fundamental issue is that standard filesystems like ext4 or XFS aren't designed for concurrent access from multiple hosts.
What happens when Server A writes to block 1000 while Server B simultaneously writes to the same block? Without proper coordination, you get:
- Data corruption from overlapping writes
- Metadata inconsistencies
- Complete filesystem breakdown
For Ubuntu 16.04/18.04 environments, these are your viable options:
# Install GFS2 prerequisites
sudo apt-get install -y gfs2-utils fence-agents
# For OCFS2:
sudo apt-get install -y ocfs2-tools
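Whichever filesystem you pick, first confirm that all three nodes see the same LUN; /dev/sdX letters can differ between hosts, so compare stable identifiers instead:
# Show active iSCSI sessions and the block devices they expose
sudo iscsiadm -m session -P 3
# The WWN/serial must be identical on every node
ls -l /dev/disk/by-id/ | grep -i wwn
lsblk -o NAME,SIZE,SERIAL,MODEL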
Here's a complete setup example using GFS2 with a 3-node cluster:
# On all nodes:
sudo apt-get install -y corosync pacemaker gfs2-utils
# Configure corosync on all nodes by writing /etc/corosync/corosync.conf
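A minimal corosync.conf for a three-node cluster over unicast might look like this sketch (cluster name, node IPs, and transport are assumptions to adapt):
cat <<'EOF' | sudo tee /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: webcluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: 192.168.1.101
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.102
        nodeid: 2
    }
    node {
        ring0_addr: 192.168.1.103
        nodeid: 3
    }
}
quorum {
    provider: corosync_votequorum
}
logging {
    to_syslog: yes
}
EOF
sudo systemctl restart corosync pacemaker
With three votes, votequorum keeps the cluster quorate through a single node failure.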
For Oracle Cluster Filesystem 2 (OCFS2), the setup differs:
# Initialize the config on the first node, then copy /etc/ocfs2/cluster.conf to the other nodes
# (node names node1-node3 must match each host's hostname)
sudo o2cb add-cluster webcluster
sudo o2cb add-node --ip 192.168.1.101 --number 1 webcluster node1
sudo o2cb add-node --ip 192.168.1.102 --number 2 webcluster node2
sudo o2cb add-node --ip 192.168.1.103 --number 3 webcluster node3
sudo o2cb register-cluster webcluster
# Create filesystem
sudo mkfs.ocfs2 -L "shared_web" -N 3 /dev/shared_vg/shared_lv
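After creating the filesystem, the o2cb stack has to be online before any node can mount it. Roughly (the service names are the ones shipped with Ubuntu's ocfs2-tools; treat this as a sketch):
# On every node: enable the cluster stack at boot, then start it
sudo dpkg-reconfigure ocfs2-tools       # answer "yes" to load O2CB on boot
sudo systemctl enable --now o2cb ocfs2
# Mount the shared filesystem on each node
sudo mkdir -p /shared
sudo mount -t ocfs2 /dev/shared_vg/shared_lv /shared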
- GFS2 shows better performance for small files with heavy metadata operations
- OCFS2 generally performs better with large sequential writes
- Both require proper fencing configuration to prevent split-brain scenarios (a fencing sketch follows below)
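To illustrate that last point, a STONITH device for IPMI-capable servers could be set up under Pacemaker roughly as follows (the agent parameters, BMC address, and credentials are placeholders; use whatever fencing hardware you actually have, one device per node):
# Example fence device for node1 using its IPMI/BMC interface
sudo pcs stonith create fence_node1 fence_ipmilan \
    ipaddr=192.168.2.101 login=admin passwd=secret lanplus=1 \
    pcmk_host_list=node1
sudo pcs property set stonith-enabled=true
Without working fencing, DLM and GFS2 will block I/O after a node failure rather than risk corruption, so this is not optional in production.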