In GlusterFS, a brick represents the fundamental storage unit - typically an exported directory on a server. A node (or peer) refers to an entire physical or virtual server that hosts one or more bricks. This distinction is crucial when designing storage architectures.
# Example brick paths on a node:
/srv/gluster/brick1
/srv/gluster/brick2
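Bricks are usually directories on a filesystem dedicated to GlusterFS. A rough preparation sketch, assuming an XFS-formatted data disk at /dev/sdb1 (the device name and mount point here are placeholders):

# Format and mount the data disk, then create one directory per brick
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /srv/gluster
mount /dev/sdb1 /srv/gluster
mkdir -p /srv/gluster/brick1 /srv/gluster/brick2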
The confusion usually arises from how the two volume types use bricks and nodes:
- Striped Volume: Each file is striped across all bricks in the volume (regardless of which nodes host them)
- Distributed Striped Volume: Files are first distributed across stripe sets, then striped across the bricks within each set (in the example below, each node forms one stripe set)
Creating a basic striped volume (one stripe set of 4 bricks across 2 nodes):
gluster volume create stripe-vol stripe 4 \
  node1:/srv/gluster/brick1 \
  node1:/srv/gluster/brick2 \
  node2:/srv/gluster/brick3 \
  node2:/srv/gluster/brick4
Creating a distributed striped volume (8 bricks across 4 nodes; four 2-brick stripe sets, one per node):
gluster volume create dist-stripe-vol stripe 2 \
  node1:/srv/gluster/brick1 node1:/srv/gluster/brick2 \
  node2:/srv/gluster/brick3 node2:/srv/gluster/brick4 \
  node3:/srv/gluster/brick5 node3:/srv/gluster/brick6 \
  node4:/srv/gluster/brick7 node4:/srv/gluster/brick8
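Once created, a volume must be started before clients can mount it. A minimal sketch, assuming the GlusterFS native (FUSE) client is installed on the client machine and /mnt/vms is an arbitrary mount point:

gluster volume start dist-stripe-vol
# On a client: any node in the pool can be named; it only bootstraps the mount
mount -t glusterfs node1:/dist-stripe-vol /mnt/vms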
For NFS storage of VM disks (a sample client mount is sketched after this list):
- Distributed striped volumes provide better performance for concurrent access
- Minimum 4 nodes recommended for production HA environments
- Each brick should be on separate physical storage for redundancy
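As a concrete illustration, and assuming the volume's built-in NFS export is enabled (nfs.disable off, covered further down), an NFSv3 client mount of the distributed striped volume would look roughly like this (the mount point is arbitrary):

# Gluster's built-in NFS server speaks NFSv3 and exports each volume as /<volname>
mount -t nfs -o vers=3 node1:/dist-stripe-vol /mnt/vm-nfs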
The documentation can be confusing because:
- Bricks are often mistakenly equated with nodes
- Volume creation syntax doesn't visually distinguish the architectures
- Diagrams sometimes oversimplify the relationships
Always verify your volume structure with:
gluster volume info [volname]
gluster volume status [volname] detail
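The Type and Number of Bricks lines in the info output tell you which layout you actually got; a quick sanity check on the example volume might look like this (exact field formatting varies slightly between GlusterFS versions):

# For the 8-brick example above, expect a Distributed-Stripe type with 4 x 2 = 8 bricks
gluster volume info dist-stripe-vol | grep -E 'Type|Number of Bricks'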
To recap the GlusterFS terminology, these concepts form the foundation:
- Brick: The basic unit of storage, typically representing an exported directory on a server (e.g., /mnt/brick1)
- Node: A physical or virtual server that hosts one or more bricks
- Peer: Another term for node when referring to cluster membership (see the peer commands below)
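Cluster membership itself is managed with the peer commands; a brief sketch, run from a node that is already part of the trusted pool (hostnames are placeholders):

# Add a new node to the trusted storage pool, then confirm membership
gluster peer probe node2
gluster peer status
gluster pool list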
The confusion arises from how GlusterFS abstracts storage components. Let's examine a concrete example:
# Creating a simple striped volume (2 bricks across 2 nodes)
gluster volume create stripe-vol stripe 2 transport tcp \
  node1:/mnt/brick1/stripe-brick node2:/mnt/brick1/stripe-brick

# Creating a distributed striped volume (4 bricks across 2 nodes)
gluster volume create dist-stripe-vol stripe 2 transport tcp \
  node1:/mnt/brick1/ds-brick1 node1:/mnt/brick2/ds-brick2 \
  node2:/mnt/brick1/ds-brick1 node2:/mnt/brick2/ds-brick2
When dealing with VM storage (VMDK/VHD files), consider these patterns:
| Volume Type | I/O Pattern | Recommended Use Case |
|---|---|---|
| Striped | Single file written in parallel across all bricks | Large sequential I/O (single large VMDK) |
| Distributed Striped | Files spread across stripe sets, striped in parallel within each set | Multiple concurrent VMs with mixed I/O |
A frequent ESXi integration mistake is brick sizing inconsistency. Verify with:
gluster volume info [volname] | grep -A5 "Bricks"
gluster volume status [volname] detail | grep -i size
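Since the gluster commands only report what the cluster already knows, it can also help to compare the underlying brick filesystems directly; a rough sketch, assuming passwordless SSH between nodes and the brick layout from the earlier examples:

# Compare the size of the brick filesystem on every node
for n in node1 node2 node3 node4; do
  ssh "$n" df -h /srv/gluster
done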
For optimal NFS performance in VMware environments, always:
- Match brick sizes across nodes
- Use consistent directory structures
- Enable appropriate volume options (applied with gluster volume set, as sketched below):
  nfs.disable: off
  performance.cache-size: 2GB
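A minimal sketch of applying and then re-checking those options on the distributed striped volume from earlier (the cache size is a starting point, not a tuned recommendation):

gluster volume set dist-stripe-vol nfs.disable off
gluster volume set dist-stripe-vol performance.cache-size 2GB
# Reconfigured options are listed at the end of the volume info output
gluster volume info dist-stripe-vol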
Here's an example production configuration for an 8-brick, 4-node GlusterFS backend serving ESXi:
gluster volume create vmstore stripe 4 transport tcp \
  node{1..4}:/mnt/brick{1..2}/vmstore
gluster volume set vmstore nfs.disable off
gluster volume set vmstore performance.cache-size 4GB
gluster volume set vmstore cluster.lookup-optimize on
gluster volume start vmstore
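On the ESXi side, the exported volume can then be attached as an NFSv3 datastore; a hedged sketch using esxcli (the host, share, and datastore names are placeholders, and any node in the pool can serve the export):

# Run on each ESXi host to add the Gluster NFS export as a datastore
esxcli storage nfs add --host=node1 --share=/vmstore --volume-name=gluster-vmstore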