SAN vs NAS vs DAS: Storage Architecture Comparison for Developers and System Design



When designing systems, developers often face a choice between three storage architectures: DAS, NAS, and SAN. Let's break down each technology with concrete examples.

DAS (Direct-Attached Storage) is the simplest form: storage physically connected to a single host:

# Typical DAS workflow in Linux
fdisk -l                       # Lists attached block devices
mkfs.ext4 /dev/sdb1            # Formats a DAS partition
mount /dev/sdb1 /mnt/mydrive   # Mounts it

NAS (Network-Attached Storage) provides file-level storage accessed over IP networks:

# Example: mounting NAS shares from Python (requires root)
import subprocess

# NFS mount
subprocess.run(
    ['mount', '-t', 'nfs', '192.168.1.100:/shared', '/mnt/nas'],
    check=True)

# SMB/CIFS mount
subprocess.run(
    ['mount', '-t', 'cifs', '//nas-server/share', '/mnt/nas',
     '-o', 'username=user,password=pass'],
    check=True)

A SAN (Storage Area Network) delivers block-level storage over a dedicated high-speed network:

# iSCSI SAN connection example
iscsiadm -m discovery -t st -p 192.168.1.200
iscsiadm -m node -T iqn.2020-01.com.example:storage -p 192.168.1.200 -l

# Fibre Channel SAN management
systool -c fc_host -v  # Shows FC host bus adapters

Benchmarking different storage types:

# Python benchmark script
import os
import time

def test_write_speed(path):
    """Time a 100 MB write, forcing data to disk with fsync so the
    page cache doesn't mask the real storage speed."""
    testfile = os.path.join(path, "testfile")
    start = time.time()
    with open(testfile, "wb") as f:
        f.write(os.urandom(1024 * 1024 * 100))  # 100 MB
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(testfile)
    return elapsed

print(f"DAS: {test_write_speed('/mnt/das'):.2f}s")
print(f"NAS: {test_write_speed('/mnt/nas'):.2f}s")
print(f"SAN: {test_write_speed('/mnt/san'):.2f}s")

Consider these factors when implementing storage:

  • Latency-sensitive apps: SAN for database systems
  • Collaboration needs: NAS for file sharing
  • Budget constraints: DAS for single-server setups
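Those rules of thumb can be encoded in a small helper. The function below is purely illustrative (the name and workload keywords are hypothetical), not a substitute for real capacity planning:

```python
def choose_storage(workload: str, budget_limited: bool = False) -> str:
    """Suggest a storage architecture for a workload, using the
    rules of thumb above."""
    if budget_limited:
        return "DAS"   # single-server setups on a budget
    if workload in ("database", "oltp", "low-latency"):
        return "SAN"   # latency-sensitive block storage
    if workload in ("file-sharing", "collaboration", "backups"):
        return "NAS"   # shared file-level access
    return "DAS"       # default: the simplest option

print(choose_storage("database"))      # SAN
print(choose_storage("file-sharing"))  # NAS
```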

Setting up redundant SAN paths:

# Multipath configuration for SAN
multipath -ll  # Lists SAN paths
mpathconf --enable --with_multipathd y

Automating NAS backups with rsync:

#!/bin/bash
mountpoint -q /mnt/nas || exit 1  # don't rsync into an unmounted mount point
rsync -avz --delete /critical/data/ /mnt/nas/backups/

To recap: when building data-intensive applications, developers must understand three fundamental storage architectures.

DAS is the simplest form, where storage is directly connected to a single server:

# Linux example: Checking DAS devices
lsblk
fdisk -l

Common in workstations and small servers. High performance but limited scalability. Think of your laptop's SSD - that's DAS.
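Because DAS appears as an ordinary local filesystem, inspecting it needs nothing beyond the standard library. A quick sketch (substitute the mount point of your own DAS volume for "/"):

```python
import shutil

# Query the capacity of the filesystem backing a path (here: the root FS).
usage = shutil.disk_usage("/")
print(f"total: {usage.total / 1e9:.1f} GB")
print(f"free:  {usage.free / 1e9:.1f} GB")
```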

NAS is a file-level storage architecture accessed over a network:

# Python example: Accessing NAS via NFS
import os
os.listdir('/mnt/nas_share')

Uses protocols like NFS or SMB. Great for shared file access but has network latency. Developers often use NAS for:

  • Team project repositories
  • Shared development environments
  • Backup destinations
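One practical wrinkle with shared NAS access: two developers or CI jobs can write the same file at once. On Linux, advisory locks via `fcntl.flock` are one way to coordinate writers. A sketch (the path is illustrative, and lock semantics over NFS depend on the server and mount options, so treat this as a starting point):

```python
import fcntl

# Take an exclusive advisory lock before appending to a shared file.
with open("/tmp/shared-report.txt", "a") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
    f.write("build 42: OK\n")
    fcntl.flock(f, fcntl.LOCK_UN)   # release (also released on close)
```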

A SAN is a block-level storage network whose volumes appear as local storage to servers:

# iSCSI initiator configuration example (Linux)
sudo iscsiadm -m discovery -t st -p 192.168.1.100
sudo iscsiadm -m node -T iqn.2023-01.com.example:storage -p 192.168.1.100 -l

Key characteristics:

  • Uses Fibre Channel or iSCSI
  • Enables features like storage virtualization
  • Essential for high-performance databases
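Once a SAN LUN is attached, it shows up as an ordinary block device; scripts can't tell it from a local disk without extra metadata. `lsblk -J` emits JSON that is easy to parse. The sketch below runs against captured sample output (the device names and sizes are illustrative):

```python
import json

# Sample output captured from `lsblk -J -o NAME,SIZE,TYPE` (illustrative).
sample = '''
{"blockdevices": [
  {"name": "sda", "size": "500G", "type": "disk"},
  {"name": "sdb", "size": "2T",   "type": "disk"}
]}
'''

devices = json.loads(sample)["blockdevices"]
for dev in devices:
    print(f"/dev/{dev['name']}: {dev['size']} ({dev['type']})")
```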

Latency benchmarks (typical values):

Type  Latency  Throughput
DAS   ~100 μs  Highest
SAN   ~1 ms    High
NAS   ~10 ms   Medium
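The latency column is dominated by small synchronous writes rather than bulk throughput, so measuring it looks different from the 100 MB benchmark earlier: many tiny fsync'd writes, averaged. A rough sketch (for serious measurements, use a dedicated tool like `fio`):

```python
import os
import time

def avg_write_latency(path, iterations=100):
    """Average seconds per small synchronous write at `path`."""
    testfile = os.path.join(path, "latency-test")
    start = time.time()
    with open(testfile, "wb") as f:
        for _ in range(iterations):
            f.write(b"x" * 512)
            f.flush()
            os.fsync(f.fileno())   # force each write to stable storage
    elapsed = (time.time() - start) / iterations
    os.remove(testfile)
    return elapsed

print(f"avg latency: {avg_write_latency('/tmp') * 1e6:.0f} µs")
```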

DAS: Local development environments, performance testing

NAS: Team code repositories, shared assets, CI/CD pipelines

SAN: Production databases, high-availability systems

Modern cloud environments mirror these architectures:

// AWS SDK example: inspecting an EBS volume. Note that EBS is
// network-attached block storage, so it is closer to SAN than DAS;
// instance store (local NVMe) is the true DAS analogue.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();
ec2.describeVolumes({VolumeIds: ['vol-123456']}, (err, data) => {
  if (err) console.error(err);
  else console.log(data);
});

Azure Files / Amazon EFS = NAS, instance store (local NVMe) = DAS, AWS EBS / Azure Disk Storage = SAN-like (network block storage)

Here's how you might configure storage for a web app:

# Docker compose fragment showing mixed storage
services:
  db:
    volumes:
      - san-storage:/var/lib/mysql  # SAN-backed volume for the database
  app:
    volumes:
      - ./code:/app                 # DAS (host disk) for code
      - nas-share:/uploads          # NAS for user content

volumes:
  san-storage: {}    # assumed to be backed by an iSCSI/FC LUN on the host
  nas-share:         # NFS-backed named volume (address is illustrative)
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw
      device: ":/shared"