Understanding Linux Virtual Memory: Why VIRT Exceeds Physical + Swap Space


Many Linux administrators encounter this confusing scenario: a process shows 70GB of virtual memory (VIRT) in top, while the system only has 8GB physical RAM and 35GB swap. This seems to violate the equation VIRT = SWAP + RES from the top manual.

The key misunderstanding lies in how Linux handles virtual address space allocation. When a process calls malloc() or mmap(), the kernel only reserves a range of virtual addresses; physical pages are allocated lazily, on first access:


// Example of memory mapping that doesn't consume physical resources
void *addr = mmap(NULL, 1024*1024*1024, PROT_READ|PROT_WRITE, 
                 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
if (addr == MAP_FAILED) {
    perror("mmap");
}
// This 1GB allocation appears in VIRT but uses no physical memory yet

Common scenarios where VIRT greatly exceeds actual memory use include the following (a small demonstration follows the list):

  • Sparse arrays: Allocating large address ranges with mmap but touching only a fraction of the pages
  • Memory-mapped files: Mapping files larger than physical memory
  • JVM/managed languages: Pre-allocating address space for heaps
  • Memory leaks: Continuous allocation without freeing
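
As a minimal sketch of the first case (the 8GB figure is illustrative, and it assumes a Linux system with /proc mounted), the following program reserves far more address space than it touches and then prints its own VmSize (VIRT) and VmRSS (RES) from /proc/self/status:

// Sketch: reserve a large range, touch only a few pages, compare VIRT vs RES
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    size_t size = 8UL * 1024 * 1024 * 1024;     // 8GB of address space (illustrative)
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    for (int i = 0; i < 10; i++)                // touch only 10 pages
        p[i * page] = 1;

    // Print the kernel's own accounting: VmSize (VIRT) vs VmRSS (RES)
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    while (f && fgets(line, sizeof line, f))
        if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
            fputs(line, stdout);
    if (f) fclose(f);
    return 0;
}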

To identify the actual memory usage breakdown:


pmap -x [PID]
# Sample output:
Address           Kbytes     RSS   Dirty Mode  Mapping
0000555555554000       4       4       0 r-x-- myprogram
00007ffff7a3d000    1024       0       0 ----- [ anon ]
00007ffff7e3d000 1048576       4       4 rw--- [ anon ]

High VIRT becomes problematic in the following situations (a programmatic check for the first one is sketched after the list):

  • The process approaches 64-bit address space limits
  • Memory fragmentation causes allocation failures
  • You see rapid growth in RES (resident memory)
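
For the first of these, a process can check its own address-space ceiling with getrlimit(RLIMIT_AS). A minimal sketch; note the limit is often unlimited unless set via ulimit -v or a service manager:

// Sketch: report the RLIMIT_AS ceiling that caps total virtual address space
#include <sys/resource.h>
#include <stdio.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("RLIMIT_AS: unlimited\n");
    else
        printf("RLIMIT_AS: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}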

For processes that allocate large virtual ranges:


// Use madvise to hint the kernel about the access pattern
madvise(addr, length, MADV_SEQUENTIAL);   // addr/length from an earlier mmap

# Set a hard memory limit with cgroups (cgroup v1 path; under cgroup v2 write to memory.max)
echo "1073741824" > /sys/fs/cgroup/memory/groupname/memory.limit_in_bytes

Looking at the scenario in more detail, the top or htop output typically looks like this:

# top output sample
PID   USER      VIRT   RES    SHR  %MEM  COMMAND
1234  appuser   70g    6g     300m 75%   java

This shows a process consuming 70GB virtual memory (VIRT) while the server only has 8GB physical RAM and 35GB swap. How is this possible?

The key misunderstanding lies in interpreting what VIRT represents. The description in man top can be misleading because the kernel tracks many separate per-process memory counters, of which VIRT summarizes only the total mapped address space:

// Actual Linux kernel mm_struct representation
struct mm_struct {
    unsigned long task_size;    /* size of task vm space */
    unsigned long total_vm;     /* Total pages mapped */
    unsigned long locked_vm;    /* Pages that can't swap out */
    unsigned long pinned_vm;    /* Refcount permanently increased */
    unsigned long data_vm;      /* VM_WRITE & ~VM_SHARED & ~VM_STACK */
    unsigned long exec_vm;      /* VM_EXEC & ~VM_WRITE & ~VM_STACK */
    unsigned long stack_vm;     /* VM_STACK */
    // ... 20+ other counters
};

VIRT includes all mapped memory regions, not just resident or swap-backed pages.
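
In other words, top's VIRT is just total_vm scaled to bytes. A small sketch of how to reproduce it (assuming /proc is mounted) reads the first two fields of /proc/self/statm, which are the mapped size and resident set in pages:

// Sketch: reproduce top's VIRT and RES from /proc/self/statm
#include <stdio.h>
#include <unistd.h>

int main(void) {
    unsigned long size_pages, resident_pages;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f || fscanf(f, "%lu %lu", &size_pages, &resident_pages) != 2) {
        perror("statm");
        return 1;
    }
    fclose(f);
    long page = sysconf(_SC_PAGESIZE);
    printf("VIRT: %lu kB  RES: %lu kB\n",
           size_pages * page / 1024, resident_pages * page / 1024);
    return 0;
}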

Why VIRT Can Exceed RAM+Swap

Here are technical cases where you'll see this discrepancy:

// Example 1: Memory-mapped large files
int fd = open("huge_file.bin", O_RDONLY);
void *addr = mmap(NULL, 50000000000, PROT_READ, 
                 MAP_PRIVATE | MAP_NORESERVE, fd, 0);

// Example 2: Sparse allocations
void *mem = mmap(NULL, 1UL << 40, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
madvise(mem, 1UL << 40, MADV_DONTNEED);

To investigate, examine the detailed memory mapping:

# Diagnostic commands
cat /proc/1234/maps
pmap -x 1234
awk '/^Size:/{total+=$2} END{print total " kB"}' /proc/1234/smaps

Look for these indicators in the output:

  • deleted - Memory-mapped files that have been unlinked (the path is suffixed with "(deleted)")
  • ---p - PROT_NONE private mappings: address space reserved with no access permissions
  • Huge gaps between mapping addresses

While alarming at first glance, high VIRT alongside a modest RES is usually benign. The JVM is a typical example: it reserves its maximum heap and other regions up front, so the reservation shows up in VIRT long before any of it is committed:

# Typical JVM flags that reserve large virtual ranges
-Xmx64g                           # Max heap (virtual reservation)
-XX:MaxDirectMemorySize=10g       # NIO direct buffers
-XX:NativeMemoryTracking=summary  # Reports reserved vs committed native memory

Application frameworks (especially JVM, .NET Core) often reserve large address spaces but only commit what's needed.
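
The underlying pattern these runtimes use can be sketched in plain C (a simplified illustration, not the JVM's actual implementation): reserve a large PROT_NONE region up front, then commit pieces with mprotect only as they are needed:

// Sketch of the reserve-then-commit pattern used by many runtimes and allocators
#include <sys/mman.h>
#include <stdio.h>

int main(void) {
    size_t reserve = 4UL * 1024 * 1024 * 1024;    // 4GB reserved: counts in VIRT only
    char *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // Commit only the first 16MB when it is actually needed; only this part
    // can contribute to RES once it is written to.
    size_t commit = 16UL * 1024 * 1024;
    if (mprotect(base, commit, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect");
        return 1;
    }
    base[0] = 1;   // first page becomes resident
    return 0;
}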

Investigate further if you see any of the following (a small self-check for the first two is sketched after the list):

  • RSS approaching physical memory limits
  • Major page faults increasing (ps -o maj_flt)
  • OOM killer activity in dmesg
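
For the first two indicators, a process can also check itself: getrusage() reports the peak resident set size and cumulative fault counts. A minimal sketch; on Linux ru_maxrss is in kilobytes:

// Sketch: report peak RSS and page-fault counts for the current process
#include <sys/resource.h>
#include <stdio.h>

int main(void) {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0) {
        perror("getrusage");
        return 1;
    }
    printf("peak RSS: %ld kB, major faults: %ld, minor faults: %ld\n",
           ru.ru_maxrss, ru.ru_majflt, ru.ru_minflt);
    return 0;
}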