Understanding and Troubleshooting Excessive Page Faults/sec in Windows Server 2008


Page faults occur when a process accesses a page that is mapped in its virtual address space but is not currently in its working set. Soft faults, where the page is found elsewhere in physical memory, are normal and cheap to resolve; excessive hard faults, which require disk access, can severely impact performance.

In Windows Server 2008, there's no universal threshold for "too many" page faults. However, consider these guidelines:

  • Occasional spikes to 1,000+ faults/sec might be normal during application startup
  • Sustained rates above 500 faults/sec often indicate memory pressure
  • Combine with Memory\Pages/sec counter for better insight

Instead of monitoring Page Faults/sec in isolation, sample the related counters together and flag only the values worth attention:

# PowerShell script to monitor memory metrics
Get-Counter '\Memory\Page Faults/sec', '\Memory\Pages/sec', '\Process(*)\Page Faults/sec' -SampleInterval 2 -MaxSamples 30 |
    ForEach-Object {
        $_.CounterSamples | Where-Object { $_.CookedValue -gt 500 } |
        Format-Table -Property Path, CookedValue -AutoSize
    }

These scenarios warrant investigation (a quick way to watch them side by side is sketched after the list):

  • Simultaneous high Page Faults/sec and Pages/sec
  • Disk queue length increases during high fault periods
  • Application response times correlate with fault spikes
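
One way to watch that correlation is to sample all three counters on a shared schedule, as in the sketch below; the 5-second interval and 12-sample window are arbitrary starting points, and PhysicalDisk(_Total) assumes the page file is not isolated on a dedicated disk:

# Sample memory and disk counters together so spikes can be lined up by timestamp
$counters = '\Memory\Page Faults/sec',
            '\Memory\Pages/sec',
            '\PhysicalDisk(_Total)\Avg. Disk Queue Length'

Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples |
            Select-Object @{n='Time';    e={$_.Timestamp}},
                          @{n='Counter'; e={$_.Path}},
                          @{n='Value';   e={[math]::Round($_.CookedValue, 2)}}
    } | Format-Table -AutoSize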

Try these actions when facing excessive page faults:

  1. Identify culprit processes using Process Explorer (or the PowerShell check sketched after this list)
  2. Check for memory leaks with Performance Monitor
  3. Consider adding physical RAM if working set sizes consistently exceed available memory
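
For step 1, a quick PowerShell pass like the following can point at the processes generating the most faults before you attach Process Explorer. Keep in mind that the per-process counter mixes soft and hard faults, and the 5-second interval and top-5 cut-off are arbitrary:

# Show the five busiest processes by page fault rate (soft + hard combined)
Get-Counter '\Process(*)\Page Faults/sec' -SampleInterval 5 -MaxSamples 3 |
    ForEach-Object {
        $_.CounterSamples |
            Where-Object { '_total', 'idle' -notcontains $_.InstanceName } |
            Sort-Object CookedValue -Descending |
            Select-Object -First 5 InstanceName,
                          @{n='FaultsPerSec'; e={[math]::Round($_.CookedValue, 1)}}
    }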

For deeper analysis, use Windows Performance Recorder:

wpr -start GeneralProfile -start Memory -filemode
# Reproduce the issue
wpr -stop MemoryAnalysis.etl

Analyze the trace in Windows Performance Analyzer, focusing on memory graphs and process activity.


When monitoring Windows Server 2008 performance counters, Page Faults/sec represents the rate at which processes request pages that aren't in their working sets. There are two types to consider:

  • Soft page faults: the required page is found elsewhere in physical memory (RAM), so no disk I/O is needed
  • Hard page faults: the page must be read from disk (the page file or a memory-mapped file)

While there's no universal threshold for "excessive" page faults, consider these guidelines:

# Example PowerShell snippet to monitor page faults
Get-Counter -Counter "\Memory\Page Faults/sec" -SampleInterval 2 -MaxSamples 5 |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table -AutoSize

As a rough guide, sustained values in these ranges suggest the following:

  • 100-1,000/sec: Normal for busy systems
  • 1,000-5,000/sec: Potentially concerning; warrants investigation
  • 5,000+/sec: Likely indicates serious memory constraints

Rather than focusing on absolute numbers, examine these ratios:

// Calculate page fault ratio (C# example; totalMemoryAccesses must come from
// your own instrumentation - there is no built-in PerfMon counter for it)
float faultRatio = (pageFaultsPerSec / (float)totalMemoryAccesses) * 100;
if (faultRatio > 10) 
{
    // Potential memory bottleneck detected
}

For comprehensive analysis, combine these counters (a collection sketch follows the list):

  1. \Memory\Pages/sec (hard page faults)
  2. \Process(_Total)\Page Faults/sec
  3. \Memory\Available MBytes
  4. \Memory\Cache Faults/sec
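
One way to capture all four on the same schedule is to let Get-Counter feed Export-Counter; the interval, sample count, and output path below are placeholders to adjust:

# Collect a baseline of the four counters into a .blg file for later review in PerfMon
$counters = '\Memory\Pages/sec',
            '\Process(_Total)\Page Faults/sec',
            '\Memory\Available MBytes',
            '\Memory\Cache Faults/sec'

Get-Counter -Counter $counters -SampleInterval 10 -MaxSamples 60 |
    Export-Counter -Path "$env:TEMP\memory-baseline.blg" -FileFormat BLG

The resulting .blg opens directly in Performance Monitor, which makes it easy to overlay the four counters on one timeline.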

When facing sustained high page faults:

:: Windows batch script to log page fault trends
@echo off
:: (date parsing assumes a US-style %DATE%; adjust the offsets for your locale)
set LOGFILE=PageFaultLog_%DATE:~-4%-%DATE:~4,2%-%DATE:~7,2%.csv
echo Timestamp,PageFaultsPerSec,AvailableMB > %LOGFILE%

:loop
for /f "tokens=2 delims=," %%a in ('typeperf "\Memory\Page Faults/sec" -sc 1 ^| find ":"') do (
    set PAGEFAULTS=%%a
)
for /f "tokens=2 delims=," %%b in ('typeperf "\Memory\Available MBytes" -sc 1 ^| find ":"') do (
    set AVAILMEM=%%b
)
echo %TIME%,%PAGEFAULTS%,%AVAILMEM% >> %LOGFILE%
timeout /t 30 >nul
goto loop

Combine this with PerfMon's "Pages Input/sec" counter to differentiate between hard and soft faults.
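
A minimal sketch of that split in PowerShell, treating Pages Input/sec as an approximation of hard faults (a single sample here; average over a longer window in practice):

# Pages Input/sec approximates hard faults; the remainder was resolved without disk I/O
$sample = Get-Counter '\Memory\Page Faults/sec', '\Memory\Pages Input/sec'
$total  = ($sample.CounterSamples | Where-Object { $_.Path -like '*page faults/sec' }).CookedValue
$hard   = ($sample.CounterSamples | Where-Object { $_.Path -like '*pages input/sec' }).CookedValue

"Total faults/sec : {0:N1}" -f $total
"Hard faults/sec  : {0:N1}" -f $hard
"Soft faults/sec  : {0:N1}" -f ($total - $hard)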

For applications causing excessive faults:

  • Implement memory caching strategies
  • Optimize working set size with SetProcessWorkingSetSizeEx (see the sketch below)
  • Review memory allocation patterns (consider using ETW tracing)
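
If you control the process, the same API can be driven from PowerShell via P/Invoke. The sketch below is illustrative only: the process name and the 64 MB / 256 MB bounds are placeholders, it assumes a single matching process you have rights to adjust, and the flags keep both limits soft rather than enforced:

# Sketch: adjust a process's working set bounds via SetProcessWorkingSetSizeEx
Add-Type -Namespace Win32 -Name WorkingSet -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetProcessWorkingSetSizeEx(
    IntPtr hProcess, IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize, uint Flags);
'@

$QUOTA_LIMITS_HARDWS_MIN_DISABLE = 0x2    # minimum is a soft limit
$QUOTA_LIMITS_HARDWS_MAX_DISABLE = 0x8    # maximum is a soft limit
$flags = $QUOTA_LIMITS_HARDWS_MIN_DISABLE -bor $QUOTA_LIMITS_HARDWS_MAX_DISABLE

$proc = Get-Process -Name 'YourAppName'   # hypothetical target; requires PROCESS_SET_QUOTA access
$ok = [Win32.WorkingSet]::SetProcessWorkingSetSizeEx($proc.Handle, [IntPtr]64MB, [IntPtr]256MB, $flags)
if (-not $ok) {
    throw "SetProcessWorkingSetSizeEx failed: $([Runtime.InteropServices.Marshal]::GetLastWin32Error())"
}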