After migrating from IIS 7.5 to IIS 8, we observed a peculiar pattern: application pools maintained sustained CPU load, in exact 12.5% increments (one core on our 8-core servers), despite zero active requests. The behavior persisted through website restarts and only cleared with an app pool recycle.
First, verify this isn't application code related:
1. Run Process Explorer and inspect the w3wp thread stacks during a high-CPU period (a scriptable alternative follows this list)
2. Enable Failed Request Tracing with this configuration:
<tracing>
  <traceFailedRequests>
    <add path="*">
      <traceAreas>
        <add provider="ASP" verbosity="Verbose" />
        <add provider="ISAPI Extension" verbosity="Verbose" />
      </traceAreas>
      <failureDefinitions statusCodes="200-999" />
    </add>
  </traceFailedRequests>
</tracing>
3. Test whether processor affinity is involved by pinning the pool to specific cores (in system.applicationHost; a mask of 255 covers all 8 logical processors, so narrow it, e.g. to 15, and watch whether the stuck load follows):
<applicationPools>
  <add name="MyAppPool">
    <cpu smpAffinitized="true" smpProcessorAffinityMask="255" />
  </add>
</applicationPools>
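As a scriptable alternative to Process Explorer for step 1, you can list the busiest threads in each worker process (a rough sketch; run it during a high-CPU period):
# Show the five threads with the most accumulated CPU time per w3wp process
Get-Process -Name w3wp | ForEach-Object {
    $_.Threads |
        Sort-Object TotalProcessorTime -Descending |
        Select-Object Id, ThreadState, TotalProcessorTime -First 5
}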
Key IIS 8 settings that affect idle behavior:
- Idle Timeout (often carried over as 0 in migrations, which disables idle shutdown entirely)
- Processor affinity settings
- Dynamic CPU throttling (the cpu limit/action pair)
Try this PowerShell snippet to detect misconfigurations:
Import-Module WebAdministration
Get-ChildItem IIS:\AppPools | ForEach-Object {
    # Read the idle-timeout and CPU-throttling settings directly off each pool
    [PSCustomObject]@{
        AppPool     = $_.Name
        State       = $_.state
        IdleTimeout = $_.processModel.idleTimeout
        CPULimit    = $_.cpu.limit
        CPUAction   = $_.cpu.action
    }
}
Rackspace cloud servers often run on NUMA hardware, which can interact badly with IIS 8's CPU accounting. Note that <applicationPools> lives under <system.applicationHost>, not <system.webServer>, so the two additions go in separate sections of applicationHost.config:
<system.webServer>
  <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:03" />
</system.webServer>
<system.applicationHost>
  <applicationPools>
    <applicationPoolDefaults>
      <cpu numaNodeAssignment="MostAvailableMemory" />
    </applicationPoolDefaults>
  </applicationPools>
</system.applicationHost>
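After committing the change, you can confirm the new defaults took effect with appcmd:
appcmd.exe list config -section:system.applicationHost/applicationPools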
After extensive testing, we found the combination that resolved it (a PowerShell sketch applying the first three items follows the health-monitoring config below):
- Disable overlapped recycling on the affected app pools
- Set an explicit idle timeout (20 minutes)
- Configure NUMA-aware processor assignment
- Add periodic health monitoring, via ASP.NET health monitoring in web.config:
<healthMonitoring>
  <rules>
    <add name="CPU Monitoring"
         eventName="Request Processing Errors"
         provider="EventLogProvider"
         profile="CPU"
         minInterval="00:00:05" />
  </rules>
  <profiles>
    <add name="CPU" minInstances="1" maxLimit="50" minInterval="00:00:05" />
  </profiles>
</healthMonitoring>
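A minimal sketch applying the first three items, assuming the WebAdministration module and a placeholder pool name:
Import-Module WebAdministration
$pool = 'IIS:\AppPools\YourAppPool'   # placeholder pool name
# Disable overlapped recycling so old and new workers never run side by side
Set-ItemProperty $pool -Name recycling.disallowOverlappingRotation -Value $true
# Explicit 20-minute idle timeout instead of the migrated 0
Set-ItemProperty $pool -Name processModel.idleTimeout -Value '00:20:00'
# IIS 8 NUMA-aware node assignment
Set-ItemProperty $pool -Name cpu.numaNodeAssignment -Value 'MostAvailableMemory'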
After migrating from IIS 7.5 to IIS 8 on Windows Server 2012, we're observing a peculiar pattern where w3wp processes maintain CPU usage at exact multiples of 12.5% (1/8th of total CPU capacity) even during zero-request periods. This persists until app pool recycling occurs, suggesting a thread affinity or processor scheduling issue rather than application code problems.
# Sample PowerShell to monitor thread distribution across all w3wp processes
$w3wpIds = (Get-Process -Name w3wp).Id
Get-WmiObject Win32_Thread |
    Where-Object { $w3wpIds -contains $_.ProcessHandle } |   # ProcessHandle holds the owning PID
    Group-Object Priority, ExecutionState |
    Select-Object Count, Name |
    Sort-Object Count -Descending
The output revealed threads stuck in "Running" state distributed evenly across all 8 logical processors. This explains the 12.5% increments (100%/8 cores).
After testing several IIS 8-specific settings, the following modifications showed positive results (note that idleTimeout 00:00:00 disables idle shutdown entirely, so the pool never enters the problematic idle transition):
appcmd.exe set config -section:system.applicationHost/applicationPools /[name='YourAppPool'].processModel.idleTimeout:00:00:00 /commit:apphost
appcmd.exe set apppool "YourAppPool" -processModel.maxProcesses:1
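You can verify the changes took with appcmd, which dumps every attribute of the pool:
appcmd.exe list apppool "YourAppPool" /text:*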
For immediate relief, we implemented processor affinity control through PowerShell:
# 0x55 = binary 01010101: pin each worker process to the even-numbered logical cores
Get-Process -Name w3wp | ForEach-Object { $_.ProcessorAffinity = 0x55 }
Our working theories on the root cause:
- IIS 8's enhanced CPU throttling interacting poorly with the NUMA architecture (see the Coreinfo check below)
- Changed thread pool management behavior compared to IIS 7.5
- Potential race conditions during idle-mode transitions
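To confirm how many NUMA nodes the guest actually exposes (relevant to the first theory), Sysinternals Coreinfo can dump the node map:
coreinfo.exe -n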
We created a custom PowerShell watchdog that recycles the affected pool whenever a w3wp process settles on an exact multiple of 12.5% of total CPU:
# Sample monitoring logic: recycle when w3wp sits at a 12.5% multiple
Import-Module WebAdministration
$threshold = 12.4
$cores = [Environment]::ProcessorCount
while ($true) {
    $samples = (Get-Counter '\Process(w3wp*)\% Processor Time').CounterSamples
    foreach ($sample in $samples) {
        $cpu = $sample.CookedValue / $cores   # counter is per core; normalize to % of total CPU
        if ($cpu -ge $threshold -and ($cpu % 12.5) -lt 0.1) {
            Restart-WebAppPool -Name "AffectedPool"
        }
    }
    Start-Sleep -Seconds 30
}
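One way to keep the watchdog running is to register it as a startup scheduled job; a sketch, assuming the script is saved to a hypothetical path:
Register-ScheduledJob -Name "W3wpCpuWatchdog" `
    -FilePath "C:\scripts\w3wp-watchdog.ps1" `
    -Trigger (New-JobTrigger -AtStartup)   # C:\scripts\... is a placeholder path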
Additionally, we modified applicationHost.config (under <system.applicationHost>) to include:
<applicationPools>
  <add name="YourAppPool" managedRuntimeVersion="v4.0">
    <recycling logEventOnRecycle="Time, Requests">
      <periodicRestart time="12:00:00" />
    </recycling>
    <processModel idleTimeout="00:00:00" />
  </add>
</applicationPools>
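If you prefer not to hand-edit applicationHost.config, the same settings can be applied with appcmd (pool name is a placeholder):
appcmd.exe set apppool "YourAppPool" /recycling.periodicRestart.time:12:00:00 /processModel.idleTimeout:00:00:00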