When the production server consistently aborts requests at exactly 1 minute 19 seconds (79,000ms) while development environments complete similar requests in 1 minute 44 seconds, we're seeing classic signs of infrastructure-level interference. The Win32 status code 995 (0x3E3) specifically indicates ERROR_OPERATION_ABORTED, meaning an overlapped I/O operation was cancelled before completion.
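If you ever need to confirm what a raw Win32 status code means, the framework can translate it for you; here's a minimal console sketch using System.ComponentModel.Win32Exception:

using System;
using System.ComponentModel;

class Win32StatusLookup
{
    static void Main()
    {
        // 995 (0x3E3) is the sc-win32-status value from the IIS log entry.
        var error = new Win32Exception(995);
        Console.WriteLine(error.Message);
        // Typically prints: "The I/O operation has been aborted because of
        // either a thread exit or an application request."
    }
}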
In our specific case with 650KB responses:
HTTP Request → IIS Worker Process → Application Code Execution →
Response Buffering → Network Transmission → Firewall Interrupt
The access log tells the story:
200 0 995 0 76933   # sc-win32-status 995 = connection aborted by an external factor
Firewalls and load balancers often have default idle connection timeouts. Common culprits:
- Azure Application Gateway: Default 4 minute timeout
- F5 BIG-IP: Typical 300-second timeout
- Windows Firewall: 60-120 second thresholds
1. Network Capture:
netsh trace start scenario=netconnection capture=yes tracefile=C:\temp\nettrace.etl
# Reproduce the issue
netsh trace stop
2. IIS Configuration Check:
<system.webServer>
  <asp scriptTimeout="00:05:00" />
  <serverRuntime uploadReadAheadSize="49152" />
</system.webServer>

<system.web>
  <httpRuntime executionTimeout="300" maxRequestLength="10240" />
</system.web>
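Configuration inheritance between applicationHost.config and web.config can be surprising, so it's worth checking which timeout the running application actually sees. A minimal sketch (a throwaway handler you'd register temporarily) that reports the effective script timeout from inside a request:

using System.Web;

public class TimeoutProbeHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // Server.ScriptTimeout reflects the effective executionTimeout (in seconds)
        // for this request; compare it against the 300 seconds configured above.
        context.Response.ContentType = "text/plain";
        context.Response.Write("Effective timeout: " + context.Server.ScriptTimeout + "s");
    }
}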
Keep-alive Implementation:
using System.Threading;
using System.Web;

public class KeepAliveHandler : IHttpHandler
{
    // Required by IHttpHandler; a new instance per request is fine here.
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // Disable buffering so each Write reaches the client immediately.
        context.Response.Buffer = false;
        context.Response.Write("PING");
        context.Response.Flush();

        // Main processing here
        Thread.Sleep(120000); // Simulate a two-minute operation
        context.Response.Write("COMPLETE");
    }
}
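If the endpoint is an MVC action rather than a handler, the same flush-as-you-go approach applies; in this sketch GetReportChunks() is a hypothetical stand-in for whatever produces the 650KB report in pieces:

using System.Collections.Generic;
using System.Web.Mvc;

public class ReportController : Controller
{
    public void LargeReport()
    {
        Response.BufferOutput = false;

        foreach (string chunk in GetReportChunks())
        {
            Response.Write(chunk);
            Response.Flush(); // Each flush puts packets on the wire, resetting idle timers
        }
    }

    // Hypothetical helper: yield the real report in pieces here.
    private IEnumerable<string> GetReportChunks()
    {
        yield return "...";
    }
}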
Web.config Adjustments:
<configuration>
  <system.webServer>
    <webSocket enabled="false" />
    <asp enableChunkedEncoding="false" />
  </system.webServer>
</configuration>
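If the consumer is a .NET client you control rather than a browser, you can also enable OS-level TCP keep-alive probes so idle-detection devices keep seeing packets on an otherwise quiet connection. This is a sketch using ServicePointManager with a placeholder URL; note that some middleboxes ignore empty keep-alive segments, so treat it as a complement to the flushing techniques above:

using System;
using System.Net;

class KeepAliveClientConfig
{
    static void Main()
    {
        // Placeholder URL; point at the real long-running endpoint.
        var uri = new Uri("https://endpoint.com/api/large_report");

        // Send a keep-alive probe after 30 seconds of idle time, then every 5 seconds.
        ServicePointManager.FindServicePoint(uri).SetTcpKeepAlive(true, 30000, 5000);

        // Requests made through this service point now carry TCP keep-alive probes.
    }
}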
The two environments also differ in ways that matter here:

| Factor | Production | Development |
|---|---|---|
| OS | Windows Server 2008 | Windows 7 |
| IIS Version | 7.0 | 7.5 |
| Default Timeout | 120s (typical) | No enforced timeout |
To confirm firewall interference:
Test-NetConnection -ComputerName endpoint.com -Port 443 -InformationLevel Detailed
Remember that some network devices log connection resets separately from their main logs. Always check:
- Wireshark/Tcpdump captures
- Proxy server logs
- TCP/IP stack settings (netsh int tcp show global)
The production IIS log makes the external abort easy to spot:
# Sample IIS log entry showing the abort
2023-11-15 14:22:33 192.168.1.100 GET /api/large_report - 80 - 10.0.0.1 Mozilla/5.0 200 0 995 0 76933
Notice the critical indicators:
- sc-status 200 suggests IIS thought the request was successful
- sc-bytes 0 confirms no data actually reached the client
- sc-win32-status 995 reveals the underlying Windows abort
- time-taken 76933ms (76.9s) shows when the abort occurred
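To gauge how widespread the problem is, you can sweep the existing W3C logs for entries with sc-win32-status 995. A minimal sketch, assuming the field layout of the sample line above and a hypothetical log folder:

using System;
using System.IO;
using System.Linq;

class AbortedRequestScan
{
    static void Main()
    {
        // Adjust to the site's actual W3C log directory.
        var logDir = @"C:\inetpub\logs\LogFiles\W3SVC1";

        var aborted = Directory.EnumerateFiles(logDir, "*.log")
            .SelectMany(File.ReadLines)
            .Where(line => !line.StartsWith("#"))   // skip W3C header lines
            .Select(line => line.Split(' '))
            .Where(f => f.Length >= 15)             // matches the sample layout above
            .Where(f => f[f.Length - 3] == "995");  // sc-win32-status field

        foreach (var f in aborted)
            Console.WriteLine("Abort after {0} ms: {1}", f[f.Length - 1], f[4]); // time-taken, cs-uri-stem
    }
}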
Many corporate firewalls (Check Point, Palo Alto, and Cisco ASA appliances are common examples) can be configured with TCP idle timeouts as low as 60-90 seconds. A long-running request that doesn't flush any data during that window looks idle to such a device:
// ASP.NET MVC action vulnerable to firewall idle timeouts
public async Task<ActionResult> GenerateReport()
{
    var data = await GetLargeDataset(); // Takes 60+ seconds with nothing sent to the client
    return Json(data);                  // Single flush at the very end
}
1. Protocol-Level Keepalives
Get data onto the wire as early as possible so idle detection never sees a silent connection:
// In Global.asax.cs
protected void Application_BeginRequest()
{
    Response.BufferOutput = false;
    Response.Write(" "); // Initial byte so the flush has something to send
    Response.Flush();
}
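Because this writes a leading space into every response, consider scoping it to the slow endpoint only; here's a sketch keyed off the /api/large_report path from the log sample (substitute the real route):

// In Global.asax.cs - early flush only for the long-running endpoint
protected void Application_BeginRequest()
{
    if (!Request.Path.StartsWith("/api/large_report", StringComparison.OrdinalIgnoreCase))
        return;

    Response.BufferOutput = false;
    Response.Write(" "); // Harmless leading whitespace for text responses
    Response.Flush();    // Puts packets on the wire immediately
}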
2. IIS Configuration Tweaks
<system.webServer>
  <serverRuntime frequentHitThreshold="1"
                 frequentHitTimePeriod="00:00:30" />
</system.webServer>
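To verify what IIS is actually running with (applicationHost.config can lock or override these values), here's a sketch using the Microsoft.Web.Administration API, run on the server with administrative rights:

using System;
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

class ServerRuntimeDump
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // Read the effective serverRuntime section.
            var config = serverManager.GetApplicationHostConfiguration();
            var runtime = config.GetSection("system.webServer/serverRuntime");

            Console.WriteLine("frequentHitThreshold = " + runtime["frequentHitThreshold"]);
            Console.WriteLine("uploadReadAheadSize  = " + runtime["uploadReadAheadSize"]);
        }
    }
}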
3. Application-Level Heartbeats
For streaming scenarios:
// ASP.NET Core middleware example (requires using System, System.IO, System.Threading.Tasks)
app.Use(async (context, next) =>
{
    // Let the downstream handler write into a memory buffer...
    var buffer = new MemoryStream();
    var originalBody = context.Response.Body;
    context.Response.Body = buffer;

    var pipeline = next();

    // ...while we keep the real connection alive with a byte every 30 seconds.
    // Note: the first heartbeat starts the response, so downstream header
    // changes made after that point will fail.
    while (await Task.WhenAny(pipeline, Task.Delay(TimeSpan.FromSeconds(30))) != pipeline)
    {
        await originalBody.WriteAsync(new byte[] { (byte)' ' }, 0, 1);
        await originalBody.FlushAsync();
    }
    await pipeline; // Propagate any downstream exception

    // Copy the buffered response to the client and restore the original stream.
    context.Response.Body = originalBody;
    buffer.Position = 0;
    await buffer.CopyToAsync(originalBody);
});
Use PowerShell to generate slow, periodic traffic through the suspect network path (an SMB share in this example) and watch whether the connection survives:
# Write one line every 15 seconds over a long-lived connection
1..100 | % {
Write-Output "Keepalive $_"
Start-Sleep -Seconds 15
} | Out-File -FilePath \\server\share\test.txt
Monitor with Wireshark for TCP RST packets appearing at the timeout threshold.
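As a complement to the packet capture, a small client that times exactly when the connection dies can pin down the threshold. A sketch using HttpClient with its own timeout disabled, against a placeholder URL:

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class AbortTimer
{
    static async Task Main()
    {
        // Placeholder URL; point at the slow endpoint behind the suspect device.
        var url = "https://endpoint.com/api/large_report";

        using (var client = new HttpClient { Timeout = System.Threading.Timeout.InfiniteTimeSpan })
        {
            var watch = Stopwatch.StartNew();
            try
            {
                var response = await client.GetAsync(url);
                Console.WriteLine("Completed after {0:F1}s with {1}", watch.Elapsed.TotalSeconds, response.StatusCode);
            }
            catch (Exception ex)
            {
                // A drop that consistently lands near the same elapsed time points at an
                // intermediate device rather than the application.
                Console.WriteLine("Connection dropped after {0:F1}s: {1}", watch.Elapsed.TotalSeconds, ex.Message);
            }
        }
    }
}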