I recently encountered a frustrating scenario where JavaScript files would only partially load (~12KB out of 50KB) through our Apache reverse proxy setup (mod_jk connecting to Jetty via AJP). The Chrome DevTools showed:
Failed to load resource: net::ERR_CONNECTION_RESET
What made this particularly puzzling was that the issue only manifested when accessing the application through networks with static IP configurations.
After extensive testing across different environments, I confirmed that:
- Dynamic IP networks worked flawlessly
- Static IP networks consistently triggered the connection reset
- The issue occurred regardless of browser choice (Chrome, Firefox, Edge)
- Router security setting changes didn't resolve it
Wireshark revealed that the TCP connection was being abruptly terminated (RST) by an intermediate network device. The resets can also be confirmed from the command line:
# Sample tcpdump filter to identify RST packets
tcpdump -i any 'tcp[tcpflags] & (tcp-rst) != 0 and port 80'
Many corporate networks with static IPs implement:
- Deep packet inspection (DPI)
- TCP window size manipulation
- Aggressive connection timeouts
- MTU mismatches
These settings in workers.properties helped stabilize our connections:
# Socket timeout for the AJP channel (note: mod_jk measures socket_timeout in seconds)
worker.node1.socket_timeout=300
# Enable TCP keepalive probes on the AJP connection
worker.node1.socket_keepalive=true
# Adjusted TCP buffer sizes
worker.node1.socket_send_buffer=65536
worker.node1.socket_recv_buffer=65536
Corresponding Jetty configuration changes:
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.ajp.Ajp13SocketConnector">
        <Set name="host"><Property name="jetty.host" /></Set>
        <Set name="port">8009</Set>
        <Set name="maxIdleTime">300000</Set>
        <Set name="acceptors">2</Set>
        <Set name="statsOn">false</Set>
        <Set name="lowResourcesConnections">20000</Set>
      </New>
    </Arg>
  </Call>
</Configure>
When infrastructure changes aren't possible:
- Implement chunked transfer encoding
- Add retry logic in your JavaScript loader
- Consider WebSocket fallbacks for critical resources
Example JavaScript retry pattern:
function loadWithRetry(url, maxRetries = 3, delay = 1000) {
  return new Promise((resolve, reject) => {
    const attempt = (n) => {
      fetch(url)
        .then(resolve)
        .catch((err) => {
          if (n < maxRetries) {
            setTimeout(() => attempt(n + 1), delay);
          } else {
            reject(err);
          }
        });
    };
    attempt(0);
  });
}
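A variant worth considering replaces the fixed delay with exponential backoff, which gives transient middlebox throttling more room to clear. This is a sketch; the `fetchFn` parameter is not part of the original pattern and exists only to make the function injectable for testing.

```javascript
// Retry with exponential backoff: the wait doubles after each failed
// attempt (1 s, 2 s, 4 s by default). fetchFn defaults to the global
// fetch and can be swapped out in tests.
function loadWithBackoff(url, maxRetries = 3, baseDelay = 1000, fetchFn = fetch) {
  const attempt = (n) =>
    fetchFn(url).catch((err) => {
      if (n >= maxRetries) throw err;   // out of attempts: surface the error
      const wait = baseDelay * 2 ** n;  // 1x, 2x, 4x ... the base delay
      return new Promise((r) => setTimeout(r, wait)).then(() => attempt(n + 1));
    });
  return attempt(0);
}
```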
Essential metrics to track after implementation:
# Apache mod_status metrics to watch
Total Accesses: 12453
Total kBytes: 102394
CPULoad: 0.123456
Uptime: 123456
ReqPerSec: 1.23456
BytesPerSec: 1023.45
BytesPerReq: 823.45
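A small helper can turn the machine-readable variant of that output (mod_status's ?auto endpoint) into values you can alert on. The sketch below assumes mod_status is enabled at the usual /server-status path.

```javascript
// Parse the key/value lines emitted by Apache's mod_status ?auto
// endpoint (e.g. http://yourdomain.com/server-status?auto).
// Numeric values are converted; anything else (like Scoreboard)
// is kept as a string.
function parseModStatus(text) {
  const metrics = {};
  for (const line of text.split('\n')) {
    const sep = line.indexOf(': ');
    if (sep === -1) continue;
    const key = line.slice(0, sep);
    const value = line.slice(sep + 2);
    const n = Number(value);
    metrics[key] = Number.isNaN(n) ? value : n;
  }
  return metrics;
}
```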
When implementing a reverse proxy setup using Apache's mod_jk to front Jetty applications, I encountered an intermittent issue where JavaScript files would partially load (~12KB out of ~50KB) before failing with ERR_CONNECTION_RESET. The problem manifested exclusively on networks with static IP configurations.
# Sample mod_jk configuration showing relevant parameters
<IfModule mod_jk.c>
    JkWorkersFile conf/workers.properties
    JkMount /* loadbalancer
    JkShmFile run/mod_jk.shm
    JkLogFile logs/mod_jk.log
    JkLogLevel debug
    JkOptions +ForwardSSLCertChain
    JkOptions +ForwardDirectories
    JkRequestLogFormat "%w %V %T %R %{Content-Type}o"
</IfModule>
Static IP networks often implement stricter security policies. Through packet capture analysis, I discovered that some intermediate network devices were aggressively terminating what they perceived as suspiciously long-lived connections (even though the connection duration was normal for file transfers).
Key observations:
- Issue occurs only with static IP networks (both ISP-assigned and corporate)
- Works flawlessly on dynamic IP networks
- Affects both HTTP and HTTPS connections
- The reset occurs at a random offset in the transfer
The root cause appears to be a combination of factors:
- Mod_jk's default buffering behavior with AJP
- Network-level security devices interpreting large chunks as potential threats
- TCP window scaling issues on static IP networks
Working solution with Apache configuration adjustments:
# In httpd.conf or virtual host configuration
<IfModule mod_jk.c>
    # Keepalive settings let the proxy reuse connections instead of
    # opening new ones that security devices may flag
    KeepAlive On
    KeepAliveTimeout 15
    MaxKeepAliveRequests 100
    # Flush AJP packets to the client as they arrive instead of buffering
    JkOptions +FlushPackets
</IfModule>
# In workers.properties
# AJP packet size for problematic networks (8192 is the default; larger
# values must match the connector's packet size on the Jetty side)
worker.loadbalancer.max_packet_size=8192
worker.loadbalancer.socket_keepalive=true
# socket_timeout is measured in seconds
worker.loadbalancer.socket_timeout=30
worker.loadbalancer.connection_pool_size=50
# connect_timeout and prepost_timeout are measured in milliseconds
worker.loadbalancer.connect_timeout=10000
worker.loadbalancer.prepost_timeout=10000
If the above doesn't resolve the issue, consider these additional measures:
1. Network Device Configuration:
Work with network administrators to whitelist your reverse proxy traffic or adjust IDS/IPS thresholds.
2. Application Layer Changes:
// Example Jetty configuration adjustment (Jetty 9+ embedded API)
Server server = new Server();
HttpConfiguration httpConfig = new HttpConfiguration();
httpConfig.setOutputBufferSize(8192);   // Reduced buffer size
httpConfig.setRequestHeaderSize(8192);
httpConfig.setResponseHeaderSize(8192);
ServerConnector connector =
    new ServerConnector(server, new HttpConnectionFactory(httpConfig));
connector.setPort(8080);
server.addConnector(connector);
3. Protocol Considerations:
For critical applications, consider implementing WebSocket fallbacks or chunked transfer encoding as alternatives to traditional file transfers.
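A minimal sketch of such a fallback is shown below. The "/ws/resources" endpoint is a hypothetical server-side companion that answers a URL message with the file's contents, not a standard API, and the `wsFactory` parameter exists only to make the logic testable.

```javascript
// Try a plain fetch first; on failure, request the same resource over
// a WebSocket channel, which some middleboxes treat differently from
// a long-lived HTTP transfer.
function loadWithWsFallback(url, fetchFn = fetch,
                            wsFactory = () => new WebSocket('wss://yourdomain.com/ws/resources')) {
  return fetchFn(url)
    .then((res) => {
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.text();
    })
    .catch(() => new Promise((resolve, reject) => {
      const ws = wsFactory();
      ws.onopen = () => ws.send(url);           // ask for the resource by URL
      ws.onmessage = (ev) => { resolve(ev.data); ws.close(); };
      ws.onerror = (e) => reject(e);
    }));
}
```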
Implement these verification steps to confirm the solution:
# Curl test command with detailed output
curl -v -H "Cache-Control: no-cache" \
     -H "Pragma: no-cache" \
     -H "Connection: keep-alive" \
     https://yourdomain.com/large-file.js \
     --output testfile --trace-time
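To automate that check, compare the bytes actually received against the advertised Content-Length; a shortfall is the client-side signature of a mid-transfer reset. This is a sketch: the URL is a placeholder and `fetchFn` is injectable only for testing.

```javascript
// Detect truncated downloads by comparing received bytes with the
// Content-Length header the server advertised.
async function checkTransfer(url, fetchFn = fetch) {
  const res = await fetchFn(url);
  const expected = Number(res.headers.get('content-length'));
  const body = await res.arrayBuffer();
  return {
    expected,
    received: body.byteLength,
    truncated: body.byteLength < expected,  // true when the reset cut the transfer short
  };
}
```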
Key metrics to monitor:
- TCP retransmission rates
- Connection termination patterns
- Packet fragmentation occurrences
- SSL/TLS handshake success rates