While nightly Tomcat restarts might temporarily mask memory issues, they treat the symptoms rather than curing the disease. I've managed multi-tenant environments where this approach created more problems than it solved; the kind of leak it papers over typically looks like this:
// Example of problematic session handling that causes leaks:
// every HttpSession is pinned by a static map and never released.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

public class UserSessionListener implements HttpSessionListener {

    private static final Map<String, HttpSession> activeSessions = new ConcurrentHashMap<>();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        // Never-cleared static reference
        activeSessions.put(se.getSession().getId(), se.getSession());
    }

    // Missing sessionDestroyed() implementation: entries are never removed
}
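For contrast, a minimal sketch of the corrected listener (same imports as above): the only change that matters is a sessionDestroyed() that removes the entry, so the static map no longer pins invalidated sessions and everything stored in them.

public class UserSessionListener implements HttpSessionListener {

    private static final Map<String, HttpSession> activeSessions = new ConcurrentHashMap<>();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        activeSessions.put(se.getSession().getId(), se.getSession());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        // Release the static reference when the container invalidates the session
        activeSessions.remove(se.getSession().getId());
    }
}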
Instead of restarting, use these tools to identify leaks:
- JDK Flight Recorder: jcmd <PID> JFR.start duration=60s filename=leak.jfr
- Eclipse MAT (Memory Analyzer Tool): analyze heap dumps for retained objects (see the heap-dump sketch right after this list)
- Tomcat's own leak prevention: enable the anti-resource-leak settings in context.xml (the clearReferences* attributes on the Context element) and keep the default JreMemoryLeakPreventionListener registered in server.xml
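For the MAT step, a heap dump has to come from somewhere. A minimal sketch, assuming a HotSpot JVM (the output path is illustrative):

// Minimal sketch: trigger a heap dump for MAT from inside the JVM.
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // true = only live objects (forces a GC first), which is what MAT usually wants
        diag.dumpHeap("/tmp/tomcat-leak.hprof", true);
    }
}

The same dump can be produced from outside the JVM with jcmd's GC.heap_dump command if you would rather not add code.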
And have the applications themselves clean up what they start:
// Proper resource cleanup: executors created by the webapp die with the webapp
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class AppContextListener implements ServletContextListener {

    private ExecutorService threadPool;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        threadPool = Executors.newFixedThreadPool(10);
        sce.getServletContext().setAttribute("threadPool", threadPool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Critical cleanup: without this, the pool's threads keep the webapp's
        // classloader alive across redeploys and Tomcat logs a leak warning
        threadPool.shutdownNow();
    }
}
Instead of scheduled restarts, implement proactive monitoring:
#!/bin/bash
# Monitor Tomcat's resident memory (as % of physical RAM) and restart only when necessary
THRESHOLD=90
TOMCAT_PID=$(pgrep -f tomcat | head -1)
[ -z "$TOMCAT_PID" ] && exit 0   # Tomcat isn't running; nothing to check

CURRENT_USAGE=$(ps -p "$TOMCAT_PID" -o %mem= | cut -d'.' -f1 | tr -d ' ')
if [ "$CURRENT_USAGE" -ge "$THRESHOLD" ]; then
    echo "$(date) - High memory usage detected: ${CURRENT_USAGE}%" >> /var/log/tomcat_monitor.log
    systemctl restart tomcat
fi
The "restart solution" creates hidden costs:
- Session data loss during business hours
- Increased MTTR during actual failures
- Masking of underlying architecture flaws
- Complicated CI/CD pipeline coordination
The claim that nightly Tomcat restarts constitute a "best practice" often stems from fundamental misunderstandings about Java memory management: a correctly sized and tuned heap does not need a daily reset. Let's start with how the JVM is actually configured:
# Typical JVM memory arguments for Tomcat
export CATALINA_OPTS="-Xms1024m -Xmx2048m \
-XX:+UseG1GC \
-XX:MaxGCPauseMillis=200 \
-XX:InitiatingHeapOccupancyPercent=35"
Restarting Tomcat servers nightly introduces several operational challenges:
- Cold start performance degradation (JIT optimizations lost)
- Session data destruction requiring client re-authentication
- Connection pool reinitialization overhead
Instead of restarts, implement these monitoring and tuning solutions:
<!-- Sample JMX monitoring configuration for Tomcat (server.xml).
     Note: recent Tomcat releases dropped this listener in favour of the standard
     com.sun.management.jmxremote.* system properties, so check your version first. -->
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener"
          rmiRegistryPortPlatform="10001"
          rmiServerPortPlatform="10002"
          useLocalPorts="true"/>
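With JMX exposed, heap pressure can be measured rather than guessed. A minimal sketch of a remote heap check, assuming the registry port configured above and JMX authentication disabled (host, port, and the class name are illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapWatcher {
    public static void main(String[] args) throws Exception {
        // Matches the rmiRegistryPortPlatform configured above; adjust host/port as needed
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:10001/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            MemoryUsage heap = memory.getHeapMemoryUsage();
            long usedPct = heap.getUsed() * 100 / heap.getMax();
            System.out.printf("Heap: %d%% of %d MB used%n", usedPct, heap.getMax() / (1024 * 1024));
        }
    }
}

Polling this on a schedule gives you the trend data that tells you whether there is a leak at all, which a nightly restart would have erased.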
For a production Tomcat instance handling 20 applications:
# G1GC tuning for multi-tenant Tomcat
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC \
-XX:MaxGCPauseMillis=250 \
-XX:ParallelGCThreads=4 \
-XX:ConcGCThreads=2 \
-XX:InitiatingHeapOccupancyPercent=45 \
-XX:G1ReservePercent=15"
Implement these tools for proactive management:
- Prometheus + Grafana for JVM metrics visualization (a minimal exporter sketch follows this list)
- JDK Flight Recorder for low-overhead profiling
- Apache JMeter for sustained load testing
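For the Prometheus route, a hedged sketch of the smallest possible JVM exporter, assuming the io.prometheus simpleclient_hotspot and simpleclient_httpserver dependencies are on the classpath (port 9404 is an arbitrary choice):

import io.prometheus.client.exporter.HTTPServer;
import io.prometheus.client.hotspot.DefaultExports;

public class MetricsBootstrap {
    public static void main(String[] args) throws Exception {
        // Registers heap, GC, thread and classloading collectors with the default registry
        DefaultExports.initialize();
        // Exposes /metrics for Prometheus to scrape; Grafana dashboards sit on top of that
        HTTPServer server = new HTTPServer(9404);
        Thread.currentThread().join();  // standalone demo only; in Tomcat, tie this to a context listener instead
    }
}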
Common memory leak patterns to audit in third-party apps: ever-growing static caches, ThreadLocals that are never removed, timers and worker threads that are started but never stopped, and JDBC drivers that are registered but never deregistered (a cleanup sketch for that last one follows the snippet below). Some vendors also expose their own resource-tracking switches via context parameters, for example:
<!-- Hypothetical vendor-specific resource-tracking flag in web.xml -->
<context-param>
    <param-name>trackResources</param-name>
    <param-value>true</param-value>
</context-param>
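The JDBC driver case deserves a concrete example, since it is behind Tomcat's familiar "registered the JDBC driver ... but failed to unregister it" warning on undeploy. A minimal sketch of the cleanup as a ServletContextListener (the class name is illustrative):

import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class JdbcCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Nothing to do on startup; drivers register themselves when first loaded
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Deregister drivers loaded by this webapp's classloader so Tomcat
        // can fully unload the application without leaking the classloader
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == getClass().getClassLoader()) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (Exception e) {
                    sce.getServletContext().log("Failed to deregister driver " + driver, e);
                }
            }
        }
    }
}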
For modern container environments:
# Docker memory limits for Java apps; -XX:+UseContainerSupport (the default since
# JDK 10 and 8u191) makes the JVM size itself from the container limit, not host RAM
docker run -d \
  -m 2g \
  --memory-reservation="1.5g" \
  -e JAVA_OPTS="-XX:+UseContainerSupport" \
  tomcat:9.0
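A quick sanity check that the limit is actually honoured: inside the container, ask the JVM what it thinks its maximum heap is (by default it takes roughly a quarter of the container's memory unless -Xmx or -XX:MaxRAMPercentage says otherwise). A minimal sketch:

public class ContainerHeapCheck {
    public static void main(String[] args) {
        // With -m 2g and container support active, this should report a value
        // derived from the 2 GB container limit rather than the host's physical RAM
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("JVM max heap: " + maxHeapMb + " MB");
    }
}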