When scripting OpenSSL's `s_client` command to retrieve SSL certificates across multiple hosts, network timeouts can significantly slow down your automation. The default TCP connection timeout (typically 60+ seconds) becomes problematic when scanning hosts behind firewalls or with other network issues.
While `s_client` doesn't expose direct timeout parameters, we can leverage these OpenSSL options:
```shell
openssl s_client -connect host:port -brief -no_ssl3 -no_tls1 -no_tls1_1 -tls1_2
```
The `-brief` flag reduces output, while the protocol restrictions help fail faster on incompatible hosts.
Here are three practical approaches to implement timeouts:
1. Using timeout Command (Linux/Mac)
```shell
timeout 5 openssl s_client -connect example.com:443 2>/dev/null <<< "Q"
```
This kills the process after 5 seconds (`timeout` exits with status 124 when it has to kill the command). The `<<< "Q"` feeds `Q` to `s_client`'s stdin so the session closes right after the handshake instead of sitting and waiting for input. Note that `timeout` comes from GNU coreutils; on macOS, install coreutils to get it.
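The exit-code contract can be sanity-checked without touching the network; a minimal sketch using `sleep` and `true` as stand-in commands:

```shell
#!/bin/bash
# timeout returns 124 when it has to kill the command...
timeout 1 sleep 5
echo "slow command exit: $?"   # prints 124

# ...and passes the command's own exit status through otherwise
timeout 5 true
echo "fast command exit: $?"   # prints 0
```

Scripts that loop over many hosts can branch on status 124 to distinguish timeouts from handshake failures.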
2. Script Wrapper with SIGALRM
```shell
#!/bin/bash
host=$1
port=${2:-443}
timeout=${3:-5}

handler() { kill "$ssl_pid" 2>/dev/null; echo "Timeout!"; exit 124; }
trap handler ALRM

# Run openssl in the background: bash defers traps while a foreground
# command is running, but `wait` is interruptible by trapped signals
openssl s_client -connect "$host:$port" -brief </dev/null 2>&1 &
ssl_pid=$!

# Watchdog: send SIGALRM to this shell after $timeout seconds
( sleep "$timeout" && kill -ALRM $$ ) >/dev/null 2>&1 &
watchdog=$!

wait "$ssl_pid"
kill "$watchdog" 2>/dev/null   # disarm the watchdog on success
```
3. Perl/Python Implementations
For more control, consider using language-specific SSL libraries with built-in timeout support:
```python
# Python example (Unix-only: signal.alarm works in the main thread)
import signal
import ssl
from contextlib import contextmanager

@contextmanager
def ssl_timeout(seconds):
    def handler(signum, frame):
        raise TimeoutError("SSL handshake timeout")
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)

try:
    with ssl_timeout(3):
        cert = ssl.get_server_certificate(('example.com', 443))
except Exception as e:
    print(f"Failed: {e}")
```
When processing hundreds of hosts:
- Parallelize connections (consider GNU parallel or xargs -P)
- Cache successful results to avoid re-checking
- Implement exponential backoff for retries
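The parallelization point can be sketched with `xargs -P`; in this offline sketch `echo` stands in for the real per-host `timeout 5 openssl s_client ...` call, and the host names are placeholders:

```shell
#!/bin/bash
# Fan out up to 4 concurrent "checks"; echo stands in for the openssl call.
# Output order is nondeterministic under -P, so sort for stable reading.
printf '%s\n' example.com:443 example.org:443 example.net:443 \
  | xargs -P 4 -I{} sh -c 'echo "checking {}"' \
  | sort
```

With a per-host timeout of 5 seconds and 4 workers, a batch of unreachable hosts costs roughly `ceil(n/4) * 5` seconds instead of `n * 5`.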
For troubleshooting:
```shell
openssl s_client -connect example.com:443 -debug -msg -state -tlsextdebug
```
Add `-prexit` to dump session and certificate information on exit, even when the connection fails.
When automating certificate checks across multiple hosts using OpenSSL's `s_client`, you'll inevitably encounter unreachable hosts due to firewalls or network issues. The default TCP connection timeout (often 2+ minutes) can significantly slow down your batch operations.
While `s_client` doesn't have a direct connect-timeout parameter, two flags are sometimes suggested, with caveats:

```shell
# -timeout enables send/receive timeouts on DTLS connections only;
# it takes no value and does not bound the TCP connect itself
openssl s_client -connect example.com:443 -timeout

# -verify_return_error fails fast on certificate verification errors
# (it does not help with unreachable hosts)
openssl s_client -connect example.com:443 -verify_return_error -verify 1
```
For reliable timeout control, these wrapper methods work best:
```shell
# Bash timeout wrapper; </dev/null closes stdin so a successful
# session ends right after the handshake instead of idling
timeout 5 openssl s_client -connect example.com:443 </dev/null 2>/dev/null
```

Or an Expect script wrapper:

```tcl
#!/usr/bin/expect -f
set timeout 5
spawn openssl s_client -connect example.com:443
expect "CONNECTED"
```
Here's a robust bash implementation that handles multiple hosts:
```shell
#!/bin/bash
hosts=("example.com:443" "unreachable-host.com:443" "another.example:993")
timeout_seconds=5

for host in "${hosts[@]}"; do
    echo "Checking $host..."
    # </dev/null closes stdin so s_client exits right after the handshake
    output=$(timeout "$timeout_seconds" openssl s_client -connect "$host" </dev/null 2>&1)
    status=$?
    if [[ $status -eq 124 ]]; then
        echo "TIMEOUT: $host after $timeout_seconds seconds"
    elif [[ "$output" == *"CONNECTED"* ]]; then
        echo "SUCCESS: $host"
        # Extract the certificate subject from the PEM block in the output
        echo "$output" | openssl x509 -noout -subject
    else
        echo "ERROR: $host - connection failed"
    fi
done
```
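The `openssl x509` extraction step at the end of the loop can be exercised offline by generating a throwaway self-signed certificate; the CN and file paths below are arbitrary placeholders:

```shell
#!/bin/bash
# Throwaway key + self-signed cert; CN and paths are placeholders
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo.example" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# The same parsing step applied to s_client output in the loop above
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
```

Adding `-enddate` as shown is a cheap way to turn the batch check into an expiry monitor.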
For certificate checking at scale, consider these alternatives:
- gnutls-cli with --timeout parameter
- curl with --connect-timeout (good for HTTPS checks)
- nmap with ssl-cert script
When scanning hundreds of hosts:
- Parallelize connections with xargs -P or GNU parallel
- Cache DNS lookups separately
- Consider asynchronous I/O implementations