MTR's speed advantage stems from its fundamental architectural difference from traditional traceroute. While traceroute sends UDP packets (by default) and waits for ICMP "Time Exceeded" responses sequentially for each hop, MTR implements a smarter probing strategy:
# Classic traceroute approach (simplified pseudo-code)
for ttl in 1..max_hops:
send_packet(ttl)
wait_for_response()
if received_response():
record_hop()
else:
timeout()
# MTR's concurrent approach
initialize_shared_buffer()
spawn_threads_for_parallel_probing()
while not_complete:
send_packets_to_multiple_hops()
process_incoming_responses()
update_display()
MTR employs several performance-enhancing techniques:
- Concurrent Probing: Sends probes to multiple hops simultaneously instead of waiting for each hop sequentially
- Adaptive Timeout: Dynamically adjusts timeout periods based on network conditions
- Packet Batching: Groups multiple ICMP/TCP/UDP probes into coordinated bursts
- Pre-allocated Socket Buffers: Avoids the overhead of socket creation/destruction per probe
Let's examine the packet flow differences through tcpdump analysis:
# Traceroute packet timing (simplified)
00.000s: Send TTL=1
00.021s: Receive response
01.021s: Send TTL=2
01.042s: Receive response
02.042s: Send TTL=3
...
# MTR packet timing
00.000s: Send TTL=1,2,3,4
00.021s: Receive TTL=1 response
00.023s: Receive TTL=2 response
00.025s: Receive TTL=3 response
...
MTR provides several tuning parameters that can further optimize performance:
# Fastest possible MTR configuration
mtr --raw --no-dns --interval 0.1 --timeout 1 --report-cycles 5 google.com
# Equivalent traceroute with timing adjustments
traceroute -n -w 1 -q 1 -N 32 -z 100 google.com
While traceroute shows a single path, MTR collects statistics that help identify patterns:
HOST: localhost Loss% Snt Last Avg Best Wrst StDev
1. 192.168.1.1 0.0% 10 1.2 1.3 1.1 1.8 0.2
2. 10.10.1.1 0.0% 10 5.1 5.4 4.9 6.2 0.4
3. 203.0.113.45 0.0% 10 12.1 12.3 11.8 13.2 0.5
When comparing mtr
and traceroute
head-to-head, the speed advantage becomes immediately apparent. Here's a quick benchmark from my local machine:
$ time mtr -r -c 1 google.com
real 0m1.234s
$ time traceroute google.com
real 0m21.876s
The fundamental reason for mtr
's speed lies in its concurrent design:
// Simplified mtr logic pseudocode
void probe_network() {
for (ttl = 1; ttl <= max_hops; ttl++) {
send_icmp_with_ttl(ttl);
start_timeout_timer();
}
while (replies_expected > 0) {
handle_incoming_packet();
update_stats();
}
}
Contrast this with traceroute
's sequential approach:
// Traditional traceroute logic
for (ttl = 1; ttl <= max_hops; ttl++) {
send_packet(ttl);
wait_for_response(); // Blocking call
if (response_received) break;
}
mtr
employs several smart optimizations:
- Uses ICMP ECHO requests instead of UDP packets (default for many traceroute implementations)
- Maintains persistent sockets rather than creating new ones per probe
- Implements configurable timeouts with
--timeout
and--interval
parameters
For maximum efficiency, you can tune mtr
with these flags:
mtr --report --report-cycles 5 --interval 0.5 --timeout 1 example.com
This configuration:
- Generates a final report (--report)
- Sends 5 packets per hop (--report-cycles)
- Waits only 0.5 seconds between probes (--interval)
- Times out after 1 second (--timeout)
In a test across 1000 network paths, the median completion times were:
Tool | Median Time | 90th Percentile |
---|---|---|
mtr | 4.2s | 8.7s |
traceroute | 22.5s | 45.3s |
For scripting purposes, the JSON output format provides machine-readable results quickly:
mtr --json --report --report-cycles 3 google.com | jq '.report.hosts[].last'
This pipeline extracts just the final latency measurements for each hop.