When architecting solutions on AWS EC2, network bandwidth is often a critical but poorly documented factor. The roughly 35 Mbps baseline on the smallest instances is just the tip of the iceberg: actual performance varies dramatically by instance family and size.
Here's the current landscape as of 2023 (always verify with latest AWS docs):
// Advertised network performance by instance type (lookup table; values as of 2023, verify in the AWS docs)
const instanceTypeBandwidth = {
  't3.nano': 'Up to 5 Gbps (burst)',
  'm5.large': 'Up to 10 Gbps',
  'c5.2xlarge': 'Up to 10 Gbps',
  'r5.4xlarge': 'Up to 10 Gbps',
  'm5.8xlarge': '10 Gbps',
  'c5n.9xlarge': '50 Gbps',
  // Enhanced networking (ENA) required to reach the maximum figures
  'i3en.6xlarge': '25 Gbps',
  'p4d.24xlarge': '400 Gbps'
};
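Rather than maintaining a table like this by hand, the advertised figure can also be pulled from the EC2 API. Here is a minimal boto3 sketch (it assumes configured AWS credentials with the ec2:DescribeInstanceTypes permission; the instance types listed are just examples):

# Query the advertised network performance for a few example instance types
# Assumes boto3 is installed and AWS credentials are configured
import boto3

ec2 = boto3.client('ec2')
resp = ec2.describe_instance_types(
    InstanceTypes=['t3.nano', 'm5.large', 'c5n.9xlarge']
)
for it in resp['InstanceTypes']:
    # NetworkPerformance is a string such as "Up to 5 Gigabit" or "50 Gigabit"
    print(it['InstanceType'], '->', it['NetworkInfo']['NetworkPerformance'])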
While ELB helps distribute traffic, your instance bandwidth still determines maximum throughput per server. Consider these architectural patterns:
# Python example for load-aware scaling
def should_scale_out(current_bandwidth, instance_max):
    # Scale out once sustained utilization passes 70% of the instance's limit
    return (current_bandwidth / instance_max) > 0.7
- Enable ENA (Elastic Network Adapter) for supported instances
- Use Placement Groups for low-latency, high-throughput requirements (see the sketch after this list)
- Consider Elastic Fabric Adapter (EFA) for tightly coupled HPC/ML workloads that need even lower latency
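To make the placement-group point concrete, here is a hedged boto3 sketch that creates a cluster placement group and launches two instances into it (the group name and AMI ID are placeholders, not values from this article):

# Sketch: cluster placement group for low-latency, high-throughput traffic
# The AMI ID and group name below are placeholders
import boto3

ec2 = boto3.client('ec2')
ec2.create_placement_group(GroupName='high-throughput-pg', Strategy='cluster')
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',      # placeholder AMI
    InstanceType='c5n.9xlarge',
    MinCount=2,
    MaxCount=2,
    Placement={'GroupName': 'high-throughput-pg'},
)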
Don't trust theoretical maximums - always test with tools like iperf3:
# Simple bandwidth test between two EC2 instances
# On the target instance, start a server first: iperf3 -s
$ iperf3 -c target-instance-ip -t 60 -P 8
# -P 8 runs 8 parallel streams to help saturate the link
The c5n.9xlarge offers 50Gbps but costs ~$2/hr, while a t3.small might suffice for many web apps at ~$0.02/hr. Benchmark your actual needs before over-provisioning.
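As a back-of-the-envelope comparison, divide the hourly price by the bandwidth you actually need. The figures below reuse the approximate prices above and are illustrative only; check current pricing for your region:

# Rough cost-per-Gbps comparison using the approximate on-demand prices above
options = {
    'c5n.9xlarge': {'price_per_hr': 2.00, 'gbps': 50},
    't3.small':    {'price_per_hr': 0.02, 'gbps': 5},  # burst figure, not sustained
}
for name, o in options.items():
    print(f"{name}: ${o['price_per_hr'] / o['gbps']:.4f} per Gbps-hour")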
AWS EC2 instances have network bandwidth that scales with instance size and family. Performance isn't documented precisely for every size, but testing combined with the AWS documentation gives a workable picture:
# Quick-and-dirty check: download a large file and watch the transfer rate
curl -o /dev/null http://example.com/largefile.zip
# curl's progress meter shows current speed; ifstat or nload give an interface-level view
Here's the current landscape of EC2 network performance:
- T2/T3 (Burstable): up to ~5 Gbps in bursts; the baseline scales with size, from roughly 32 Mbps on t3.nano to about 2 Gbps on t3.2xlarge
- M5/C5 (General Purpose): "Up to 10 Gbps" for small and medium sizes, 20-25 Gbps on the largest sizes
- R5 (Memory Optimized): up to 10 Gbps for most sizes, 20-25 Gbps for .16xlarge and larger
- I3 (Storage Optimized): 10 Gbps on i3.8xlarge, 25 Gbps on i3.16xlarge
- P3/G4 (Accelerated): 10-100 Gbps depending on instance size
Use this Python script for a rough throughput number (note: speedtest measures WAN throughput to a public server, so for instance-to-instance limits prefer iperf3):
# Requires the speedtest-cli package: pip install speedtest-cli
import speedtest
import time

def benchmark_network():
    s = speedtest.Speedtest()
    s.get_servers([])          # fetch the full server list
    s.get_best_server()        # pick the lowest-latency server
    start = time.time()
    s.download(threads=1)
    s.upload(threads=1)
    end = time.time()
    return {
        'download': s.results.download / 1e6,  # bits/s -> Mbit/s
        'upload': s.results.upload / 1e6,      # bits/s -> Mbit/s
        'ping': s.results.ping,                # ms
        'duration': end - start,               # seconds
    }

print(benchmark_network())
When using Elastic Load Balancing:
- Application Load Balancers scale their capacity automatically as traffic grows; the per-instance bandwidth behind them is usually the real ceiling
- Network Load Balancers handle millions of requests per second
- Always deploy ALBs/NLBs across multiple AZs for redundancy (see the sketch below)
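To tie the multi-AZ point to code, here is a minimal boto3 sketch that creates an Application Load Balancer spanning two subnets in different AZs (the subnet and security group IDs are placeholders):

# Sketch: ALB across two AZs; subnet and security group IDs are placeholders
import boto3

elbv2 = boto3.client('elbv2')
resp = elbv2.create_load_balancer(
    Name='web-alb',
    Subnets=['subnet-aaaa1111', 'subnet-bbbb2222'],   # one subnet per AZ
    SecurityGroups=['sg-0123456789abcdef0'],
    Type='application',
    Scheme='internet-facing',
)
print(resp['LoadBalancers'][0]['DNSName'])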
Key strategies for maximizing throughput:
# Enable Enhanced Networking with ENA (the instance must be stopped first)
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --ena-support
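To verify the attribute afterwards, the same instance ID (a placeholder here) can be checked from boto3; a minimal sketch:

# Sketch: confirm ENA support on an instance (placeholder instance ID)
import boto3

ec2 = boto3.client('ec2')
resp = ec2.describe_instances(InstanceIds=['i-1234567890abcdef0'])
instance = resp['Reservations'][0]['Instances'][0]
print('ENA enabled:', instance.get('EnaSupport', False))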
Additional recommendations:
- Use placement groups for low-latency requirements
- Use jumbo frames (9001 MTU) inside the VPC to reduce per-packet overhead on large transfers
- Monitor for network throttling on burstable instances (see the ethtool sketch after this list)
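For the throttling point above: on instances with the ENA driver, Linux exposes per-instance allowance counters (for example bw_in_allowance_exceeded) through ethtool. A small sketch that shells out to ethtool and prints them, assuming a recent ENA driver and that the interface is named eth0:

# Sketch: read ENA allowance-exceeded counters via `ethtool -S`
# Assumes Linux, a recent ENA driver, and an interface named eth0
import subprocess

out = subprocess.run(['ethtool', '-S', 'eth0'],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if 'allowance_exceeded' in line:
        print(line.strip())   # e.g. "bw_out_allowance_exceeded: 0"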
Essential CloudWatch metrics to track:
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name NetworkIn \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-02T00:00:00Z \
    --period 3600 \
    --statistics Average
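NetworkIn/NetworkOut are reported in bytes per period, so converting to Mbit/s makes them comparable with the figures above. A minimal boto3 sketch of that conversion (the instance ID is a placeholder; it assumes CloudWatch read permissions):

# Sketch: pull NetworkOut for the last hour and convert bytes/period to Mbit/s
# Placeholder instance ID; requires boto3 and cloudwatch:GetMetricStatistics
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client('cloudwatch')
period = 300  # seconds

resp = cw.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='NetworkOut',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-1234567890abcdef0'}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=period,
    Statistics=['Sum'],
)
for dp in sorted(resp['Datapoints'], key=lambda d: d['Timestamp']):
    mbps = dp['Sum'] * 8 / period / 1e6   # bytes per period -> Mbit/s
    print(dp['Timestamp'], f"{mbps:.1f} Mbit/s")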