When delivering content directly from an EC2 Windows instance, the network interface can indeed become your primary bottleneck. Standard EC2 instances typically have network performance that scales with instance size:
# Example: Checking the reported network link speed on a Windows EC2 instance
Get-NetAdapter | Select-Object Name, LinkSpeed
# Example output (here, a t3.medium whose adapter reports 100 Mbps):
Name     LinkSpeed
----     ---------
Ethernet 100 Mbps
Amazon S3 operates fundamentally differently from EC2 in terms of bandwidth:
- Automatic scaling to handle virtually unlimited requests
- Distributed architecture across Availability Zones
- No single-point bandwidth limitation
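When it works for your setup, the simplest way to exploit that is to point the player straight at S3 so the video bytes never traverse the instance's NIC. A minimal sketch (bucket name and object key are placeholders):
// The EC2 instance serves only the page; the browser streams the video from S3
const videoUrl = 'https://your-bucket.s3.amazonaws.com/videos/intro.mp4'; // placeholder URL
document.querySelector('video').src = videoUrl;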
For scenarios where the browser's same-origin policy gets in the way of that direct approach, you have two main options:
# Option 1: Reverse proxy configuration (Nginx example)
server {
    listen 80;
    server_name yourdomain.com;

    location /videos/ {
        proxy_pass https://your-bucket.s3.amazonaws.com/;
        proxy_set_header Host your-bucket.s3.amazonaws.com;
    }
}
// Option 2: Signed URLs with temporary access
// (AWS SDK for JavaScript example)
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const params = {
  Bucket: 'your-bucket',
  Key: 'video.mp4',
  Expires: 3600 // URL expires in 1 hour
};

s3.getSignedUrl('getObject', params, (err, url) => {
  if (err) return console.error('Failed to sign URL:', err);
  console.log('Temporary URL:', url);
});
For 100 concurrent users streaming 5MB videos:
| Instance Type | Network Performance | Max Theoretical Throughput |
|---|---|---|
| t3.medium | Up to 5 Gbps | ~625 MB/s (burst) |
| m5.large | Up to 10 Gbps | ~1.25 GB/s |
| c5.2xlarge | Up to 10 Gbps | ~1.25 GB/s |
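To put those ceilings in context, here is a rough sketch of the aggregate throughput your scenario would demand; the 10-second download window and full concurrency are assumptions:
// Rough aggregate-throughput estimate (all inputs are assumptions)
const users = 100;            // concurrent viewers
const videoSizeMB = 5;        // per video
const downloadWindowSec = 10; // assume each video should finish loading within ~10 s
const aggregateMBps = (users * videoSizeMB) / downloadWindowSec; // 50 MB/s
const aggregateMbps = aggregateMBps * 8;                         // 400 Mbps
console.log(`~${aggregateMBps} MB/s (~${aggregateMbps} Mbps) at peak`);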
Use this PowerShell script to measure actual throughput:
# Windows EC2 Network Test Script
$testFile = "C:\test\100MB.bin"
$webServer = "http://your-server.com/test"
$elapsed = Measure-Command {
    Invoke-WebRequest -Uri $webServer -OutFile $testFile
}
# Throughput in MB/s, assuming a 100 MB test file
"{0:N1} MB/s" -f (100 / $elapsed.TotalSeconds)
For dynamic content that requires same-origin access:
// PHP example for hybrid delivery
function getVideoStream($videoId) {
    if (shouldUseS3($videoId)) {
        // Redirect popular content to a signed S3 URL
        header("Location: " . generateS3SignedUrl($videoId));
    } else {
        // Serve long-tail content straight off the instance's disk
        header("Content-Type: video/mp4");
        readfile("/local/videos/" . $videoId);
    }
}

// Helper function to determine delivery method
function shouldUseS3($videoId) {
    $popularVideos = ['intro.mp4', 'demo.mp4'];
    return in_array($videoId, $popularVideos);
}
Your observation of a roughly 100 Mbps ceiling on your Windows EC2 instance is plausible: EC2 network performance is capped per instance type, and burstable instances can sustain far less than their headline "Up to" figure. For example:
# AWS CLI command to check instance network specs
aws ec2 describe-instance-types \
--instance-types t3.large \
--query "InstanceTypes[].NetworkInfo" \
--output table
Typical bandwidth allocations:
- t3.nano: Up to 5Gbps (burst)
- t3.large: Up to 5Gbps (burst)
- m5.large: Up to 10Gbps
- c5n.4xlarge: Up to 25Gbps
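If you prefer the SDK to the CLI, a quick sketch with the AWS SDK for JavaScript v2 that prints the published network performance of a few candidate types (region and credentials are assumptions):
// Compare published network performance across instance types
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' }); // region is an assumption

ec2.describeInstanceTypes(
  { InstanceTypes: ['t3.large', 'm5.large', 'c5n.4xlarge'] },
  (err, data) => {
    if (err) return console.error(err);
    data.InstanceTypes.forEach(t =>
      console.log(`${t.InstanceType}: ${t.NetworkInfo.NetworkPerformance}`));
  }
);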
With your scenario (100 users × 5 MB × 20 videos):
# Data volume calculation
total_data = users × (video_size × videos_per_user) × 8 bits/byte
           = 100 × (5 MB × 20) × 8
           = 80,000 Mb (80 Gb) of data in total
Note that this is a data volume, not a rate: the bandwidth you actually need depends on how quickly that data must be delivered. In reality you'd also need to consider (a rough sketch follows this list):
- Concurrent users (not all 100 may be active simultaneously)
- Video compression (H.264/HEVC can reduce sizes)
- Progressive loading (users rarely download all 20 at once)
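Folding those factors into the raw 80 Gb figure, a rough sketch in which every ratio is an assumption you should replace with your own measurements:
// "Realistic peak" estimate; every factor below is an assumption
const totalGb = 80;        // raw data volume from the calculation above
const concurrency = 0.3;   // assume ~30% of users are active at the same moment
const compression = 0.6;   // assume re-encoding shrinks files to ~60% of their size
const progressive = 0.25;  // assume users fetch ~5 of the 20 videos per session
const peakWindowSec = 60;  // assume the active traffic spreads over ~1 minute

const realisticGb = totalGb * concurrency * compression * progressive; // 3.6 Gb
console.log(`~${realisticGb.toFixed(1)} Gb per peak window, ` +
            `~${(realisticGb * 1000 / peakWindowSec).toFixed(0)} Mbps sustained`);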
S3 provides:
- No per-instance bandwidth limits
- Multi-AZ redundancy
- Straightforward CDN integration via CloudFront
For your AJAX CORS issue, consider:
// Example S3 CORS configuration
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["https://yourdomain.com"],
    "ExposeHeaders": []
  }
]
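The same rules can also be applied programmatically; a sketch with the AWS SDK for JavaScript v2 (bucket name and origin are placeholders):
// Apply the CORS rules above to the bucket
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketCors({
  Bucket: 'your-bucket', // placeholder
  CORSConfiguration: {
    CORSRules: [{
      AllowedHeaders: ['*'],
      AllowedMethods: ['GET'],
      AllowedOrigins: ['https://yourdomain.com'], // placeholder
      ExposeHeaders: []
    }]
  }
}, err => {
  if (err) console.error('Failed to set CORS:', err);
});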
Workaround for the same-domain requirement: the Nginx reverse proxy configuration shown earlier (proxying /videos/ through your own domain to the bucket) covers this case as well.
To monitor EC2 network usage:
# CloudWatch metrics to watch
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name NetworkOut \
--dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
--statistics Average \
--period 60 \
--start-time 2023-01-01T00:00:00Z \
--end-time 2023-01-01T23:59:59Z
For immediate relief:
- Enable EC2 Enhanced Networking (requires an HVM AMI; a sketch for checking and enabling ENA follows this list)
- Use the Elastic Network Adapter (ENA) driver in the Windows guest
- Consider placement groups for low-latency traffic between instances
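A sketch of the ENA check with the AWS SDK for JavaScript v2 (the instance ID is the placeholder from the CloudWatch example; the instance must be stopped and have the ENA driver installed before the attribute is enabled):
// Check whether ENA support is enabled, and enable it if not
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();
const instanceId = 'i-1234567890abcdef0'; // placeholder

ec2.describeInstanceAttribute(
  { InstanceId: instanceId, Attribute: 'enaSupport' },
  (err, data) => {
    if (err) return console.error(err);
    if (data.EnaSupport && data.EnaSupport.Value) {
      return console.log('ENA already enabled');
    }
    ec2.modifyInstanceAttribute(
      { InstanceId: instanceId, EnaSupport: { Value: true } },
      e => e ? console.error(e) : console.log('ENA enabled'));
  }
);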
When ready to migrate from EC2:
# CloudFront + S3 setup via Terraform
resource "aws_cloudfront_distribution" "video_dist" {
  origin {
    domain_name = aws_s3_bucket.videos.bucket_regional_domain_name
    origin_id   = "S3-${aws_s3_bucket.videos.id}"
  }

  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "S3-${aws_s3_bucket.videos.id}"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  # Required blocks for a valid distribution
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
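After terraform apply, point your players at the distribution's domain_name attribute instead of the EC2 instance; the video bytes then flow from S3 through CloudFront's edge locations, and the instance NIC drops out of the data path entirely.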