Network switches operate on a fundamental principle: each port gets dedicated bandwidth while the switch fabric handles cross-port communication. When manufacturers specify 100Mbps or 1Gbps speeds, they're referring to per-port capacity, not the device's total throughput.
Consider this Python simulation of switch traffic (using scapy for demonstration):
```python
from scapy.all import *  # requires scapy; sendp() needs root privileges
import threading

def simulate_transfer(src_port, dst_port):
    # dst_port is resolved by the switch's MAC table; kept here for clarity
    pkt = Ether()/IP()/TCP()/("X" * 1024)           # 1KB test packet
    sendp(pkt, iface=f"eth{src_port}", count=1000)  # ~1MB total transfer

# Two parallel transfers across independent port pairs
t1 = threading.Thread(target=simulate_transfer, args=(1, 2))
t2 = threading.Thread(target=simulate_transfer, args=(3, 4))
t1.start(); t2.start()
t1.join(); t2.join()
```
In this case, both transfers would achieve full port speed (assuming proper switch architecture) because:
- The 1→2 and 3→4 flows take independent paths through the switch fabric (each switched port is its own collision domain, so the flows never interfere)
- Modern switches typically have full-duplex, non-blocking backplanes
The critical components determining actual performance:
```c
// Simplified switch architecture representation
#include <stdint.h>
#define MAX_PORTS 24

typedef struct { uint16_t speed_mbps; } Port;            // per-port link state (stub)
typedef struct { uint8_t port_for_mac[8192]; } MACTable; // MAC -> port map (stub)

struct Switch {
    uint32_t backplane_bandwidth; // in Gbps; e.g., 48 for a 24-port gigabit switch
    Port     ports[MAX_PORTS];
    MACTable forwarding_table;
};
```
Key metrics to verify:
```console
# Verify the negotiated speed/duplex on a Linux host attached to the switch
$ ethtool eth0 | grep -E "Speed|Duplex"
	Speed: 1000Mb/s
	Duplex: Full
```
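The same values can also be read programmatically from sysfs on a Linux host (a minimal sketch; `eth0` and the `link_info` helper are placeholders, not part of any standard API):

```python
from pathlib import Path

def link_info(iface="eth0"):
    # sysfs exposes the negotiated link parameters per interface
    base = Path("/sys/class/net") / iface
    speed_mbps = int(base.joinpath("speed").read_text())  # -1 if link is down
    duplex = base.joinpath("duplex").read_text().strip()  # "full" or "half"
    return speed_mbps, duplex

print(link_info())  # e.g., (1000, 'full')
```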
Bandwidth sharing occurs in these scenarios:
- Multiple senders converging on a single receiver (incast, often seen as microbursts; see the sketch after this list)
- Using port channels without proper LACP configuration
- Oversubscribed uplink ports
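For the many-to-one case in particular, no switch architecture helps: the receiver's single link caps aggregate throughput. A back-of-the-envelope sketch (assumes perfectly fair sharing, which real TCP flows only approximate):

```python
# Many-to-one (incast): the receiver's single 1 Gb/s link is the
# bottleneck, so each sender's fair share shrinks proportionally.
link_gbps = 1.0
for senders in (1, 2, 4, 8):
    print(f"{senders} senders -> {link_gbps / senders:.3f} Gb/s each")
```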
Example of monitoring contention:
```console
# Poll a port's 64-bit inbound byte counter (ifIndex 101 here)
$ snmpwalk -v2c -c public switch_ip IF-MIB::ifHCInOctets.101
IF-MIB::ifHCInOctets.101 = Counter64: 1528734902145
```
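ifHCInOctets is a cumulative byte counter, so a single reading says little; a throughput figure comes from sampling it twice and dividing the delta by the interval. A minimal sketch (assumes net-snmp's snmpget is installed; `port_mbps` and `read_octets` are hypothetical helpers):

```python
import re, subprocess, time

OID = "IF-MIB::ifHCInOctets.101"  # same port counter as above

def read_octets(host):
    # Shell out to net-snmp's snmpget and parse the Counter64 value
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", "public", host, OID], text=True)
    return int(re.search(r"Counter64:\s*(\d+)", out).group(1))

def port_mbps(host, interval=10):
    first = read_octets(host)
    time.sleep(interval)
    second = read_octets(host)
    return (second - first) * 8 / interval / 1e6  # bytes -> Mb/s

print(f"Inbound rate: {port_mbps('switch_ip'):.1f} Mb/s")
```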
| Feature | Consumer Switch | Enterprise Switch |
|---|---|---|
| Backplane ratio | 2:1 to 4:1 (oversubscribed) | 1:1 (non-blocking) |
| Buffer memory | 128KB-1MB | 4MB-16MB |
| QoS handling | Basic | 8-16 queues |
| Flow control | Basic 802.3x pause | DCBX/ETS/PFC |
Since speed ratings apply per port, whether every port can actually run at line rate simultaneously comes down to backplane capacity. Modern switches use one of two backplane designs, non-blocking or over-subscribed, and this quick check distinguishes them:
```javascript
// Conceptual switch throughput check
const totalPorts = 24;
const portSpeedGbps = 1;
const backplaneGbps = 48;

// Full duplex means every port can send AND receive at line rate,
// so a non-blocking fabric needs 2 x ports x speed of capacity.
function checkSimultaneousTransfer() {
  return (totalPorts * portSpeedGbps * 2 <= backplaneGbps)
    ? "Non-blocking architecture"
    : "Over-subscribed switch";
}

console.log(checkSimultaneousTransfer()); // "Non-blocking architecture"
```
Consider these transfer scenarios on a 24-port 1Gbps switch (a simplified sharing model follows the list):
- Non-blocking switch: a 48Gbps backplane sustains all 24 ports at 1Gbps full duplex simultaneously
- Over-subscribed (2:1): a 24Gbps backplane causes contention once more than 12 full-duplex transfers are active
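The over-subscribed case can be modeled with simple arithmetic (a sketch that assumes the fabric divides capacity fairly among active transfers; real switches may degrade less gracefully under bursty load):

```python
# Per-transfer throughput on a shared backplane, fair sharing assumed
def effective_gbps(active, port_gbps=1.0, backplane_gbps=24.0):
    demand = active * port_gbps * 2        # full duplex: each transfer uses 2x
    if demand <= backplane_gbps:
        return port_gbps                   # no contention
    return backplane_gbps / (2 * active)   # fair share under contention

for n in (8, 12, 16, 24):
    print(f"{n:2d} transfers -> {effective_gbps(n):.2f} Gb/s each")
# 8 and 12 transfers run at line rate; 16 drop to 0.75, 24 to 0.50 Gb/s
```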
Use iPerf3 to measure actual throughput between multiple host pairs:
```bash
# Server setup (multiple instances, one listener per port)
iperf3 -s -p 5001
iperf3 -s -p 5002

# Client tests (parallel: 8 streams each, 30-second runs)
iperf3 -c server1 -p 5001 -t 30 -P 8 &
iperf3 -c server2 -p 5002 -t 30 -P 8 &
```
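For repeatable measurements, iperf3's JSON output (`-J`) can be parsed programmatically; a minimal sketch reusing the hostnames above (`measure_gbps` is a hypothetical helper):

```python
import json, subprocess

def measure_gbps(server, port):
    # -J makes iperf3 emit JSON; sum_received aggregates all parallel streams
    out = subprocess.check_output(
        ["iperf3", "-c", server, "-p", str(port), "-t", "10", "-P", "8", "-J"],
        text=True)
    return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"server1: {measure_gbps('server1', 5001):.2f} Gb/s")
```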
Optimize switch performance with these CLI commands (Cisco IOS example):
```
interface GigabitEthernet1/0/1
 speed 1000
 duplex full
 flowcontrol receive on
 spanning-tree portfast
```