When troubleshooting x11vnc performance issues, the 10% bandwidth utilization during screen updates presents a classic optimization puzzle. Unlike raw file transfers via SCP, which saturate available bandwidth, x11vnc behaves fundamentally differently because of its frame-processing pipeline:
# Typical x11vnc command showing basic parameters
x11vnc -display :0 -shared -forever -noxdamage -rfbport 5900
The performance bottleneck typically occurs in one of these stages:
- Framebuffer capture latency
- X11 damage region processing
- Compression algorithm overhead
- Network buffer management
Enable logging to see where time is going (note that -debug_pointer and -debug_keyboard are toggles, not numeric options):
x11vnc -display :0 -o /tmp/x11vnc.log -defer 10 -debug_pointer -debug_keyboard
Key metrics to analyze in logs:
- FB read time:  12.4 ms
- JPEG compress:  8.2 ms
- Network send:   4.7 ms
- Total cycle:   25.3 ms (39.5 FPS)
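As a sanity check on figures like these, the total cycle time converts directly into a frame-rate ceiling. A minimal sketch (the 25.3 ms value is the example measurement above):

```shell
# Convert a measured per-frame cycle time (ms) into the achievable FPS ceiling.
cycle_ms=25.3
awk -v c="$cycle_ms" 'BEGIN { printf "%.1f FPS\n", 1000 / c }'
# → 39.5 FPS
```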
1. Selective Frame Updates
If X DAMAGE events are unreliable (common under compositing window managers), disable them and fall back to polling, trimming input-related work at the same time:
x11vnc -noxdamage -xwarppointer -nodragging -threads
2. Compression Tuning
Compression level and JPEG quality are negotiated by the viewer rather than set on the server; on the x11vnc side you can hint at the link type instead:
x11vnc -speeds modem   # assume a slow link: favor smaller, cheaper updates
x11vnc -speeds lan -noshm   # assume a fast LAN; -noshm disables MIT-SHM capture
3. Update Pacing
Adjust how often x11vnc polls the screen and how long it defers batched updates (-wait and -defer are in milliseconds; -sb is the screen-blank timeout in seconds, not a TCP buffer):
x11vnc -sb 1 -wait 10 -defer 5
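Note that the -wait value alone bounds how fast updates can ever arrive: a 10 ms poll interval caps the pipeline at roughly 100 screen scans per second. A back-of-envelope sketch:

```shell
# Upper bound on screen polls per second implied by -wait (milliseconds).
wait_ms=10
awk -v w="$wait_ms" 'BEGIN { printf "%d polls/sec max\n", 1000 / w }'
# → 100 polls/sec max
```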
For high-motion scenarios, consider these alternatives:
# Client-side cache plus wireframe-only window drags
x11vnc -ncache 10 -wireframe
# Progressive (banded) tight-encoding updates
x11vnc -progressive 100
Use system tools to identify bottlenecks:
strace -T -ttt -o x11vnc.trace x11vnc [options]
perf stat -e cycles,instructions,cache-references x11vnc [options]
This reveals where CPU time is actually being spent during slow updates.
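One way to mine the strace output: with -T, every line ends in the elapsed time in angle brackets, so sorting on the text after the final `<` surfaces the slowest calls. A sketch (the helper name slowest_calls is an arbitrary choice):

```shell
# With strace -T, every line ends with the elapsed time, e.g. "... = 0 <0.012345>".
# Sorting general-numerically on the text after '<' surfaces the slowest syscalls.
slowest_calls() {
  sort -t'<' -k2 -gr -- "$1" | head -5
}
# Usage: slowest_calls x11vnc.trace
```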
Stepping back to the original symptom: unlike tools such as SCP that saturate available bandwidth, x11vnc appears artificially constrained to about 10% of capacity during intensive operations like browser tab switching.
Start by gathering these diagnostic metrics:
# Check current x11vnc bandwidth usage
iftop -i eth0 -f "port 5900"

# Monitor framebuffer operations
x11vnc -nap -wait 50 -sb 10 -debug_grabs
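To turn iftop's reading into the utilization figure under discussion, compare it against link capacity. A sketch in which both numbers are assumed examples, not measurements from this system:

```shell
# Utilization = observed throughput / link capacity (both figures assumed).
link_mbps=1000       # assumed gigabit link
observed_mbps=100    # assumed rate read off iftop
awk -v o="$observed_mbps" -v l="$link_mbps" \
    'BEGIN { printf "%.0f%% of link capacity\n", 100 * o / l }'
# → 10% of link capacity
```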
The framebuffer read speed (601MB/s) suggests the capture mechanism isn't the issue. We should examine:
- Compression overhead (tight/zlib encodings, negotiated by the viewer)
- X DAMAGE notification delays (-xdamage/-noxdamage)
- Color depth conversion time (e.g. -24to32, client pixel format)
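Some quick arithmetic shows why the compression stage dominates: an uncompressed full-screen update is huge relative to what most links carry. A sketch assuming a 1920x1080, 32-bit display:

```shell
# Raw bytes in one full framebuffer update (assumed 1920x1080 @ 32bpp).
w=1920; h=1080; bpp=32
frame_bytes=$(( w * h * bpp / 8 ))
echo "$frame_bytes bytes per frame"                     # → 8294400 bytes per frame
# Uncompressed bandwidth needed at 30 updates/sec:
echo "$(( frame_bytes * 30 * 8 / 1000000 )) Mbit/s"     # → 1990 Mbit/s
```

So even a fast capture path (the 601MB/s read above) must be followed by aggressive compression to fit a sustained stream onto a gigabit link.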
Try these x11vnc parameters to improve responsiveness:
x11vnc \
  -nocursor \
  -nodragging \
  -threads \
  -speeds modem \
  -wait 10 \
  -defer 10 \
  -xdamage \
  -nosel \
  -noxfixes \
  -shared \
  -forever \
  -tightfilexfer \
  -progressive 100
Linux TCP tuning can help x11vnc better utilize available bandwidth:
# Increase TCP window sizes
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# Enable TCP low latency mode
sysctl -w net.ipv4.tcp_low_latency=1
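To make those settings survive a reboot, drop them into a sysctl config file. A sketch (requires root; the file name 99-vnc-tuning.conf is an arbitrary choice):

```shell
# Persist the TCP tuning across reboots (requires root; file name is arbitrary).
cat > /etc/sysctl.d/99-vnc-tuning.conf <<'EOF'
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF
sysctl --system   # reload all sysctl configuration files
```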
If performance remains unsatisfactory, consider TurboVNC, a separate Xvnc-based server tuned for high-motion content (it is distributed from turbovnc.org rather than as a stock apt package). Comparable x11vnc-side options:
x11vnc -nocursorshape -wireframe -scrollcopyrect
To watch per-frame activity, log to a file and follow it (the amount of timing detail varies with build and verbosity settings):
x11vnc -display :0 -o /tmp/x11vnc.log
tail -f /tmp/x11vnc.log