When running long-term processes that generate continuous output, developers often face a dilemma: either watch the live output (which quickly becomes unwieldy) or redirect it to a file (which introduces buffering delays). The cause is stdio's buffering policy: output to a terminal is line-buffered (each line appears immediately), but output redirected to a file (> out.txt) is fully buffered, so data may not reach the file until the buffer fills or the program exits.
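The same full-buffering behavior is easy to observe from Python, whose file objects wrap equivalent buffering; in this sketch (the scratch file path is arbitrary), bytes written to a fully buffered file do not reach disk until an explicit flush:

```python
import os
import tempfile

# Create a scratch file for the demo
fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, "w")                # regular file: fully (block) buffered by default
f.write("status update\n")         # 14 bytes stay in the user-space buffer
buffered = os.path.getsize(path)   # still 0 bytes on disk
f.flush()                          # force the buffer out to the OS
flushed = os.path.getsize(path)    # now 14 bytes
f.close()
os.remove(path)

print(buffered, flushed)   # 0 14
```

A program monitoring the file during the write would see nothing until the flush, which is exactly the problem with redirected long-running processes.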
To get unbuffered output without modifying the program itself, consider these approaches:
# Method 1: Using stdbuf
stdbuf -o0 ./long_running_process > out.txt
# Method 2: Using unbuffer (expect package)
unbuffer ./long_running_process > out.txt
# Method 3: Using script
script -f -c "./long_running_process" out.txt
For C/C++ Programs
#include <stdio.h>
#include <unistd.h>  /* for sleep() */

int main(void) {
    setvbuf(stdout, NULL, _IONBF, 0); /* disable buffering entirely */
    while (1) {
        printf("Real-time data point\n");
        sleep(1);
    }
    return 0;
}
For Python Programs
import sys
import time

sys.stdout = open('output.log', 'w', buffering=1)  # Line buffering
while True:
    print("Current status update")
    time.sleep(1)
For situations where you can't modify the program or redirection method:
# Use tail with follow mode
tail -f out.txt
# Combine with grep for filtering
tail -f out.txt | grep "ERROR"
# Use less with follow mode
less +F out.txt
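When you want filtering logic in-process rather than in a shell pipeline, the follow behavior of tail -f is simple to approximate in Python; follow() here is a hypothetical helper written for this sketch, not a standard function:

```python
import time

def follow(path, poll_interval=0.2):
    """Yield lines as they are appended to path, like `tail -f` (simplified)."""
    with open(path) as f:
        f.seek(0, 2)                      # jump to end of file, as tail -f does
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(poll_interval) # no new data yet; poll again

# Usage (blocks forever, like tail -f):
# for line in follow("out.txt"):
#     if "ERROR" in line:
#         print(line, end="")
```

Polling is the simplest portable approach; inotify-based libraries avoid the polling delay but are Linux-specific.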
While unbuffered I/O provides immediate visibility, it comes with performance costs. For high-frequency output (1000+ lines per second), consider:
- Using line buffering as a compromise (setvbuf with _IOLBF) instead of disabling buffering entirely
- Implementing periodic flush operations instead of complete unbuffering
- Separating critical debug output from normal logging
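The periodic-flush compromise above can be sketched in a few lines of Python; the batch size of 50 is an arbitrary example value to tune against your output rate:

```python
import sys

def emit(records, flush_every=50):
    """Print records, flushing stdout every flush_every lines (sketch)."""
    for i, rec in enumerate(records, start=1):
        print(rec)
        if i % flush_every == 0:
            sys.stdout.flush()  # bounded latency without a syscall per line
    sys.stdout.flush()          # final flush so nothing is lost at exit

emit(f"data point {i}" for i in range(200))
```

This caps the monitoring delay at flush_every lines while keeping the number of write system calls low.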
When running long-term batch processes that generate continuous output, developers often face a dilemma:
- Terminal display shows real-time progress but loses history due to scroll limits
- File redirection (using > output.log) buffers the output, so new lines may not appear for a long time, making real-time monitoring impractical
Unix/Linux systems typically use line buffering for terminal output (showing each line immediately) but switch to block buffering for file redirection (waiting to accumulate data before writing). This optimization improves performance but hinders real-time monitoring.
The stdbuf utility from GNU Coreutils provides the most straightforward solution:
stdbuf -oL your_command > output.log
Key options:
- -oL : line buffering for stdout
- -eL : line buffering for stderr
- -o0 : no buffering (may impact performance)
For Python scripts, force unbuffered mode:
python -u your_script.py > output.log
Or set these environment variables:
export PYTHONUNBUFFERED=1
export PYTHONIOENCODING=UTF-8
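The effect of -u is visible when reading a child's pipe before the child exits; in this sketch the child writes one line and then sleeps, and the line arrives immediately only because buffering is off (without -u, readline would stall until the child terminated and its buffer was flushed):

```python
import subprocess
import sys

# Child process: write one line, then stay alive for a while
child_code = "import sys, time\nsys.stdout.write('ready\\n')\ntime.sleep(30)\n"

p = subprocess.Popen([sys.executable, "-u", "-c", child_code],
                     stdout=subprocess.PIPE)
line = p.stdout.readline()   # returns at once thanks to -u
p.kill()
p.wait()

print(line)   # b'ready\n'
```

This is the same situation as tailing a redirected log file: the reader only sees what the writer has actually pushed out of its buffer.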
Combine real-time viewing with file logging:
your_command | tee output.log
For continuous appending:
your_command | tee -a output.log
Many programs support their own unbuffered modes:
- Perl: $| = 1; (autoflush)
- Java: System.out.flush() after each print
- C: setvbuf(stdout, NULL, _IONBF, 0);
While unbuffered output provides real-time visibility, it comes with tradeoffs:
- Many more system calls (potentially one write per line, or even per character)
- Potential disk I/O bottlenecks
- Increased CPU usage
For high-frequency output (>100 lines/sec), consider periodic flushing instead of complete unbuffering.
For long-running processes, implement log rotation:
your_command | multilog t s1000000 n10 ./logs
This uses multilog (from the daemontools package) to write timestamped log files of up to 1 MB each, keeping the most recent 10.
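If the long-running process is your own Python program, the standard library can apply the same rotation policy directly via logging.handlers.RotatingFileHandler (the file name and logger name here are illustrative):

```python
import logging
from logging.handlers import RotatingFileHandler

# Same policy as the multilog example: ~1 MB per file, last 10 kept
handler = RotatingFileHandler("output.log", maxBytes=1_000_000, backupCount=10)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

logger = logging.getLogger("worker")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Current status update")  # files rotate automatically as they fill
```

When output.log reaches the size limit, it is renamed to output.log.1 (and so on up to output.log.10) and a fresh file is started, so disk usage stays bounded without any external tool.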