Implementing a Circular Buffer for Log Files in Linux: Real-time 1GB Size Limit Without Rotation


Most Linux administrators reach for tools like logrotate when dealing with log file growth, but this approach has significant limitations for real-time systems. The conventional rotation method:

  • Requires periodic cron jobs (typically daily)
  • Involves full file rewrites during rotation
  • Creates multiple archived copies consuming disk space
  • Can't enforce strict size limits between rotations

Several approaches can keep a log capped at a fixed size continuously, with no scheduled rotation step:

Method 1: FIFO Special File

mkfifo /var/log/circular.log.pipe
# A background reader drains the pipe into a size-capped regular file
# (the full trimming loop is shown later in this article):
cat /var/log/circular.log.pipe >> /var/log/circular.log &
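
Any program that writes to a file can write to the pipe instead; your_app below is just a placeholder for whatever process produces the logs:

# Redirect the application's output into the pipe (your_app is a placeholder)
your_app >> /var/log/circular.log.pipe 2>&1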

Method 2: RAM Disk with Logging

# Create 1GB ramdisk
mkdir -p /var/log/ramdisk
mount -t tmpfs -o size=1g tmpfs /var/log/ramdisk

# Configure your application to log here
# Add this to /etc/fstab so the mount is recreated at boot
# (note: the data held in tmpfs is lost on reboot):
tmpfs /var/log/ramdisk tmpfs defaults,size=1g 0 0
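
Note that tmpfs only caps the total space: once the 1GB fills up, writes fail with ENOSPC instead of recycling old data, so it still needs one of the trimming approaches below. A quick sanity check that the mount is active:

# Verify the ramdisk is mounted and watch how much of the 1GB is used
mount | grep /var/log/ramdisk
df -h /var/log/ramdisk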

For precise control, we can implement a size-capped logger in Python. It trims the oldest data by rewriting the file, which is simple but costs a full rewrite each time the limit is hit:

import os

class CircularLog:
    def __init__(self, filename, max_size=1073741824):  # default limit: 1 GiB
        self.filename = filename
        self.max_size = max_size
        self._initialize_file()

    def _initialize_file(self):
        # Create an empty log file if it does not already exist
        if not os.path.exists(self.filename):
            open(self.filename, 'wb').close()

    def write(self, message):
        # Work in bytes so the size accounting matches what is on disk
        data = message.encode('utf-8')
        current_size = os.path.getsize(self.filename)

        if current_size + len(data) > self.max_size:
            # Over the limit: drop the oldest bytes, then append the new entry.
            # This rewrites the whole file and may cut the first line mid-way;
            # it is also not safe for concurrent writers without a lock.
            with open(self.filename, 'rb+') as f:
                content = f.read()
                excess = (current_size + len(data)) - self.max_size
                content = content[excess:]
                f.seek(0)
                f.truncate()
                f.write(content + data)
        else:
            with open(self.filename, 'ab') as f:
                f.write(data)

# Usage:
logger = CircularLog('/var/log/app.log', max_size=1073741824)
logger.write("New log entry\n")

When implementing circular logging, monitor these key metrics (a small monitoring sketch follows the list):

  • I/O wait times during buffer trimming
  • Memory usage for in-memory solutions
  • CPU overhead for continuous file operations
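
A minimal monitoring sketch using standard tools (iostat and pidstat come from the sysstat package; LOGGER_PID is a placeholder for your logging process):

# Device-level I/O wait and utilization, refreshed every second
iostat -x 1

# Memory consumed by a tmpfs-backed log directory
df -h /var/log/ramdisk

# CPU overhead of the logging process
pidstat -p "$LOGGER_PID" 1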

For production systems, consider these optimizations:

# Use O_DIRECT for direct I/O (bypasses the page cache).
# Note: O_DIRECT requires buffers and offsets aligned to the filesystem
# block size, so it only pays off for large, block-aligned batch writes.
fd = os.open('/var/log/app.log', os.O_WRONLY | os.O_CREAT | os.O_DIRECT)

# Batch writes instead of writing one line at a time
batch = []
def flush_batch():
    # Call this on a timer or once the batch reaches a size threshold
    if batch:
        logger.write(''.join(batch))
        batch.clear()

Modern Linux systems can leverage systemd-journald:

# Configure in /etc/systemd/journald.conf
[Journal]
SystemMaxUse=1G
RuntimeMaxUse=1G
SystemKeepFree=100M
RuntimeKeepFree=100M
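
After editing journald.conf, restart the daemon so the limits take effect; journalctl can confirm disk usage and trim existing journals immediately:

# Apply the new limits and check how much space the journal uses
systemctl restart systemd-journald
journalctl --disk-usage

# Trim already-written journals down to the 1G cap right away
journalctl --vacuum-size=1G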

While tools like logrotate serve their purpose for scheduled log maintenance, they fall short when you need continuous log management with strict size constraints. The fundamental issues are:

  • Batch processing creates I/O spikes during rotation
  • Daily rotations don't prevent uncontrolled log growth
  • Rotated logs still consume disk space until cleanup

We can implement this with a named pipe and a small trimming loop, using only standard userspace tools:

# Basic FIFO implementation example
mkfifo /var/log/circular.log.pipe
# Drain the pipe into the log file, then trim it back to the newest 1 GiB.
# Note: the trim only runs after the current writer closes the pipe.
( while true; do
    cat /var/log/circular.log.pipe >> /var/log/circular.log
    if [ "$(stat -c%s /var/log/circular.log)" -gt 1073741824 ]; then
        tail -c 1073741824 /var/log/circular.log > /tmp/circular.tmp &&
            mv /tmp/circular.tmp /var/log/circular.log
    fi
done ) &

For production systems, configuring syslog-ng provides better reliability:

destination d_circular {
    file("/var/log/circular.log"
        template("${ISODATE} ${HOST} ${MESSAGE}\n")
        log-fifo-size(10000)        # output queue length, in messages
        overwrite-if-older(86400)   # start a fresh file if it is older than a day
    );
};
# Note: the file() destination does not enforce a byte-size cap by itself;
# pair it with one of the trimming loops above to hold the 1GB limit.

log {
    source(s_src);
    destination(d_circular);
};
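
Before activating the new destination, validate the configuration and reload syslog-ng (the service name may vary slightly between distributions):

# Check the configuration syntax, then reload the running daemon
syslog-ng --syntax-only
systemctl reload syslog-ng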

For a hard cap enforced at the filesystem level, a fixed-size loopback filesystem can be used (note this is a dedicated, size-limited filesystem rather than a true kernel ring buffer):

# Create a fixed 1 GiB backing file and mount it as a dedicated log filesystem
dd if=/dev/zero of=/circular.bin bs=1M count=1024
LOOP_DEV=$(losetup -f --show /circular.bin)   # attach the file to the first free loop device
mkfs.ext4 "$LOOP_DEV"
mkdir -p /var/log/circular
mount "$LOOP_DEV" /var/log/circular
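
To bring the loopback filesystem back automatically after a reboot, an fstab entry with the loop option can be added, mirroring the tmpfs example earlier:

# /etc/fstab entry so the 1 GiB log filesystem is mounted at boot
/circular.bin /var/log/circular ext4 loop,defaults 0 0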

Key implementation factors to remember:

Solution       Max Size            I/O Overhead   Complexity
FIFO pipe      Configurable        Moderate       Low
syslog-ng      1G+                 Low            Medium
Loopback FS    Fixed at creation   Very low       High

For high-volume logging systems, combine multiple techniques:

#!/bin/bash
# Combined buffer management script (inotifywait comes from the inotify-tools package)
LOG_FILE="/var/log/application.log"
MAX_SIZE=$((1024*1024*1024))   # 1 GiB hard limit
CHUNK_SIZE=1048576             # trim an extra 1 MiB so we do not rewrite on every event

while true; do
    # Block until the log file is modified
    inotifywait -q -e modify "$LOG_FILE"
    CURRENT_SIZE=$(stat -c%s "$LOG_FILE")

    if [ "$CURRENT_SIZE" -gt "$MAX_SIZE" ]; then
        # Drop the oldest TRIM_SIZE bytes, keeping the newest data
        TRIM_SIZE=$((CURRENT_SIZE - MAX_SIZE + CHUNK_SIZE))
        tail -c +$((TRIM_SIZE + 1)) "$LOG_FILE" > "${LOG_FILE}.tmp"
        mv "${LOG_FILE}.tmp" "$LOG_FILE"
        # mv replaces the inode: the writing application must reopen the
        # log file (or be restarted) to keep writing to the trimmed copy.
    fi
done
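
To keep this trimming loop running across reboots, it can be wrapped in a small systemd service; the unit name and script path below are illustrative, not fixed:

# Install the script as a long-running service (paths are examples)
cat > /etc/systemd/system/circular-log.service <<'EOF'
[Unit]
Description=Circular log trimmer for /var/log/application.log

[Service]
ExecStart=/usr/local/bin/circular-log.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now circular-log.service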