When working with CoreOS and Docker containers, getting journald logs into a consumable format for tools like Logstash presents unique challenges. The native binary format of journald logs requires conversion before processing in most log management systems.
While your proposed solution using journalctl -f --output=json | tee works, it has some reliability concerns:
# Basic implementation
journalctl -f --output=json | tee -a /var/log/journald.json
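Before wiring anything up, it helps to eyeball one record and confirm the shape of the JSON that journalctl emits (jq is only used here for pretty-printing and may not be present on a stock CoreOS host):
# Show the most recent journal entry as a single JSON object per line
journalctl -n 1 --output=json
# If jq is available, pretty-print it to inspect field names such as MESSAGE and _SYSTEMD_UNIT
journalctl -n 1 --output=json | jq .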
Potential issues include:
- No automatic restart on failure
- Potential log corruption if the process terminates unexpectedly
- No built-in log rotation
Here's a more reliable implementation using a systemd service:
[Unit]
Description=Journald to JSON Log Exporter
After=network.target
[Service]
Restart=always
ExecStart=/bin/sh -c 'journalctl -f --output=json >> /var/log/journald.json'
StandardOutput=null
[Install]
WantedBy=multi-user.target
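Assuming you save the unit as /etc/systemd/system/journald-export.service (the name is just an example), enable and check it with:
sudo systemctl daemon-reload
sudo systemctl enable --now journald-export.service
sudo systemctl status journald-export.service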
For those preferring syslog-ng, here's a working container setup that avoids the "Address already in use" error:
# docker-compose.yml
version: '3'
services:
  syslog-ng:
    image: balabit/syslog-ng
    volumes:
      - ./syslog-ng.conf:/etc/syslog-ng/syslog-ng.conf
      - /run/systemd/journal:/run/systemd/journal:ro
    command: /usr/sbin/syslog-ng -F
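Bring the container up and confirm that entries are flowing into the destination file defined in the syslog-ng configuration below (assuming the /var/log/messages path shown there):
docker-compose up -d
# Tail the destination file inside the container
docker-compose exec syslog-ng tail -f /var/log/messages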
With this syslog-ng configuration:
source s_journald {
    systemd-journal();
};
destination d_file {
    file("/var/log/messages");
};
log {
    source(s_journald);
    destination(d_file);
};
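If you want syslog-ng itself to emit JSON for Logstash, the format-json template function can produce one JSON object per line. This is a sketch and assumes your syslog-ng build ships the JSON module; the d_json name is arbitrary:
destination d_json {
    file(
        "/var/log/messages.json"
        # Build a JSON object from the standard syslog macros plus the
        # name-value pairs carried in the journal entry.
        template("$(format-json --scope rfc5424 --scope nv-pairs)\n")
    );
};
log {
    source(s_journald);
    destination(d_json);
};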
Whichever method you choose, implement log rotation:
# /etc/logrotate.d/journald-export
/var/log/journald.json {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
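One caveat: the long-running journalctl process keeps its file descriptor open, so the default rename-based rotation would leave it writing to the old file. Adding copytruncate inside the block above (or restarting the exporter in a postrotate script) avoids that:
# Add inside the /var/log/journald.json { ... } block:
# copy the file, then truncate the original in place, so the running
# journalctl keeps writing to the same inode
copytruncate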
For high-volume systems, consider these tweaks:
# Journald config (/etc/systemd/journald.conf)
[Journal]
RateLimitInterval=0
RateLimitBurst=0
SystemMaxUse=1G
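Changes to journald.conf only take effect after the journal daemon is restarted, and it is worth keeping an eye on how much space the journal itself consumes:
sudo systemctl restart systemd-journald
# Report the disk space currently used by the binary journal
journalctl --disk-usage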
If you would rather route the logs through syslog, note that the traditional syslog approach often conflicts with systemd's journald implementation, especially when attempting to mount the syslog socket into Docker containers.
Here are two proven methods to achieve log forwarding:
Method 1: Using journald's ForwardToSyslog Feature
Configure journald to forward logs directly to syslog:
# /etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes
Storage=persistent
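After restarting systemd-journald to apply the change, a quick end-to-end check is to write a tagged test entry and confirm it shows up (the forward-test tag is just an example):
# Apply the ForwardToSyslog change
sudo systemctl restart systemd-journald
# Write a test message into the journal / forwarded syslog stream
logger -t forward-test "hello from journald"
# It should appear in the journal immediately, and in the syslog-ng
# destination file once syslog-ng is running with the config below
journalctl -t forward-test -n 1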
Then configure syslog-ng to pick up the entries (the systemd-journal() source shown here reads the journal directly; to consume the forwarded syslog stream instead, point a unix-dgram() source at /run/systemd/journal/syslog):
source s_journald {
    systemd-journal();
};
destination d_file {
    file("/var/log/syslog-ng/journald.log");
};
log {
    source(s_journald);
    destination(d_file);
};
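syslog-ng can validate the configuration before you (re)start it, and the test entry from above should then land in the destination file (run these inside the container if syslog-ng is not installed on the host, and make sure /var/log/syslog-ng exists):
# Check the configuration for syntax errors without starting the daemon
syslog-ng --syntax-only -f /etc/syslog-ng/syslog-ng.conf
# Watch forwarded entries arrive
tail -f /var/log/syslog-ng/journald.log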
Method 2: Direct Journalctl Output with JSON Formatting
For a lightweight solution, consider this systemd service unit:
# /etc/systemd/system/journald-to-file.service
[Unit]
Description=Journald to File Forwarder
[Service]
ExecStart=/bin/sh -c '/usr/bin/journalctl -f --output=json | tee -a /var/log/journald/journald.log'
Restart=always
# Discard tee's stdout so the exported lines are not fed back into the journal by journalctl -f
StandardOutput=null
[Install]
WantedBy=multi-user.target
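The ExecStart line assumes /var/log/journald already exists; if it might not, one option (not part of the original unit) is to create it just before the main process starts:
# Hypothetical addition to the [Service] section above
ExecStartPre=/bin/mkdir -p /var/log/journald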
When running in Docker containers, ensure proper volume mounts and permissions:
docker run -d \
  --name syslog-ng \
  -v /var/log/journald:/var/log/journald \
  -v /run/systemd/journal:/run/systemd/journal \
  your-syslog-ng-image
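A quick sanity check once the container is running:
# Confirm syslog-ng started cleanly
docker logs syslog-ng
# Confirm the bind-mounted journal directory is visible inside the container
docker exec syslog-ng ls /run/systemd/journal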
A few operational notes:
- For high-volume systems, make sure log rotation is in place (see the logrotate config above)
- Monitor disk space when using persistent storage (a quick check is shown after this list)
- JSON formatting keeps the output compatible with Logstash
- Test failure scenarios (container restarts, network issues)
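For the disk-space point in particular, these two commands cover both the binary journal and the exported files:
# Space used by the binary journal
journalctl --disk-usage
# Space used by the exported JSON files
du -sh /var/log/journald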
Here's a sample Logstash config to process the forwarded logs:
input {
  file {
    path => "/var/log/journald/*.log"
    # Read each line as plain text; the json filter below does the parsing
    sincedb_path => "/dev/null"   # do not persist the read position (handy while testing)
  }
}
filter {
  json {
    source => "message"
    target => "journald"   # nest the parsed journald fields under "journald"
  }
}
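The snippet above has no output section. While validating the pipeline, a stdout output is handy; swap in your real Elasticsearch (or other) output once the parsed events look right:
output {
  # Print parsed events to the console while testing
  stdout { codec => rubydebug }
}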