When transitioning from file-based logging to database logging with MySQL, developers often miss the convenience of tail -f
for real-time monitoring. Our Java application using logback now stores logs in MySQL, but we need a way to stream new entries as they're inserted.
Here are several approaches to implement tail-like functionality for MySQL tables:
1. Using MySQL Triggers and External Scripts
Create a trigger that signals an external process when new rows are inserted:

DELIMITER //
CREATE TRIGGER log_alert
AFTER INSERT ON application_logs
FOR EACH ROW
BEGIN
    -- Writes the new row to a file that another process can monitor.
    -- Caveats: INTO OUTFILE refuses to overwrite an existing file, so this
    -- fires successfully at most once per file name; it also requires the
    -- FILE privilege and is restricted by secure_file_priv. Treat it as a
    -- proof of concept; a trigger that inserts into a notification table
    -- (shown later) is more robust.
    SELECT NEW.id, NEW.message INTO OUTFILE '/tmp/log_updates'
    FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
END //
DELIMITER ;
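The "external process" side of this approach can be sketched as a small follower that remembers its byte offset and returns only lines appended since the last check. The class name and the idea of polling the file are illustrative, not from any library; the path would be whatever the trigger writes to (e.g. /tmp/log_updates above):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

/** Follows a file like `tail -f`: each poll() returns lines appended since the last call. */
public class FileFollower {
    private final String path;
    private long offset = 0; // byte position just past the last line we consumed

    public FileFollower(String path) {
        this.path = path;
    }

    /** Reads any new lines appended since the previous poll. */
    public List<String> poll() throws IOException {
        List<String> lines = new ArrayList<>();
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            file.seek(offset);
            String line;
            // Simplifications: readLine() decodes bytes as Latin-1 (fine for ASCII
            // logs), and a line still being written may be consumed early.
            while ((line = file.readLine()) != null) {
                lines.add(line);
            }
            offset = file.getFilePointer();
        }
        return lines;
    }
}
```

A real deployment would also handle the file being rotated or truncated (reset the offset when the file shrinks), which is omitted here for brevity.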
2. Polling with Last ID Tracking
A simple Java solution using JDBC:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LogTail {
    // Adjust the JDBC URL and credentials for your environment
    private static final String DB_URL =
            "jdbc:mysql://localhost:3306/logging_db?user=root&password=secret";

    public static void main(String[] args) throws SQLException, InterruptedException {
        String query = "SELECT id, message FROM application_logs WHERE id > ? ORDER BY id ASC";
        try (Connection conn = DriverManager.getConnection(DB_URL);
             PreparedStatement stmt = conn.prepareStatement(query)) {
            long lastId = getMaxId(conn); // start at the current end, like tail -f
            while (true) {
                stmt.setLong(1, lastId);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id");
                        System.out.println(rs.getString("message"));
                    }
                }
                Thread.sleep(1000); // Poll every second
            }
        }
    }

    // Returns the current maximum id, or 0 for an empty table
    private static long getMaxId(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT COALESCE(MAX(id), 0) FROM application_logs")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}
3. Using MySQL Binlog
For more advanced solutions, consider parsing the MySQL binary log:
mysqlbinlog --read-from-remote-server --host=localhost --user=root --password \
--start-position=4 --stop-never mysql-bin.000001
When implementing database tailing:
- Add proper indexing on timestamp/ID columns
- Consider read replicas for production monitoring
- Batch processing may be more efficient than single-row polling
- Monitor the impact on database performance
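On the batching and performance points: instead of a fixed one-second sleep, a poller can back off while the table is idle and snap back to fast polling when rows appear. A minimal sketch of that policy as a hypothetical helper (not part of any library):

```java
/** Computes the next poll delay: snap to the floor when rows arrived, else back off. */
public final class PollBackoff {
    private PollBackoff() {}

    /**
     * @param sawRows   whether the last poll returned any rows
     * @param currentMs delay used for the last poll
     * @param floorMs   minimum delay, used whenever rows are flowing
     * @param ceilingMs maximum delay while the table is idle
     */
    public static long nextDelayMs(boolean sawRows, long currentMs,
                                   long floorMs, long ceilingMs) {
        if (sawRows) {
            return floorMs;                            // busy table: poll at full speed
        }
        return Math.min(currentMs * 2, ceilingMs);     // idle: double the delay, capped
    }
}
```

The caller would feed the returned value into Thread.sleep() and pass it back in on the next iteration, so an idle table costs one cheap query every ceilingMs instead of one per second.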
For production environments, consider these specialized tools:
- Maxwell's Daemon (reads MySQL binlog)
- Debezium (change data capture platform)
- Kafka Connect with JDBC Source Connector
As an alternative to ID tracking, you can poll by timestamp. A simple Java implementation:
public void tailLogTable(Connection conn) throws SQLException, InterruptedException {
    // Start one minute in the past. Caveat: "created_at > ?" can skip rows
    // that share the last-seen timestamp; ID-based tracking is more reliable.
    Timestamp lastSeen = new Timestamp(System.currentTimeMillis() - 60000);
    String query = "SELECT * FROM app_logs WHERE created_at > ? ORDER BY created_at";
    try (PreparedStatement stmt = conn.prepareStatement(query)) {
        while (true) {
            stmt.setTimestamp(1, lastSeen);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("log_message"));
                    lastSeen = rs.getTimestamp("created_at");
                }
            }
            Thread.sleep(1000); // Poll every second
        }
    }
}
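The query above uses created_at > ?, which can drop rows that share the boundary timestamp; switching to >= returns duplicates instead. The usual fix is to query with >= and filter out IDs already emitted at the boundary timestamp. A sketch of that cursor (the class name is illustrative, not a library API):

```java
import java.util.HashSet;
import java.util.Set;

/** Deduplicates rows when polling with "created_at >= lastSeen". */
public class TimestampCursor {
    private long lastSeenMillis = Long.MIN_VALUE;
    private final Set<Long> idsAtBoundary = new HashSet<>();

    /** Returns true if this (id, timestamp) row has not been emitted before. */
    public boolean accept(long id, long timestampMillis) {
        if (timestampMillis < lastSeenMillis) {
            return false; // older than the cursor: already processed
        }
        if (timestampMillis > lastSeenMillis) {
            // Moved past the boundary: advance the cursor and reset the ID set
            lastSeenMillis = timestampMillis;
            idsAtBoundary.clear();
        }
        return idsAtBoundary.add(id); // false if already seen at this timestamp
    }
}
```

The poller calls accept() for every row the >= query returns and prints only the rows it accepts, so no row is lost or printed twice even when many share one timestamp.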
Alternatively, create a trigger that writes to a secondary notification table, then monitor that table:

DELIMITER //
CREATE TRIGGER log_trigger AFTER INSERT ON app_logs
FOR EACH ROW
BEGIN
    INSERT INTO log_notifications (log_id, created_at)
    VALUES (NEW.id, NOW());
END //
DELIMITER ;
For production-grade change data capture, a platform such as Debezium can stream row-level changes from the MySQL binlog (binlog_format=ROW) into Kafka:
# Debezium configuration example
{
"name": "log-connector",
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"database.hostname": "localhost",
"database.port": "3306",
"database.user": "debezium",
"database.password": "password",
"database.server.id": "184054",
"database.server.name": "logserver",
"database.include.list": "logging_db",
"table.include.list": "logging_db.app_logs",
"database.history.kafka.bootstrap.servers": "kafka:9092",
"database.history.kafka.topic": "schema-changes.logging_db"
}
}
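Each change event Debezium publishes is a JSON envelope whose payload.after field holds the new row. As a rough illustration of that shape, here is a trimmed-down, hypothetical event and a naive field extractor; a real consumer should use a Kafka client plus a proper JSON library (e.g. Jackson) rather than regular expressions:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DebeziumEnvelopeDemo {
    // Trimmed-down example of a Debezium "create" (op = "c") event for app_logs
    static final String SAMPLE =
        "{\"payload\":{\"op\":\"c\"," +
        "\"after\":{\"id\":42,\"log_message\":\"user login ok\"}}}";

    /** Naive string-field extraction, for illustration only. */
    public static String field(String envelope, String name) {
        Matcher m = Pattern.compile("\"" + Pattern.quote(name) + "\":\"([^\"]*)\"")
                           .matcher(envelope);
        return m.find() ? m.group(1) : null;
    }
}
```

A consumer subscribed to the connector's topic would receive one such envelope per inserted log row, giving push-based tailing with no polling against the primary database.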
Finally, for long-running pollers, use a connection pool (for example HikariCP) rather than a single long-lived connection; the indexing, read-replica, and batching advice above applies here as well.