How to Properly Read and Parse /var/log/lastlog on CentOS 6.5 (30GB File Handling)


The /var/log/lastlog file is a binary database that records the last login time of every user. Unlike regular text logs, it uses a fixed-length record format indexed directly by UID: user N's record lives at byte offset N times the record size. This is why a single account with a very large UID balloons the file's apparent size (a 30GB lastlog almost always means a high UID, not 30GB of data), and why the file is normally sparse on disk.
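Because the record index is the UID, a specific user's entry can be located without scanning the file at all. A minimal Python sketch of the offset arithmetic, assuming the glibc record layout used on CentOS 6 (a 32-bit timestamp plus 32-byte line and 256-byte host fields; sizes can differ on other platforms):

```python
import struct

# Assumed glibc layout on CentOS 6 (see bits/utmp.h):
#   int32_t ll_time; char ll_line[32]; char ll_host[256]
RECORD_FMT = 'i32s256s'
RECORD_SIZE = struct.calcsize(RECORD_FMT)   # 4 + 32 + 256 = 292 bytes

def record_offset(uid):
    """The file is indexed by UID, so user N's record starts at N * 292."""
    return uid * RECORD_SIZE

print(RECORD_SIZE)               # 292
print(record_offset(100000000))  # a UID of 100 million -> ~29.2 GB offset
```

This also explains the 30GB figure: one service account with a UID around 100 million is enough to produce an apparent size of that order.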

When you try to open lastlog with Vim or other text editors:

  • The 30GB size exceeds most editors' memory limits
  • Binary data doesn't display meaningfully in text mode
  • No useful structure appears without proper parsing

For CentOS 6.5, use these built-in commands:

# View all entries
lastlog

# View specific user (e.g., root)
lastlog -u root

# Show only logins more recent than N days
lastlog -t 7   # logins within the last 7 days

For programmatic access, this Perl script reads lastlog directly:

#!/usr/bin/perl
use strict;
use warnings;

my $lastlog = '/var/log/lastlog';
open(my $fh, '<:raw', $lastlog) or die "Cannot open $lastlog: $!";

my $uid = 0;
while (read($fh, my $record, 292) == 292) {  # record: int32 ll_time + char[32] ll_line + char[256] ll_host
    my ($time, $line, $host) = unpack('l A32 A256', $record);
    printf "UID: %d, Time: %s, Line: %s, Host: %s\n",
        $uid, scalar(localtime($time)), $line, $host if $time;
    $uid++;
}
close($fh);

For partial reads of huge files:

# View first 100 records (29.2KB)
dd if=/var/log/lastlog bs=292 count=100 | hexdump -C

# View last 100 records (works even for 30GB)
tail -c 29200 /var/log/lastlog | hexdump -C
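The same tail-style read can be done programmatically by seeking relative to the end of the file, so only the last few kilobytes of a 30GB file are ever touched. A Python sketch, again assuming the 292-byte record layout:

```python
import os
import struct

RECORD_SIZE = 292  # assumed layout: int32 time + char[32] line + char[256] host

def last_records(path, n=100):
    """Yield (uid, time, line, host) for the last n records without scanning the file."""
    nrecords = os.path.getsize(path) // RECORD_SIZE
    start = max(nrecords - n, 0)
    with open(path, 'rb') as f:
        f.seek(start * RECORD_SIZE)  # jump straight to the tail
        for uid in range(start, nrecords):
            rec = f.read(RECORD_SIZE)
            if len(rec) < RECORD_SIZE:
                break
            t, line, host = struct.unpack('i32s256s', rec)
            if t:  # skip users who never logged in (all-zero records)
                yield (uid, t,
                       line.rstrip(b'\x00').decode(errors='replace'),
                       host.rstrip(b'\x00').decode(errors='replace'))

if os.path.exists('/var/log/lastlog'):
    for entry in last_records('/var/log/lastlog'):
        print(entry)
```

Unlike hexdump output, this also recovers the UID of each record, which the raw bytes alone do not show.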

Remember:

  • The file may contain sensitive login information
  • On many systems the file is world-readable; hardened setups may restrict it to root
  • Root access guarantees full inspection regardless of permissions
  • Consider ausearch against the audit logs as an alternative source of login history

When dealing with system logs on CentOS, the /var/log/lastlog file presents unique challenges due to its binary format and potentially massive apparent size (30GB in this case). Traditional text editors like Vim fail to display its contents properly because:

  • It's a binary database format, not plain text
  • The structure follows fixed-length records
  • It contains timestamp data in binary form

Instead of text editors, use these specialized tools:

# Method 1: Using lastlog command
lastlog

# Method 2: Using hexdump for a raw binary view
hexdump -C /var/log/lastlog | head -n 50

# Method 3: For custom parsing (Python example)
import struct

with open('/var/log/lastlog', 'rb') as f:
    uid = 0
    while True:
        record = f.read(292)  # standard lastlog record size on CentOS 6
        if len(record) < 292:
            break
        # Unpack the binary data (layout may vary by platform).
        # The first field is the login timestamp; the UID is the record's index.
        ll_time, ll_line, ll_host = struct.unpack('i32s256s', record)
        if ll_time:
            print("UID: %d, Time: %d, Terminal: %s, Host: %s" % (
                uid, ll_time,
                ll_line.rstrip(b'\x00').decode(errors='replace'),
                ll_host.rstrip(b'\x00').decode(errors='replace')))
        uid += 1

For 30GB files, consider these performance optimizations:

# Use the 'lastlog' command with filters
lastlog -u 1000-2000  # Only show users with UID range 1000-2000

# Process in chunks (Bash example)
dd if=/var/log/lastlog bs=292 count=1000 | hexdump -C
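The chunked dd read above can also be done in Python: pulling many records per read() and comparing each against an all-zero template skips the empty regions quickly, which matters when most of a 30GB file is never-logged-in records. A sketch under the same 292-byte layout assumption:

```python
import struct

RECORD_SIZE = 292              # assumed: int32 time + char[32] line + char[256] host
CHUNK_RECORDS = 4096           # read ~1.2 MB per syscall instead of 292 bytes
EMPTY = b'\x00' * RECORD_SIZE  # record for a user who never logged in

def iter_logins(path):
    """Yield (uid, time) for every non-empty record, reading in large chunks."""
    uid = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(RECORD_SIZE * CHUNK_RECORDS)
            if not chunk:
                break
            usable = len(chunk) - len(chunk) % RECORD_SIZE
            for off in range(0, usable, RECORD_SIZE):
                rec = chunk[off:off + RECORD_SIZE]
                if rec != EMPTY:  # cheap bytes compare skips zero runs fast
                    yield uid, struct.unpack_from('i', rec)[0]
                uid += 1
```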

# Alternative C program for better performance
#include <stdio.h>
#include <stdint.h>

/* Layout matching glibc's struct lastlog on CentOS 6 (292 bytes, no padding) */
struct lastlog {
    int32_t ll_time;
    char ll_line[32];
    char ll_host[256];
};

int main(void) {
    FILE *fp = fopen("/var/log/lastlog", "rb");
    struct lastlog entry;
    unsigned int uid = 0;

    if (fp == NULL) {
        perror("fopen /var/log/lastlog");
        return 1;
    }
    while (fread(&entry, sizeof(entry), 1, fp) == 1) {
        if (entry.ll_time)  /* skip users who never logged in */
            printf("UID: %u, Time: %d, Line: %.32s, Host: %.256s\n",
                   uid, entry.ll_time, entry.ll_line, entry.ll_host);
        uid++;
    }
    fclose(fp);
    return 0;
}

When working with lastlog:

  • The file is typically owned by root - use sudo when needed
  • Make copies for analysis rather than working on the live file
  • Be aware lastlog may contain sensitive login information

One final note on size: lastlog is not an append-only log but a fixed-index database, and on most filesystems it is stored as a sparse file. Compare du -h /var/log/lastlog with ls -lh /var/log/lastlog: the 30GB apparent size usually occupies only a tiny fraction of that on disk and simply reflects the highest UID on the system. Do not point logrotate at it; rotating or truncating the file discards login history, and the file will regrow to the same apparent size as soon as a high-UID user logs in again.
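A quick way to check whether a file is genuinely consuming its apparent size is to compare st_size with the blocks actually allocated to it (st_blocks is counted in 512-byte units). A small Python demonstration using a sparse file we create ourselves rather than the live lastlog:

```python
import os

def apparent_vs_actual(path):
    """Return (apparent size, bytes allocated on disk); a large gap means sparse."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512  # st_blocks is in 512-byte units

# Create a demo sparse file: seek past a 1 GB hole, then write 4 bytes
with open('/tmp/sparse_demo', 'wb') as f:
    f.seek(1024 ** 3)
    f.write(b'data')

apparent, actual = apparent_vs_actual('/tmp/sparse_demo')
print(apparent, actual)  # apparent ~1 GB; actual only a few KB on most filesystems
os.remove('/tmp/sparse_demo')
```

Running the same check against /var/log/lastlog (as root) will show whether the 30GB is real data or mostly holes.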