UNIX Consulting and Expertise
Golden Apple Enterprises Ltd.

Easy Solaris log file management with logadm

Logfile management has long been the bane of sysadmins everywhere. Applications seem to scatter logfiles all over the place, and they grow at an alarming rate. We want the information in them, so we need to cycle and compress them. Previously this meant writing custom scripts to handle the logfile management and restart the application – and to add to the pain, those scripts had to be tested, deployed, and monitored.

Luckily Solaris comes with a handy utility called logadm, which the Operating Environment itself uses to manage some of the core system log files. logadm can quickly and easily handle all of our log file management needs.

Let’s look at two log files which aren’t handled by Solaris out of the box – sulog and wtmpx. Both are important, as they help us with our user access audit trail. For starters, we want to keep two old copies of each, and we want to cycle them every two weeks.

Our logadm syntax looks like this:

/usr/sbin/logadm -C 2 -p 2w -c -w <full_path_to_logfile>
  • -C number of copies to keep
  • -p time between each log cycle (2 weeks)
  • -c copy and then truncate (so we don’t need to restart a service)
  • -w writes an entry into /etc/logadm.conf for this log file

So executing the following:

bash-3.00# logadm -C 2 -p 2w -c -w /var/adm/sulog 
bash-3.00# logadm -C 2 -p 2w -c -w /var/adm/wtmpx

will result in the following two lines being appended to /etc/logadm.conf:

/var/adm/wtmpx -C 2 -c -p 2w
/var/adm/sulog -C 2 -c -p 2w

Getting logadm to add an entry to /etc/logadm.conf means that this won’t be a one-off thing – each time logadm executes from cron, it will read the entries from this file. Each entry is checked to see if the log file’s size or age means it’s due for rotation.
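On a stock Solaris 10 box, the cron job that drives all this lives in root’s crontab – something like the following (the exact minute and hour vary by release, so check your own with crontab -l):

```
# root's crontab: the daily logadm sweep that processes /etc/logadm.conf
10 3 * * * /usr/sbin/logadm
```

If you don’t want to wait for cron, logadm also accepts -p now on the command line to force an immediate rotation of a given entry – handy for testing a new /etc/logadm.conf line straight away.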

wtmpx is a binary file that’s read by last – rather than having to restart the utmpd daemon, it’s easier to just copy and truncate the file. By default logadm names the rotated copies wtmpx.0, wtmpx.1, and so on, and last can still read them – just use the syntax

last -f <wtmpx_file>

It’s important to properly cycle wtmpx, rather than just deleting or truncating it, because it provides a helpful audit trail of users who accessed the system – showing when they logged in, and from where.

This is great if we just want to cycle logs around – but what if we want to compress them as well? Apache is the poster child for log generation – it spits out copious amounts of data, and you want to keep it all for analysis, but it’s a pain to manage.

On my test machine I’ve deployed Apache via Blastwave, so it’s logging to /opt/csw/apache2/var/log

With SSL enabled there are five log files that I’m interested in:

bash-3.00# ls -l
total 31536
-rw-r--r--   1 root     other    5400214 Oct 30 16:26 access_log
-rw-r--r--   1 root     other    8716843 Oct 29 23:18 error_log
-rw-r--r--   1 root     root      760298 Oct 30 16:26 ssl_access_log
-rw-r--r--   1 root     root      268873 Oct 30 16:26 ssl_error_log
-rw-r--r--   1 root     root      934541 Oct 30 16:26 ssl_request_log

We can just adapt the logadm command we used for wtmpx – but what about compression? Helpfully, logadm will automatically compress cycled log files using gzip if we pass it the -z flag. -z takes a count, which tells logadm how many of the most recent cycled logfiles to leave uncompressed.
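As an aside, if you’d rather keep the newest cycled copy uncompressed for quick grepping, a count of 1 does exactly that – a hypothetical /etc/logadm.conf entry would look like:

```
/opt/csw/apache2/var/log/access_log -C 10 -c -s 10m -z 1
```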

In this case, however, we want everything except the current in-flight log file compressed (a count of 0), and we want to cycle when the logfile reaches 10MB in size:

bash-3.00# logadm -C 10 -s 10m -c -z 0 -w /opt/csw/apache2/var/log/access_log 

logadm drops an entry into /etc/logadm.conf for us:

/opt/csw/apache2/var/log/access_log -C 10 -c -s 10m -z 0

Add an entry for each of the five log files, and we end up with this in /etc/logadm.conf:

/opt/csw/apache2/var/log/access_log -C 10 -c -s 10m -z 0
/opt/csw/apache2/var/log/error_log -C 10 -c -s 10m -z 0
/opt/csw/apache2/var/log/ssl_access_log -C 10 -c -s 10m -z 0
/opt/csw/apache2/var/log/ssl_error_log -C 10 -c -s 10m -z 0
/opt/csw/apache2/var/log/ssl_request_log -C 10 -c -s 10m -z 0
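Reading the compressed copies back doesn’t require unzipping them on disk. Here’s a quick sketch using a stand-in file – the path and contents are made up for the demo; on the real box you’d point at something like access_log.0.gz under /opt/csw/apache2/var/log:

```shell
# Stand-in for a cycled, compressed access log -- logadm with -z 0
# leaves copies named access_log.0.gz, access_log.1.gz, and so on.
log=/tmp/access_log.0
printf 'GET / 200\nGET /favicon.ico 404\n' > "$log"
gzip -f "$log"          # produces /tmp/access_log.0.gz

# Stream the contents without decompressing on disk.
# Solaris also ships gzcat, which does the same job.
gzip -dc "${log}.gz"
```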

Using a combination of log cycling and compression, logadm can handle pretty much any application’s log files for us. And because we use copy and truncate, we aren’t forced to restart each application when we cycle its logs – a huge amount of control over our log files, without having to write and maintain shell scripts.
