This utility is provided as part of the package to make it easy to parse messages from system logs such as the /var/log/messages file, or any other key logs that may contain messages for which you want to generate or cancel alerts.
It reads messages from stdin, compares them against the message rules you have defined in your rule configuration file (covered in the next chapter), and if appropriate writes a message to stdout indicating whether an alert is to be raised or cancelled.
The output written for scripts to process will be one of...
RAISE-ALERT key-value "message-text"
CANCEL-ALERT key-value "message-text"
USERFUNC-1 key-value "message-text"
USERFUNC-2 key-value "message-text"
USERFUNC-3 key-value "message-text"
USERFUNC-4 key-value "message-text"
USERFUNC-5 key-value "message-text"

16Jan2004 Update: there will now be a fourth output parameter written if you use the new USEHOSTNAME= option discussed later.
The output is structured this way because the script that processes it may want to invoke other functions rather than just forward an alert to my alert collection system; you may want to forward it to an in-house alert tool, for example.
The USERFUNC user functions are provided to allow you to do some processing other than just raising or cancelling alerts, for example to record key security messages in a separate history file or perhaps to provide some automation at this point.
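As a concrete illustration, a minimal processing loop could dispatch on the first word of each output line. This is only a sketch, not the supplied sample script: the echo commands stand in for whatever actions your site wants, and the function name is my own.

```shell
#!/bin/sh
# Minimal sketch of a processing loop for log_parser output lines.
# The echo commands are placeholders for site-specific actions.
process_parser_output() {
  while read action key text; do
    case "$action" in
      RAISE-ALERT)  echo "raise: $key $text" ;;           # e.g. forward an alert
      CANCEL-ALERT) echo "cancel: $key $text" ;;          # e.g. clear the alert
      USERFUNC-*)   echo "custom: $action $key $text" ;;  # site-customised action
      *)            echo "unrecognised: $action" >&2 ;;
    esac
  done
}
```

It would be fed by a pipeline such as `tail -f /var/log/messages | ./log_parser rules.cfg | process_parser_output`, exactly as the real sample script is used later in this section.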
This application as shipped will handle up to fifty (50) non-comment configuration entries to define messages. If you require more, adjust the constant in the supplied C source for this tool and recompile it.
First and most importantly, use one or more SPACES to separate the fields. Do not use TABs.
The values do not have to be in fixed columns, but must be separated by at least one space. If any of the data entered is too large it is truncated.
The configuration file is expected to have four (4) fields
on each non-comment input line. (Comment lines begin
with #.)
16Jan2004 Update: A non-standard field USEHOSTNAME=
has been provided; it is discussed later.
The four standard fields are the action code, the alert key, the offset, and the text to scan for, as shown in the sample configuration file later in this section.
16Jan2004 Update
A new non-standard field entry is now permitted in the
configuration file, and is supported by the log_parser program
and the supplied sample processing scripts.
This is USEHOSTNAME=nn, where nn
is the field number the hostname is to be extracted from.
The nn value must be in the range 1 through 15.
If this option is used, the hostname extracted from the message
will be written as an additional trailing value in each output
line from the log_parser program.
If the option is not used you will not see the extra, and
for most sites superfluous, parameter.
It was added for my needs: I now use a central loghost and I want
to know the hostname of the server that created the message rather
than just the hostname of the server the log tailing task is
running on.
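To make the field numbering concrete: on a typical syslog line relayed to a central loghost, the originating hostname is the fourth space-separated field, which is why the sample configuration below uses USEHOSTNAME=4. The awk one-liner here only illustrates the numbering; the actual extraction is done inside log_parser and may differ in detail, and the example log line is invented.

```shell
# Illustration of syslog field numbering (assuming default
# space-separated fields). Field 4 here is the originating hostname.
line='Jan 16 10:00:01 loghost01 sshd[1234]: session opened for user root'
hostname=$(printf '%s\n' "$line" | awk '{ print $4 }')
echo "$hostname"    # prints loghost01
```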
This is a small sample of a log parser tool configuration file (the top portion of the sample provided).
#
# Comment out USEHOSTNAME if you do not need this.
# Provide at the end of the output line the hostname from field 4
USEHOSTNAME=4
#
# Start
# Action  Alert Key       Offset  Text to scan for
#
# Samba browser contention errors
A SAMBA 35 Forcing election.
A SAMBA 35 has stopped being a local master
X SAMBA 35 now a local master browser
#
# Where entries may be similar put the most explicit at the top as the
# rule parsing stops at the first match (ie: the logname= one would
# if placed first prevent the logname=mark one from being reached).
# Linux (pam) su errors.
A UNIX_LGN_MARK 43 authentication failure; logname=mark
X UNIX_LGN_MARK 43 session opened for user root by mark
# These use user function 1 rather than an A:X pair. The processing
# script the events are passed to will be expected to take user customised
# actions for the user function. This allows you to create a security
# audit etc.
1 UNIX_LOGON 43 authentication failure; logname=
1 UNIX_LOGON 43 session opened for user
Getting this log parser tool working takes a bit of piping, so this example of running it to tail /var/log/messages is provided to hopefully explain it a little better. The command can be typed on one line, but I have broken it into a three-line continuation to better explain what's happening below.
tail -f /var/log/messages \
    | ./log_parser samples/log_parser.sample_config \
    | samples/log_parser_processor.sh
What happens in the sample is that tail continuously feeds new lines from /var/log/messages into log_parser, log_parser matches each line against the rules in samples/log_parser.sample_config, and any output from rule matches is piped to samples/log_parser_processor.sh to act on.
I would suggest you review the contents of the file samples/log_parser_processor.sh to see how it works, and perhaps customise it to your site's requirements. Basically it just loops reading standard input and feeds each message received through a case statement, looking for the keyword put out by the log parser to decide what action to take with the key and message text also provided on the log parser output line.
Note: to stop the log parser tasks simply stop the tail task;
the others will detect EOF and stop themselves.
You may customise
startup/shutdown scripts for log parsing tasks based on the sample init.d
script provided (discussed below) which records the pid of the tail task
started when used with "start" and stops the tail task when used with "stop".
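A hypothetical sketch of such a control function follows. The pid-file path and the subshell technique for capturing the tail pid are my assumptions for illustration, not necessarily what the supplied init.d sample does.

```shell
#!/bin/sh
# Sketch of a start/stop control function: records the pid of the tail
# task on "start", kills it on "stop", and lets the downstream tasks
# exit on EOF. Pid-file and rule-file paths are assumptions.
PIDFILE="${PIDFILE:-/tmp/log_parser_tail.pid}"

log_control() {
  case "$1" in
    start)
      # Background tail inside a subshell so its pid can be written to
      # the pid file; the subshell's stdout feeds the rest of the pipe.
      ( tail -f /var/log/messages & echo $! > "$PIDFILE"; wait ) \
          | ./log_parser samples/log_parser.sample_config \
          | samples/log_parser_processor.sh &
      ;;
    stop)
      # Killing tail makes log_parser and the processor see EOF and exit.
      [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
      ;;
    *)
      echo "usage: log_control start|stop"
      return 1
      ;;
  esac
}
```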
The demonstration script for this utility is designed to work with the scripts provided with my alert toolkit, so may not be available if you have not installed the full application.
The sample script provided to stop, start, and do useful
work against /var/log/messages is
samples/monitor_var_log_control.sh [ start | stop ].
This script issues a background command to start tailing the log /var/log/messages and pipes the output to log_parser, which will process the messages based on the default rule file provided. Any output from rule matches is piped to the script log_parser_processor.sh, which will raise or cancel alerts as appropriate.
This script should also be used to stop the task it starts, as it records the pid of the tail task and so can stop everything cleanly.
requires:
  log_parser
  samples/log_parser.sample_config
  samples/log_parser_processor.sh
  scripts/raise_alert.sh
  scripts/end_alert.sh
Important Note: As most systems roll over the messages file on a regular basis, you will need to have the job that rolls the log files use this script to do a stop/start of the log parsing tasks when the logs are rolled, so that the log parsing tasks start reading from the new file. If you do not do this, the tail task will quite happily sit at EOF of the old file forever, as tail does not care about or detect whether a file is still being written to.
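If your system rolls logs with logrotate, one way to arrange this stop/start is via prerotate/postrotate hooks. This is a sketch only; the install path shown is an assumption, so adjust it to wherever you placed the toolkit.

```
# /etc/logrotate.d/messages fragment (install path below is an assumption)
/var/log/messages {
    weekly
    rotate 4
    prerotate
        /opt/alert_toolkit/samples/monitor_var_log_control.sh stop
    endscript
    postrotate
        /opt/alert_toolkit/samples/monitor_var_log_control.sh start
    endscript
}
```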