This documentation is for
Version 0.19 of the processing script
Version 0.19 of the data collection script
Last updated 05 October 2021

Index

  1. Security Configuration Toolkit Overview
  2. Performing a server data capture
  3. Processing the data
  4. Viewing the security report output
  5. Appendix A - Reporting Override Custom File Syntax
  6. Appendix B - Customisation File settings available
  7. Appendix C - Quick Start Guide (if you don't read manuals)
  8. Appendix D - Known Complications with using this toolkit


[Index] [Next Section]

1. Security Configuration Toolkit Overview

1.1 Requirements, packages needed

1.2 Requirements, environment needed

1.3 Toolkit Overview

This toolkit does basic security checks of a Linux server. It comprises two parts: a data collection script to run on each server, and a processing script to process the collected files on a central reporting server. The data collection script must be run as root on the servers to ensure it has access to all the files it checks. The processing script should not be run as root.

As written it will only work on Linux servers. I have no plans to make the code generic. I use it for checking Fedora and CentOS7 servers, although I have also run it against a 'kali' server, which I believe is Debian based.

It is designed for processing results from multiple servers. I wrote it because I have multiple servers and they were all starting to drift from their ideal configurations. Using this I can keep them all in a reasonably similar secure state (which in many cases requires chmod'ing packages installed from repositories, as some of them ship with terribly insecure file permissions).
I do not believe products such as puppet/chef/cfengine are suitable for keeping file systems secure: while they can manage the products/classes they know about, they cannot manage all the wild things users can get up to outside those parameters. This toolkit is designed to capture everything, especially the applications and badly secured files users may put in system directories outside the managed infrastructure.

The reports produced are in HTML format. A master index summarises violations across all servers, and provides links down to the details for each server.

It checks for

The checklist above is vastly simplified. For example, the ftp allowed-users check not only warns which users are allowed to use ftp but also reports on any denied ftp users that are no longer on the system so you can clean up your ftpusers file, and the user checks cross-reference the passwd file against the shadow file and report on inconsistencies that need pwconv to fix.

The easiest way to see what it does is to run it.

It allows customisation of configurable rules on a global or per-server basis.

Basic file structure of the toolkit is
  ../bin
  ../custom
  ../doc
  ../results  

Additional note: as shipped the collection script tars up /etc and takes rpm package snapshots. That's because this toolkit is becoming part of a larger global configuration toolkit in which security is but a small part. Feel free to comment out the tar of /etc and the recording of rpm packages, as they are not processed by the security reporter components.
The collection script also captures hardware details, if the packages required to do so are installed on each server; these details are available to view from a link on each server's report overview page.

A critical note for the security conscious: the data collection script captures your /etc/passwd and /etc/shadow contents, so make sure the processing server you copy the data to is in a secure zone.

In normal operation the processing script rebuilds the entire results directories and their contents from scratch, processing every server for which collected data files exist.
It is possible to run processing against a single server (or several additional servers) and merge the new results into the full report structure, but there are considerations to be aware of when doing so, discussed in a later section.


[Previous Section] [Index] [Next Section]

2. Performing a server data capture

Obviously there needs to be data to process before a report can be produced. So how do we obtain it?

Capturing data for a server

Obtaining the data to be processed is simple...

  1. copy the bin/collect_server_details.sh script to each server
  2. as the root user (to gain access to files being recorded), run it
  3. copy the *.txt and *.tar files produced in your current directory to your reporting server's incoming data directory.
You should try to automate the data capture so it runs at regular intervals. Then on the reporting server you can set up a job to pull the data from each remote server at regular intervals and produce a refreshed report.
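The pull step can be sketched as a small shell job on the reporting server. Everything below is an assumption about your layout: the directory names and filenames are hypothetical, and a plain 'cp' from a local fixture stands in for the scp/rsync you would use against real servers.

```shell
#!/bin/sh
# Simulate pulling collection files into the reporting server's
# incoming directory. REMOTE stands in for one monitored server;
# in real use you would scp/rsync from each server instead of cp.
INCOMING=$(mktemp -d)    # hypothetical incoming data directory
REMOTE=$(mktemp -d)      # stands in for a remote server's output dir

# Fake the collector's output (hypothetical filenames).
touch "$REMOTE/phoenix.txt" "$REMOTE/phoenix.tar"

for f in "$REMOTE"/*.txt "$REMOTE"/*.tar; do
    cp "$f" "$INCOMING"/
done

COPIED=$(ls "$INCOMING" | wc -l)
echo "copied $COPIED collection files into $INCOMING"
```

In practice you would loop over a list of server names and fetch from each in turn, then kick off the processing run once all files have arrived.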

Limiting the data captured

Why would you want to? Well, it collects a lot of data, which results in a long processing time. I do a full scan only monthly, with a smaller scan at more regular intervals.

By default the script will record all file permissions under system (and selected user) directories. This can result in well over 50,000 file permissions to be checked (I believe the top I hit was around 170,000 files), and result in an extremely long processing time per server.

This should be allowed at least the first time you run it. I was surprised at how many bad (orphaned) files it found on my server after I cleaned up the passwd file.

After the first run, you may for future runs want to descend only N levels of directories in the permission check searches. To do this provide the parameter '--scanlevel=' to select the number of directory levels that will be descended during the file permission search. It is important to note that overriding the scanlevel only affects file permission checks; checks for suid files will always traverse full file paths, as these must be reported on.

Normal syntax          : bin/collect_server_details.sh
Limiting capture syntax: bin/collect_server_details.sh --scanlevel=n
Example                : bin/collect_server_details.sh --scanlevel=5

All collection parameters explained

These are all the parameters available to the collection script.

bin/collect_server_details.sh [--scanlevel=] [--backup-etc=yes|no] [--record-packages=yes|no] [--hwlist=yes|no] [--webpathlist=filename]

[Previous Section] [Index] [Next Section]

3. Processing the data

Running the reporting script

The four required steps to produce a security report are

  1. Collecting the data to be processed, discussed above
  2. Processing the data
  3. Automating report archiving
  4. Viewing the reports produced

Often you will not want to re-process all servers just to test changes made to one, and for performance reasons you are likely to want to process only a few servers a day; so the following processing options are also available.

Collecting the data to be processed

This was covered in the earlier section. It is mentioned again here to ensure you understand that all servers should have had the capture script run on them, and the data transferred to the reporting server before processing of any data is performed.

You should have run the data capture script discussed earlier on each server you intend to produce a report for. The results of the data collection from all servers need to be collected together in one directory on your reporting server.

This is because the script is designed to produce a consolidated report of all the servers from a single processing run, providing a global summary page for all servers and drill-down to each server for details of the alerts or warnings produced; so all data collected must be processed on a single reporting server.

Processing the data

To process the data and produce the security reports you just run the script bin/process_server_details.sh and provide it with the directory name of your server data files. The syntax of the process_server_details.sh script is one of the below.


process_server_details.sh --datadir=directory [--archivedir=directory] [--oneserver=servername]
process_server_details.sh --datadir=directory [--archivedir=directory] [--checkchanged=list|process]
process_server_details.sh --datadir=directory --clearlock
process_server_details.sh --datadir=directory --indexonly [--indexkernel=yes|no]

Example: assuming your data is in the rawdatafiles folder under the application directory, and you are in the application directory...
bin/process_server_details.sh --datadir=rawdatafiles
When completed, the results/index.html file will have an overall scoreboard for each server and links to each server's detailed reports.

I M P O R T A N T :
Don't try to process files in a directory whose path contains the underscore '_' character. The script will not work. This is caused by a lot of character translation fiddling; I don't intend to fix it in the short term, so don't store files to be processed in directories containing an underscore character.

Automating report archiving

This was discussed in the previous section. If you provide a parameter option --archivedir=dirname to process_server_details.sh then an archive file of the current report processing is automatically created in that directory for you at the end of the processing run.

Processing only a single server

If you have many servers, and only wish to re-run the scan on one server's data after you have made changes, that is now possible, but there are special considerations to take into account.

The parameter option --oneserver=servername can be used to specify a specific server to be re-processed but it has the following conditions due to the main index needing to be rebuilt.

As a result of these conditions, a request to re-process a single server after upgrading the processing script will result in all other servers needing to be reprocessed as well; however, if you have upgraded the processing script you should have done this anyway.

However if you are re-processing a single server under the same version of the processing script as all other servers were processed under then only the single server specified will be reprocessed.

Processing only updated servers

The ability exists (since version 0.08) to request processing only for changed/updated server datafiles.

If you have many servers you may regularly copy new collected datafiles to the processing server on an ad-hoc or scheduled basis, but not want the delay of a full processing run, or of a lot of single-server processing runs, to process the updated files.

Since version 0.08 it is possible to use the '--checkchanged=list' and '--checkchanged=process' options to list or process all servers whose collected datafiles identify themselves as being snapshotted after the server was last processed.

Unlike the --oneserver option this option assumes all other servers were processed using the current version of the processing script and will not force re-processing of servers with obsolete results.
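Put together, a staggered refresh can be driven from cron on the reporting server. This is only a sketch: the install path, data directory and log file below are assumptions, not toolkit defaults.

```
# Hypothetical crontab fragment on the reporting server: nightly,
# process only the servers whose datafiles have changed.
# m  h  dom mon dow  command
30 2   *   *   *     cd /opt/sectoolkit && bin/process_server_details.sh --datadir=rawdatafiles --checkchanged=process >>cron.log 2>&1
```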

Overriding/Customising the default reporting

The default tight security checks may not suit your environment. To allow for this, there is the ability to override the default checks performed by the processing script. This is done using customisation files, so you do not need to go to the effort of modifying the scripts themselves.

The overrides may be set at a global site level or at an individual server level depending upon your requirements. There are two types of override files

There are only limited things that can be overridden, as your system is either secure or not. The overrides allowed are for things that do not necessarily make the system insecure: for example flagging an application listening on a tcp-ip port as OK (which would be a specific server override), or flagging a shared home directory as a 'shared' directory. An example of the latter is halt and bin both having a home directory of bin: ownership of the directory should then be root rather than halt or bin, and it should not be 700 or nobody could get to it.

I M P O R T A N T :
If you create a server specific customisation file be aware that it replaces any site wide file (replaces, not merged with). As such if you create a server specific custom file ensure you copy any values you put in the global site custom file into the server specific ones if required.

As custom files are a topic into themselves the use of, and values available in, the custom files are covered in the Appendix A section.

Processing Performance tips

As my list of servers has grown I have needed to find ways of processing servers without having to process all servers in each processing run.

The --checkchanged=process|list option was introduced to allow staggered processing (as long as the processing version does not change), so you can stagger collection of datafiles from servers and process only a few each day rather than re-process all servers every day.

The --indexonly=yes option was originally added to permit processing to be carried out on multiple servers, with the results (and raw datafiles) then copied to a single master results server where the --indexonly=yes option can merge all the results into a consolidated view.


[Previous Section] [Index] [Next Section]

4. Viewing the security report output

The reports are produced in HTML format. They are in the results directory under the application directory.

Use a web browser to open the index.html file in that directory. The initial index file contains a scorecard summary of total alerts and warnings for every server processed, so you can quickly identify problems, plus links to each individual server's more detailed scorecards and reports, so you can see exactly what caused the alerts or warnings for each server.


[Previous Section] [Index] [Next Section]

5. Appendix A - Reporting Override Custom File Syntax

USING CUSTOM FILES

1. Why Custom Files

There will be occasions where the defaults used by the processing script are not adequate or realistic for one or more of the servers being processed. To allow for that, rather than you having to edit the script, you are permitted to create customised override files.

There are two types of override files

There are only limited things that can be overridden, as your system is either secure or not. The overrides allowed are for things that do not necessarily make the system insecure: for example flagging an application listening on a tcp-ip port as OK (which would be a specific server override), or flagging a shared home directory as a 'shared' directory. An example of the latter is halt and bin both having a home directory of bin: ownership of the directory should then be root rather than halt or bin, and it should not be 700 or nobody could get to it.

What can be overridden using customisation files is covered in this appendix.

I M P O R T A N T :
A server specific custom file is not merged with the default ALL.custom file, it replaces it. As such if you create a server specific custom file ensure you copy any values you put in the global custom file into the server specific ones if required.

I M P O R T A N T :
The custom files must be in the directory 'custom' in the root of this toolkit. As shipped this is where the supplied ALL.custom file lives so you have an example tree already setup. The expected tree structure is...

   /something/something/bin
   /something/something/custom
   /something/something/doc
   /something/something/results
...all script references to custom files are relative to the bin directory (../custom/*) so please keep this directory structure.

2. Network Port Overrides

TCP-IP ports are a gateway to your system; you should know what is listening on them. If a port is not defined in the custom file it will be considered an unexpected open port and treated as critical.

In the reports, even if a TCP-IP port is allowed you will still get a warning by default if the port is listening on all interfaces; you can only get an OK when it is more securely bound to a specific interface or address.

Overrides are available for tcp, tcp6, udp, udp6, raw and raw6 ports. An override must be correct for the protocol type; an allow line for a tcp6 port will not affect alerting behaviour on any other port type. This allows for the unlikely case where one application may use tcp port NNN and another application use tcp6 port NNN.

The syntax of a network port override (where <version> is either 4 or 6) is...
TCP_PORTV<version>_ALLOWED=:portnum:description[:WILD]
UDP_PORTV<version>_ALLOWED=:portnum:description[:WILD]
RAW_PORTV<version>_ALLOWED=:portnum:description[:WILD]

For example...
TCP_PORTV4_ALLOWED=:22:ssh
TCP_PORTV6_ALLOWED=:9090:cockpit

If the optional :WILD is appended after the description it will be considered OK for the specified port to be listening on all interfaces (on 0.0.0.0 or :::). This should be used only if your site has a specific reason for an application to listen on all interfaces; where possible you should configure your application to listen only on the interfaces it needs.

3. Network Overrides by process

Unfortunately some applications use randomly assigned network ports and as such cannot be configured using the explicit port parameters above. An example of such a process is rpcbind.

These parameters allow you to downgrade such alerts from critical by defining the processes that are permitted to use random ports.

Using these parameters will still result in warnings if the port is listening on all interfaces, as that is insecure; there is no way to permit listening on all interfaces using this method. If the port is listening on specific interfaces no alerts will be raised, but the report will show alerts suppressed this way in a different colour to indicate they are still considered insecure.

The value passed in this parameter must be the full details of the running process as reported by 'ps -ef', as we want as exact a match as possible.

The syntax of a network port override by process name (where <version> is either 4 or 6) is...
NETWORK_TCPV<version>_PROCESS_ALLOW=/program/name and parameters
NETWORK_UDPV<version>_PROCESS_ALLOW=/program/name and parameters
NETWORK_RAWV<version>_PROCESS_ALLOW=/program/name and parameters

Examples:
NETWORK_UDPV4_PROCESS_ALLOW=avahi-daemon: running [phoenix.local]
NETWORK_UDPV6_PROCESS_ALLOW=avahi-daemon: running [phoenix.local]
NETWORK_UDPV4_PROCESS_ALLOW=/sbin/rpcbind -w
NETWORK_UDPV6_PROCESS_ALLOW=/sbin/rpcbind -w
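Getting the value exactly right is easier if you copy it straight from 'ps'. This hypothetical helper (not part of the toolkit) prints a process's command line in the required form; it uses the current shell's own pid only so the example is self-contained, so substitute the pid of the daemon you want to allow.

```shell
#!/bin/sh
# Print a process's command line exactly as 'ps -ef' shows it, ready
# to paste into a NETWORK_*_PROCESS_ALLOW line. $$ (this shell) is
# used here purely for demonstration.
PID=$$
PROCLINE=$(ps -o args= -p "$PID")
echo "NETWORK_UDPV4_PROCESS_ALLOW=$PROCLINE"
```

Copying the value this way avoids the subtle whitespace or argument-order mismatches that hand-typed entries tend to have.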

4. Home Directory Ownership Overrides

This override is primarily for home directory checks. As you are aware, many of the system users share a home directory, and by default the report alerts on any home directory not owned by the correct user.
This override is only for system directories; it enables you to flag as OK a directory that is owned by root rather than the expected owner.

The syntax of a directory ownership override is...
ALLOW_OWNER_ROOT=dirname

For example...
ALLOW_OWNER_ROOT=bin
(You cannot provide any leading / or leading directory path; the dirname is the basename of the directory being checked. Be careful in its use, as it will treat /bin and /usr/tmp/bin, for example, with the same rule and allow the root owner (which is probably OK for system directories).)

Note: home directories must be owned by the user defined in the /etc/passwd file as having the home directory, or owned by root in the case of shared directories (as defined by using this override). Anything else is a security concern so cannot be overridden.

5. Home Directory Permission Overrides

This is intended primarily for home directory checks. Any home directory not secured 700 or tighter (d***------) is a security risk, as home directories should be secure.

5.1 General System Directory Overrides

System home directories however have a requirement that other users can get into them, to run programs and access configuration files.

To allow for this during the checks, it is possible to flag a user home directory as a system directory, in which case any of the permissions drwxr-xr-x, drwxr-x--x, drwx--x--x, dr-xr-xr-x or dr-xr-x--- will be acceptable for it, provided it is owned by a system user.

To request a directory be treated as a system directory the format of the custom file entry is...
ALLOW_DIRPERM_SYSTEM=dirname

For example...
ALLOW_DIRPERM_SYSTEM=bin
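The acceptance rule above can be sketched as a small shell function. This is assumed logic written to mirror the documented rule, not the toolkit's actual code.

```shell
#!/bin/sh
# Sketch of the documented home directory permission rule:
# 700-or-tighter always passes; the relaxed system patterns pass
# only when the directory is flagged via ALLOW_DIRPERM_SYSTEM.
check_dirperm() {
    perms=$1     # e.g. drwxr-xr-x, as shown by 'ls -ld'
    system=$2    # "yes" if the directory is flagged as a system dir
    case "$perms" in
        d???------) echo OK; return ;;          # 700 or tighter
    esac
    if [ "$system" = "yes" ]; then
        case "$perms" in
            drwxr-xr-x|drwxr-x--x|drwx--x--x|dr-xr-xr-x|dr-xr-x---)
                echo OK; return ;;
        esac
    fi
    echo ALERT
}

check_dirperm drwx------ no    # a private home directory: OK
check_dirperm drwxr-xr-x no    # too loose for a normal user: ALERT
check_dirperm drwxr-xr-x yes   # acceptable for a flagged system dir: OK
```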

Occasionally you may install software packages under a different userid to allow them to run as a non-root userid, in which case all their files should be owned by that non-root user. To avoid those being reported as owned by a non-system user, you can define additional system file owners in the custom file for the server. For example, to allow jetty to own system files...
ADD_SYSTEM_FILE_OWNER=jetty

5.2 Explicit Directory Overrides

There are some directories (very few) that just cannot fit into the general system category. I have only found one (mail, which has the setgid bit set for the mail group). As a general rule that's OK for a directory, so this override option exists specifically for such directories.

This override should be used with caution as it is used as a catch-all of last resort for directories not meeting all other rules, and is done by directory name (not full path) so will match all directories of the name.

To force a directory override to use an explicit permission value (which will only be checked if all other checks fail) use the custom file entry...
ALLOW_DIRPERM_EXPLICIT=dirname fullperms

For example...
ALLOW_DIRPERM_EXPLICIT=mail drwxrws--x

6. System File Checks

6.1 Forcing a system file into OK state

There shouldn't be any files you need to do this for, but unfortunately there will always be exceptions; for example /usr/games/Maelstrom/Maelstrom-Scores on my development server needs world write access, as I don't want users to have to be root to run a game (yes, I allow games). Another seems to be /var/spool/at/.SEQ, although this only appears on one of my servers (I don't use at, but obviously have at some point).

To stop the report constantly throwing up alerts for badly secured system files you know are badly secured but wish to keep that way, you can add an entry to the custom file to force the file to be treated as OK.

To do so use the FORCE_PERM_OK entry with the filename (including the full path), a colon, and the expected permissions as the value. The game score file mentioned above is used in this example...
FORCE_PERM_OK=/usr/games/Maelstrom/Maelstrom-Scores:-rw-rw-rw-

And for bad owners I have also added a FORCE_OWNER_OK tag. With my upgrade from FC3 to FC6 a single system file was owned by vcsa rather than a system user; for that one file I was not willing to add vcsa to the system file owner list, so I created this additional tag. (Alas, it slows the checking down some more, as every badly owned file now triggers a check for the override. Actually, it doesn't slow things much on my system, as I only have that one file; time to clean up yours :-).)
FORCE_OWNER_OK=/usr/libexec/mc/cons.saver:vcsa

6.2 Managing "sloppy" /var checking

As the /var filesystem can be, and often is, used by every user (/var/tmp anyway), you will often find many files under /var secured incorrectly.

It is possible to handle these in the report with the configuration file values below (note: WARN is the default).


  ALLOW_SLOPPY_VAR=OK    
  ALLOW_SLOPPY_VAR=WARN  
  ALLOW_SLOPPY_VAR=NO    

The actions taken are (which may be overridden for some files if other options affecting /var are used)
OK - report only a summary line of the total count of badly owned and badly secured files found
WARN - report the details of each insecure file found, report them as warnings
NO - report the details of each insecure file found, report them as alerts

By default the setting is WARN so you are aware of all the issues but don't have a horrendously high critical alert total.

It should also be noted that the following parameter can also be used to suppress many of the alerts.

6.3 Permitting files under /var to be group writeable

The default umask on most systems results in new files being writeable by owner and group. This is normally not an issue, other than that this security checking toolkit alerts on such files under /var; files created by users such as mysql, apache, clamav and bacula can cause this toolkit to generate hundreds of alerts when it checks files under /var.

This option will consider it valid for files to be group writeable as long as


ALLOW_VAR_FILE_GROUPWRITE=YES

If any value other than YES is provided then NO is assumed.
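Before deciding whether to set ALLOW_VAR_FILE_GROUPWRITE=YES, it can help to preview how much noise it would remove. A hypothetical one-liner around 'find' does this; a throwaway directory stands in for /var so the sketch is self-contained.

```shell
#!/bin/sh
# Count group-writeable files under a tree: the class of file this
# option stops alerting on. A temp dir stands in for /var here.
SCANROOT=$(mktemp -d)
touch "$SCANROOT/loose" "$SCANROOT/tight"
chmod 664 "$SCANROOT/loose"    # group-writeable, would alert
chmod 644 "$SCANROOT/tight"    # not group-writeable

COUNT=$(find "$SCANROOT" -type f -perm -020 | wc -l)
echo "group-writeable files under $SCANROOT: $COUNT"
rm -rf "$SCANROOT"
```

Run against the real /var (as root), a large count suggests the option is worth enabling; a handful of files may be easier to just fix.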

7. Turning off warnings for manual check items

There are some checks that by default always raise a warning alert, as they require manual checking. The warning alert is raised to highlight that a manual check must be performed to ensure that everything is OK.

It is possible to override two of these, the two most unlikely to be a security risk. These are the

These can be turned off by including in the custom file used for a server the entries below


  NOWARN_ON_CUSTOMFILE=YES      
  NOWARN_ON_MANUALLOGCHECK=YES  

It is probably inadvisable to use these overrides to turn the warnings off, as manual reviews should be performed; but as I personally am happy with my settings, I use them to keep the results boxes green rather than yellow.

8. Turning off alerts for known/ok SUID files

Files with the suid bits set are possible security risks; however most servers require some files with these bits set in order to function (ksu, suexec etc. need them set). To avoid these raising alerts each time you run the check, you have the option of specifying, for each server, the files that are permitted to have suid bits set.

These alerts can be turned off by including in the custom file used for a server the entry below


  SUID_ALLOW=/path/.../filename

Note that in this custom file entry the full path and filename must be provided, rather than just the filename part, for obvious reasons; we don't want users having their own versions of programs that need suid bits set.

It is also important to note that any SUID_ALLOW entries in a custom file that refer to files that no longer exist on the system being checked will generate warnings requesting you to remove obsolete entries from the custom file.
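A hypothetical one-liner (not shipped with the toolkit) can generate the initial SUID_ALLOW entries from a live system. In real use you would scan / as root; the fixture directory below stands in for that so the example is self-contained.

```shell
#!/bin/sh
# Emit ready-to-paste SUID_ALLOW lines for every suid file found.
# A throwaway fixture with one deliberately suid file is scanned
# here instead of the real filesystem.
SCANROOT=$(mktemp -d)
touch "$SCANROOT/demo_suid" "$SCANROOT/plain"
chmod 4755 "$SCANROOT/demo_suid"   # set the suid bit

ENTRIES=$(find "$SCANROOT" -xdev -type f -perm -4000 2>/dev/null \
          | sed 's/^/SUID_ALLOW=/')
echo "$ENTRIES"
rm -rf "$SCANROOT"
```

Review the generated list before pasting it into a custom file; blindly allowing every suid file found defeats the point of the check.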


[Previous Section] [Index] [Next Section]

6. Appendix B - Customisation File settings available

Sample customisation files are supplied in the tarball that contains the application. These are in the directory custom.

If you create custom files on a per-server basis the filenames must be servername.custom. There should always be an ALL.custom file to provide defaults for all servers that do not have a specific custom file, although that too is optional.

The search for customisations is

  1. if a servername.custom exists for a servername use that
  2. if no servername.custom exists for a server use the default ALL.custom if it exists
  3. if no customisation file exists the extremely tight processing defaults will be used
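The three-step search above can be sketched in shell. This is assumed logic written to match the documented order, not the toolkit's actual code; the server name and throwaway custom directory are hypothetical.

```shell
#!/bin/sh
# Sketch of the documented custom-file search order. Only the
# site-wide ALL.custom exists in this fixture, so step 2 wins.
CUSTOMDIR=$(mktemp -d)          # stands in for ../custom
SERVER=phoenix                  # hypothetical server name
touch "$CUSTOMDIR/ALL.custom"

if [ -f "$CUSTOMDIR/$SERVER.custom" ]; then
    CUSTOMFILE="$CUSTOMDIR/$SERVER.custom"   # 1. per-server file
elif [ -f "$CUSTOMDIR/ALL.custom" ]; then
    CUSTOMFILE="$CUSTOMDIR/ALL.custom"       # 2. site-wide default
else
    CUSTOMFILE=""                            # 3. tight built-in defaults
fi
echo "using: ${CUSTOMFILE:-built-in defaults}"
rm -rf "$CUSTOMDIR"
```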

It is important to note that custom files are not merged; if you have a servername.custom file, everything in ALL.custom will be ignored. This is a deliberate design decision: your ALL.custom may accommodate most of your servers, but if you have one you want more tightly secured, the last thing you would want is less strict settings from ALL.custom being merged with your servername.custom settings.

The list of parameters available is not in alphabetical order, but quite jumbled. I will get around to tidying that one day.

TCP_PORTV4_ALLOWED=:port:description[:WILD] Used to specify a TCPV4 port that is expected to be opened on the server. There must be a : before and after the port number, anything after the second : is a free form description to describe the use of the port although the description cannot contain the : character.
Examples
TCP_PORTV4_ALLOWED=:22:ssh server

Note that this will stop a port being raised as a critical issue in the report, but if the port is listening on all interfaces it will still be raised as a warning unless the optional :WILD parameter is appended to indicate it can listen on all interfaces.
The optional :WILD option must only be used if you have a site-specific reason for an application to listen on all interfaces; it is preferred that the application be configured in a more secure manner. When used, the port will not be treated as an alert, but the report will highlight it in a different colour to indicate that it is considered insecure.
TCP_PORTV6_ALLOWED=:port:description[:WILD] Used to specify a TCPV6 port that is expected to be opened on the server. There must be a : before and after the port number, anything after the second : is a free form description to describe the use of the port although the description cannot contain the : character.
Examples
TCP_PORTV6_ALLOWED=:9090:Cockpit
Note that this will stop a port being raised as a critical issue in the report, but if the port is listening on all interfaces it will still be raised as a warning unless the optional :WILD parameter is appended to indicate it can listen on all interfaces.
The optional :WILD option must only be used if you have a site-specific reason for an application to listen on all interfaces; it is preferred that the application be configured in a more secure manner. When used, the port will not be treated as an alert, but the report will highlight it in a different colour to indicate that it is considered insecure.
UDP_PORTV4_ALLOWED=:port:description[:WILD] Used to specify a UDPV4 port that is expected to be opened on the server. There must be a : before and after the port number, anything after the second : is a free form description to describe the use of the port although the description cannot contain the : character.
Examples
UDP_PORTV4_ALLOWED=:53:dnsmasq
Note that this will stop a port being raised as a critical issue in the report, but if the port is listening on all interfaces it will still be raised as a warning unless the optional :WILD parameter is appended to indicate it can listen on all interfaces.
The optional :WILD option must only be used if you have a site-specific reason for an application to listen on all interfaces; it is preferred that the application be configured in a more secure manner. When used, the port will not be treated as an alert, but the report will highlight it in a different colour to indicate that it is considered insecure.
UDP_PORTV6_ALLOWED=:port:description[:WILD] Used to specify a UDPV6 port that is expected to be opened on the server. There must be a : before and after the port number, anything after the second : is a free form description to describe the use of the port although the description cannot contain the : character.
Examples
UDP_PORTV6_ALLOWED=:53:dnsmasq
Note that this will stop a port being raised as a critical issue in the report, but if the port is listening on all interfaces it will still be raised as a warning unless the optional :WILD parameter is appended to indicate it can listen on all interfaces.
The optional :WILD option must only be used if you have a site-specific reason for an application to listen on all interfaces; it is preferred that the application be configured in a more secure manner. When used, the port will not be treated as an alert, but the report will highlight it in a different colour to indicate that it is considered insecure.
NETWORK_TCPV4_PROCESS_ALLOW=full process details as shown by 'ps' This parameter can be used in the case where a known process is permitted to use a TCPV4 port but the port cannot be explicitly defined using the above entries because the process uses random port numbers.
While this will prevent a critical alert being raised, if the process listens on a specific interface the report will still highlight it in a different colour to indicate that it is considered insecure. If the port listens on all interfaces a warning alert will still be raised.
NETWORK_TCPV6_PROCESS_ALLOW=full process details as shown by 'ps' This parameter can be used in the case where a known process is permitted to use a TCPV6 port but the port cannot be explicitly defined using the above entries because the process uses random port numbers.
If used it will not be treated as a critical alert; if it listens on a specific interface the report will highlight it in a different colour to indicate that it is considered insecure, and if the port listens on all interfaces it will still raise a warning alert.
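There are no examples in this section for the TCP process variants; a hypothetical pair of entries is shown below (the rpc.statd process details are an assumption for illustration only, the value must be the exact process details as shown by 'ps' on your own server):

```
NETWORK_TCPV4_PROCESS_ALLOW=/usr/sbin/rpc.statd
NETWORK_TCPV6_PROCESS_ALLOW=/usr/sbin/rpc.statd
```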
NETWORK_UDPV4_PROCESS_ALLOW=full process details as shown by 'ps' This parameter can be used in the case where a known process is permitted to use a UDPV4 port but the port cannot be explicitly defined using the above entries because the process uses random port numbers.
Example
NETWORK_UDPV4_PROCESS_ALLOW=/usr/bin/rpcbind -w -f
If used it will not be treated as a critical alert; if it listens on a specific interface the report will highlight it in a different colour to indicate that it is considered insecure, and if the port listens on all interfaces it will still raise a warning alert.
NETWORK_UDPV6_PROCESS_ALLOW=full process details as shown by 'ps' This parameter can be used in the case where a known process is permitted to use a UDPV6 port but the port cannot be explicitly defined using the above entries because the process uses random port numbers.
Example
NETWORK_UDPV6_PROCESS_ALLOW=/usr/bin/rpcbind -w -f
If used it will not be treated as a critical alert; if it listens on a specific interface the report will highlight it in a different colour to indicate that it is considered insecure, and if the port listens on all interfaces it will still raise a warning alert.
NETWORK_RAWV4_PROCESS_ALLOW=full process details as shown by 'ps' This parameter can be used in the case where a known process is permitted to use a RAWV4 port but the port cannot be explicitly defined using the above entries because the process uses random port numbers.
Example
NETWORK_RAWV4_PROCESS_ALLOW=/usr/bin/rpcbind -w -f
If used it will not be treated as a critical alert; if it listens on a specific interface the report will highlight it in a different colour to indicate that it is considered insecure, and if the port listens on all interfaces it will still raise a warning alert.
NETWORK_RAWV6_PROCESS_ALLOW=full process details as shown by 'ps' This parameter can be used in the case where a known process is permitted to use a RAWV6 port but the port cannot be explicitly defined using the above entries because the process uses random port numbers.
Example
NETWORK_RAWV6_PROCESS_ALLOW=/usr/bin/rpcbind -w -f
If used it will not be treated as a critical alert; if it listens on a specific interface the report will highlight it in a different colour to indicate that it is considered insecure, and if the port listens on all interfaces it will still raise a warning alert.
TCP_OUTBOUND_SUPPRESS=:port:description This is used only in the firewall rule checking section, and only if there is no matching port open on the server. Depending on how paranoid your server firewall rules are, there will be accept rules for outbound traffic (where no port is open on the local server as the target is a remote server or network). This parameter is used to suppress an alert for a firewall accept rule for a TCP port that is not open on the local server.
Example
TCP_OUTBOUND_SUPPRESS=:123:local network ntp servers
UDP_OUTBOUND_SUPPRESS=:port:description This is used only in the firewall rule checking section, and only if there is no matching port open on the server. Depending on how paranoid your server firewall rules are, there will be accept rules for outbound traffic (where no port is open on the local server as the target is a remote server or network). This parameter is used to suppress an alert for a firewall accept rule for a UDP port that is not open on the local server.
Example
UDP_OUTBOUND_SUPPRESS=:123:local network ntp servers
TCP_NETWORKMANAGER_FIREWALL_DOWNGRADE=:port:
UDP_NETWORKMANAGER_FIREWALL_DOWNGRADE=:port:
NetworkManager or Firewalld can open firewall ports for services that have not necessarily been manually configured by the site administrator. This parameter allows downgrading of an alert to a warning if the following conditions are met: the port is not configured as an expected allowed port, nothing is listening on the port, and NetworkManager or Firewalld is running on the server. This is needed as I have servers not running firewalld where NetworkManager may (as far as I can tell) add ports to iptables that are not defined in any manually configured iptables rulesets, and servers not running NetworkManager but using Firewalld where ports are opened in the firewall that were not defined to firewalld by the system administrator (they do not show in firewall-cmd queries on services or ports).
Before using this parameter on firewalld systems ensure you have checked all services and ports defined to firewalld, and remove them from there if possible rather than downgrading the alert.
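A hypothetical example follows (the port numbers are assumptions for illustration, use the port numbers from your own alerts; note the trailing : is part of the syntax):

```
UDP_NETWORKMANAGER_FIREWALL_DOWNGRADE=:5353:
TCP_NETWORKMANAGER_FIREWALL_DOWNGRADE=:5355:
```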
NETWORK_PORT_NOLISTENER_TCPV4_OK=port:optional description
NETWORK_PORT_NOLISTENER_TCPV6_OK=port:optional description
NETWORK_PORT_NOLISTENER_UDPV4_OK=port:optional description
NETWORK_PORT_NOLISTENER_UDPV6_OK=port:optional description
NETWORK_PORT_NOLISTENER_RAWV4_OK=port:optional description
NETWORK_PORT_NOLISTENER_RAWV6_OK=port:optional description
This parameter is used to downgrade alerts to warnings for ports that are expected to be listening and have firewall ports open for them, but at the time the data was collected had no application listening on the port. This is useful for ports such as the X11 ssh forwarding port, which is only active when a user has an ssh session to the host, and for vnc consoles for VMs where the VMs are not always running on the default host.
It is still raised as a warning, as if the port is open in the firewall any rogue user application can use the port, which is undesirable.
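Hypothetical examples for the two cases mentioned above (the port numbers 6010 and 5901 are assumptions for illustration, use the ports reported in your own alerts):

```
NETWORK_PORT_NOLISTENER_TCPV4_OK=6010:ssh X11 forwarding, only active during user ssh sessions
NETWORK_PORT_NOLISTENER_TCPV4_OK=5901:vnc console for a VM not always running on this host
```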
ALLOW_OWNER_ROOT=directoryname Allows a directory that is expected to be owned by a user other than root to be accepted as OK for reporting if the directory is owned by the root user. This is needed as many system users have home directories of /bin, /sbin or /root (for example on Fedora31 users operator/shutdown/halt have a home directory of /root, but the directory needs to be owned by root, not by users operator/shutdown/halt). It is important to note that the override will apply to any directory matching the name, so use with caution (ie: if =bin is used then /bin, /usr/bin, /home/fred/bin, /rootkit/hacker/bin will all be accepted as OK to be owned by root if not owned by the default owner).
Examples
ALLOW_OWNER_ROOT=bin
ALLOW_OWNER_ROOT=sbin
ALLOW_OWNER_ROOT=root
ALLOW_DIRPERM_SYSTEM=directoryname This can be used to override alerts for user home directories that need to have shared (read) access. Directories such as /bin and /sbin are configured as some user home directories but need to permit other users to traverse them, so this configuration option can be used for user home directories such as those; user directories serving 'public_html' pages are another example that would need this override. It will allow directories secured as drwxr-xr-x, drwxr-x--x, drwx--x--x, dr-xr-xr-x, dr-xr-x--- or drwxr-x--- to be accepted as OK.
Examples
ALLOW_DIRPERM_SYSTEM=bin
ALLOW_DIRPERM_SYSTEM=sbin
ALLOW_DIRPERM_SYSTEM=adm
It is important to note that the override will apply to any directory matching the name, so use with caution (ie: if =bin is used then /bin, /usr/bin, /home/fred/bin, /rootkit/hacker/bin will all be accepted as OK if they are secured with any of the permission masks mentioned above).
ALLOW_DIRPERM_EXPLICIT=directoryname permissions This is used to explicitly define permissions that can be set on a directory when the existing permissions would otherwise be considered a critical condition. There are some system directories and home directories that will be secured differently to the recommended settings, and these can be set here. The last two example entries are for Fedora systems where system user account home directories are symbolic links instead of real directories.
Examples
ALLOW_DIRPERM_EXPLICIT=mail drwxrwxr-x
ALLOW_DIRPERM_EXPLICIT=named drwxr-x---
ALLOW_DIRPERM_EXPLICIT=gdm drwxrwx--T
ALLOW_DIRPERM_EXPLICIT=sshd drwx--x--x
ALLOW_DIRPERM_EXPLICIT=avahi-autoipd drwxrwx--T
ALLOW_DIRPERM_EXPLICIT=bin lrwxrwxrwx
ALLOW_DIRPERM_EXPLICIT=sbin lrwxrwxrwx
It is important to note that the override will apply to any directory matching the name, so use with caution (ie: if =bin is used then /bin, /usr/bin, /home/fred/bin, /rootkit/hacker/bin will all be accepted as OK if they match the directory permissions specified).
FORCE_PERM_OK=fullpathandfilename:expectedperms Occasionally there will be a file that alerts but does not fit any generic rules; this custom file setting can be used to force any checks against the file permissions of a file to be considered OK regardless of what the file permissions actually are. Examples
FORCE_PERM_OK=/usr/libexec/mc/cons.saver:-rw-rw-rw-
Unlike other overrides this is specific to one file; the full path and filename must be specified, plus the expected permissions, separated by a colon. It is expected there would be very few of these entries in a configuration file (I currently have zero).
FORCE_ANYFILE_OK=filename fileperms Allow a file of this name to be forced OK under any directory if it fails the initial default checks. This is a risk, as the check will be performed on any file matching the basename of the full file path, but Fedora now generates lots of dynamic PCI bus entries under the /sys/devices/pciNNNN:NN path as --w--w---- and we do not want them generating false alerts. Examples
FORCE_ANYFILE_OK=remove --w--w----
FORCE_ANYFILE_OK=rescan --w--w----
The risk is minimised by the override requiring that the expected file permissions be explicitly provided. Also, this parameter is only used if the filename being checked has already failed all previous checks; this is the last check in the chain.
FORCE_OWNER_OK=fullpathandfilename:expectedowner Occasionally there will be a file that alerts but does not fit any generic rules; this custom file setting can be used to force any checks against the owner of a file to be considered OK, as long as the owner of the file actually matches the expectedowner provided.
Examples
FORCE_OWNER_OK=/usr/libexec/mc/cons.saver:vcsa
Unlike other overrides this is specific to one file and the full path and filename must be specified; it is expected there would be very few of these entries in a configuration file (I currently have zero).
SUID_ALLOW=fullpathandfilename There will always be setuid files on a *nix system, and you should keep track of them. This setting is used to define every setuid file you expect to exist on the server being checked. Any setuid file on the server not defined by these entries will be reported as a critical issue.
Examples
SUID_ALLOW=/usr/sbin/sendmail.sendmail
SUID_ALLOW=/usr/sbin/userhelper
SUID_ALLOW=/usr/bin/lockfile
SUID_ALLOW=/usr/bin/sudo
SUID_ALLOW=/usr/bin/crontab
Unlike other overrides this is specific to one file and the full path and filename must be specified. It should also be noted that the report will alert on any entries defined this way where the file does not exist on the server to ensure you do not leave stale entries in this list.
SUID_SUPPRESS_DOCKER_OVERLAYS=yes On servers running docker containers a lot of SUID files used by the containers are placed in directories under /var/lib/docker/overlay2/containerid. As container ids are generated there is no way to hard code allowed filenames using the above parameter, so this parameter can be used to suppress all alerts for SUID files if they are under the directory path /var/lib/docker/overlay2. If this parameter is used the file security page of the report will list the number of alerts suppressed for this reason and provide a link to all the files that had alerts suppressed. You are responsible for reviewing this list of suppressed files if you suppress the alerts.
SUID_SUPPRESS_SNAP_OVERLAYS=yes On Ubuntu servers (at least from release 20.04 onward) many applications are installed as SNAP packages. These contain copies of filesystem files, including SUID set files, under directories such as /snap/snapd, /snap/core, /snap/core18 and /snap/core*, which will alert as unexpected SUID files; as they are in almost completely randomly named directories under the paths mentioned they cannot be coded in custom files as expected SUID files. This parameter can be used to suppress the individual alerts for those files and instead generate a single alert. If this parameter is used the file security page of the report will list the number of alerts suppressed for this reason and provide a link to all the files that had alerts suppressed. You are responsible for reviewing this list of suppressed files if you suppress the alerts.
SNAP files are dangerous; for example '/snap/core18/1885/bin/su -' works perfectly well even if the system /bin/su command has been locked down, and users can install SNAP packages into their personal directories as well, widening the hole. So this parameter will suppress hundreds of alerts but will always raise one, as SNAP packages are dangerous.
ADD_SYSTEM_FILE_OWNER=username You will occasionally install 3rd party packages that you want to run under a user other than root, in which case, as good practice, you would chown all files in that package to the new owner. To avoid the report producing critical alerts for those products you may install in system directories such as /usr/local or /opt, it is possible to use the configuration file to define additional users that are permitted to own system files.
Examples
ADD_SYSTEM_FILE_OWNER=jetty
ADD_SYSTEM_FILE_OWNER=snort
ADD_SYSTEM_FILE_OWNER=logcheck
On the file permissions check page of the report all 'system file owners' are listed for the server being checked so you can keep an eye on additional entries.
ADD_WEBSERVER_FILE_OWNER=username This is only used if the server data file was produced using the collector --webpathlist option to specifically identify web server directories that must only contain read-only files. It allows the specification of users that are expected to be able to own those files.
Examples
ADD_WEBSERVER_FILE_OWNER=apache
ADD_WEBSERVER_FILE_OWNER=jetty
If no entry exists and there are webserver files to be checked, an owner of apache will be assumed by default.
WEBSERVER_FILE_ALLOW_WRITE_SUFFIX=.suffix: This is only used if the server data file was produced using the collector --webpathlist option to specifically identify web server directories that must only contain read-only files.
The parameter can be used to identify file suffixes that may be writeable under that directory path. Example
WEBSERVER_FILE_ALLOW_WRITE_SUFFIX=.log:
This should ideally never be required, writeable files should be kept in directory paths outside the webserver root directories.
WEBSERVER_FILE_ALLOW_WRITE_EXACT=/full/path/file: This is only used if the server data file was produced using the collector --webpathlist option to specifically identify web server directories that must only contain read-only files.
Can be used to suppress an alert for a specific webserver file by explicitly naming a file that can be writeable. The trailing : is required to prevent false matches, such as xxx.log matching xxx.log.1.
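A hypothetical example (the file path is an assumption for illustration; note the trailing :):

```
WEBSERVER_FILE_ALLOW_WRITE_EXACT=/var/www/html/myapp/status.log:
```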
ALLOW_SLOPPY_VAR=WARN Setting this alters the report to only warn for /var file security failures; this cuts out a lot of critical alerts, as a lot of files under /var are owned by many different userids.
ALLOW_VAR_FILE_GROUPWRITE=YES A lot of files under /var tend to be group writeable with the default umasks set by operating system installs, and in many cases there may be a need for it. Setting this value will make the checks consider files in directories under /var which are group writeable as secure, as long as the group matches the owner (ie: user mysql, group mysql) and the 'other' permissions are not writeable.
NOWARN_ON_MANUALLOGCHECK=YES There are a series of checks that have been identified as needing to be performed manually. Warnings would normally be raised in the report for these, but using this configuration file option will suppress those warnings in the report.
NOWARN_ON_CUSTOMFILE=YES By default, if a custom file is used to override the defaults a warning is issued. This custom file setting suppresses that warning.
As custom files are now the norm, and the main page of each server's results has a link to the custom file anyway, this is deprecated and will probably be removed in the next release.
REFRESH_INTERVAL_EXPECTED=days By default the main index page displays the snapshot date field highlighted (in the alert colour) if the collected data for the snapshot is over 14 days old. This parameter allows you to override that default with a different number of days. For example, if you have VMs that are only started as needed, or remote laptops that only connect to the local network occasionally, you may want to increase the number of days before a data collection snapshot is considered obsolete.
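For example, to allow 30 days before a snapshot is flagged as obsolete:

```
REFRESH_INTERVAL_EXPECTED=30
```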
EXACT_ALERT_REASON=description text Occasionally there will be alerts raised that you just cannot fix; this parameter can be used multiple times to document each expected alert. It acknowledges that fact and provides entries against which the count of alerts displayed on the main index menu next to the server entry is considered 'expected'; if the number of actual alerts equals the number of expected alerts the total will show in green text with a (C) indicating a custom file override. Note this is the exact number of alerts expected, not a less-than-or-equal check, as if the alert count drops below the expected number something has changed and you need to investigate.
This should be used as a last resort, as ideally you would correct all issues; but unfortunately if the toolkit is run on debian based systems there will currently always be at least one alert.
It must exactly match the alert text raised, and the possible exact matches will also be shown in the expected alert report.
This parameter will be ignored if there are more than 30 alerts for a server as (a) index processing time would be too great, and (b) you should fix your problems rather than suppress them; this parameter is intended for when you have only a couple of alerts you require or do not intend to fix.
EXACT_ALERT_REASON_NOTES=freeform text This can be used multiple times in a custom file to provide notes on why an alert is expected rather than being fixed.
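A hypothetical example pairing the two parameters (the alert text shown is an assumption for illustration; the value must exactly match the alert text raised in your own report):

```
EXACT_ALERT_REASON=System file /bin/su has incorrect permissions
EXACT_ALERT_REASON_NOTES=set by debian packaging, reapplied on every package update so not fixed
```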
HOMEDIR_MISSING_OK=userid: On Fedora and CentOS (and presumably Redhat) operating systems a lot of users are created with non-existent home directories; for example the cockpit-ws and cockpit-wsinstance users have a home directory of '/nonexisting', tss has a home directory of '/dev/null', and there are many other examples. Each user that has a home directory that does not exist increments the warning count by default.
Using this parameter will suppress the warning count being incremented. This suppression is done on a per-user basis so new users/packages added can be easily identified. The entry will still be listed in the report, but in a non-warning colour and with no warning counter incremented.
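Examples using the users mentioned above (note the trailing : is part of the syntax):

```
HOMEDIR_MISSING_OK=cockpit-ws:
HOMEDIR_MISSING_OK=cockpit-wsinstance:
HOMEDIR_MISSING_OK=tss:
```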
SSHD_SUBSYSTEM_ALLOW=subsysname:fullsubsyscommand: Example SSHD_SUBSYSTEM_ALLOW=sftp:/usr/libexec/openssh/sftp-server:
This parameter was added for servers that are managed by ansible, and as such require sftp to be enabled. It can however be used for any subsystem enabled in the sshd_config file. If used, the alert raised for the specified subsystem is downgraded to a warning.
SERVER_IS_ANSIBLE_NODE=yes This parameter is only used if the SSHD_SUBSYSTEM_ALLOW parameter has been used for the sftp subsystem. If used it identifies the server as being managed by ansible, which as such must have sftp running; in this case the warning for the sftp subsystem is altered to OK.
ALLOW_AUTHORISED_KEYS=yes By default users with authorized_keys files raise warnings; using this parameter indicates it is OK for users to have those files and suppresses the warnings.
This does not affect the 'root' user as an alert will always be raised if root has ssh keys; that should never be allowed.
SUDOERS_ALLOW_ALL_SERVERS=value
SUDOERS_ALLOW_ALL_COMMANDS=value
In all cases 'value' is the user or %group the sudoers rule is for.
These parameters can be used to downgrade alert severity levels with the following rules
* if a rule is for all servers and all commands, both parameters must be used to downgrade the alert to a warning
* if a rule is for all servers but restricted commands, allow_all_servers can downgrade it from warning to OK
* if a rule is for a specific server and all commands, allow_all_commands can downgrade it from alert to warning
At no time will a rule allowing all commands be downgraded below a warning, as it is a risk
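Hypothetical examples (the %wheel group and backupuser names are assumptions for illustration; the value is the user or %group exactly as it appears in the sudoers rule):

```
SUDOERS_ALLOW_ALL_SERVERS=%wheel
SUDOERS_ALLOW_ALL_COMMANDS=%wheel
SUDOERS_ALLOW_ALL_SERVERS=backupuser
```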
INCLUDE_CUSTOM_RULES=filename Include a custom rules file. These are intended to be rules defining the network ports, file security settings, new system file owners etc. for an application. This allows rules for an application to be defined in one file and 'included' in individual server custom files, rather than having to code the values for every server running the application.
Where include files are used, the values in the individual server custom file will override values in the shared files in cases where duplicate entries may result.
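A hypothetical example (the filename is an assumption for illustration; the included file would contain the shared parameters for the application, such as its SUID_ALLOW and allowed port entries):

```
INCLUDE_CUSTOM_RULES=myapp_common_rules.custom
```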
BLUETOOTH_ALERT_TO_WARN=yes Downgrade bluetooth activity in the network checks from alerts to warnings if active bluetooth sessions were found to report on.
ALLOW_OWNER_OVERRIDE=dirname:realowner: Only used in home directory checks where a shared directory is owned by someone other than root. This was added for Debian11, where the sendmail home directory is a shared directory not owned by root. The trailing : is required. An example of what is used in my 'include' common file for Debian11 is
ALLOW_OWNER_OVERRIDE=sendmail:smmta:
DOCKER_ORPHANS_SUPPRESS=YES On servers running docker containers the UID and GID of directories and files created are completely up to the creator of the container image. Any well-designed application container will avoid at all costs any UID/GID that may already exist on the system. This does mean they show up as alerts in the orphaned file report. Using this parameter in a customisation file will suppress alerts for any orphan files (files not owned by an existing system user) in docker container overlay directories.

[Previous Section] [Index]

7. Appendix C - Quick Start Guide

This is the quick start guide for those who don't read manuals. It explains how to set up and run the toolkit immediately.

You will have to refer to the documentation at some point if you ever want to get down to a fairly error-free run, or to get it to run in 5 minutes instead of 50 minutes.

1. Choose the central server
Choose a server to provide the html results; it should be running a web server such as apache. NOTE: A web server is not actually required as the output is in flat file format (no cgi-bin stuff), but the output consists of html reports so it must be placed where a web browser can access it.
The server chosen must be secure; the data collection scripts collect sensitive information such as the contents of passwd and shadow files, which you must not consolidate on an externally facing server. Personally I do the processing on a grunty internal server then just copy the results directory to a web server.
2. Install all the scripts
Untar the package on a central server
3. Copy data collector script to all linux servers
copy bin/collect_server_details.sh to every server
(if you have many servers you probably use something like puppet/chef, so you should use that as it will make it much easier to push out new versions to all servers)
4. Run a data scan on each server to be reported on
  • 4.1 Run Initial Scan (default/max scanlevel)
    • run "collect_server_details.sh" on each server it was copied to
    • scp/ftp the output .txt and .tar files created to a directory on the central server
  • 4.2 Frequent/Regular scans (override scanlevel) To be done only after you have fixed up all system file perm errors.
    • run "collect_server_details.sh --scanlevel=3" on each server it was copied to
    • scp/ftp the output .txt and .tar files created to a directory on the central server
5. Processing the output on the central server
Where dirname is the directory you scp/ftp'ed the files to on the central server, run "bin/process_server_details.sh dirname"
6. Review results
Results are accessible from results/index.html under the application directory.
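The steps above can be sketched as a shell session (the hostname 'central' and the directory '/home/secaudit/rawdata' are assumptions for illustration; everything else comes from the steps above):

```
# on each server to be checked, run the collector as root
./collect_server_details.sh --scanlevel=3
# copy the .txt and .tar output files to the chosen directory on the central server
scp *.txt *.tar central:/home/secaudit/rawdata/

# then on the central server (not as root) process the collected files
bin/process_server_details.sh /home/secaudit/rawdata
# and browse results/index.html under the application directory
```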

[Previous Section] [Index]

8. Appendix D - Known Complications with using this toolkit

A customisation file per server is really required if you have multiple OS's. For example, some releases of Fedora moved bin and sbin under /usr, leaving symbolic links at /bin and /sbin pointing to the new directories under /usr, but still create system users in /etc/passwd with home directories of /bin and /sbin. The impact of this is that home directory checks for those users have to be overridden to permit lrwxrwxrwx as a 'secure'/safe setting, which it obviously is not. I may implement checks to cope with the use of symbolic links in this case at some point, but the issue is that if user home directories are set to symbolic links instead of directories the toolkit does not currently handle that.

A customisation file per server is really required if you have multiple OS's. CentOS7 still uses /usr/bin and /usr/sbin for files, so SUID file checks must be different for each OS; Fedora will have a suid file at /bin/wall while CentOS7 will have it at /usr/bin/wall, so the ALL.custom file cannot be used for both OSs as the files are in different places. This divergence gets even worse with BSD based systems. And CentOS7 and CentOS8 have different suid files; it is just impossible to have a generic custom file for all servers.

There are also lots of critical alerts plus lots of warnings generated on systems running Docker containers, as docker keeps copies of setuid files for each container in the overlay2 directory; plus with my docker containers I keep the userid numbers well away from 'real' users on the host, so 'bad ownership' alerts are logged for user numbers that do not exist as valid users on the docker host.
A customisation file entry exists to suppress the suid file alerts, it will however still generate a list of all suid files that were suppressed from alerting so you can manually review them.
A customisation file parameter also exists to suppress 'bad ownership' alerts for files under 'docker' directories if you get sick of seeing those.

There are also lots of critical alerts for Ubuntu distributions as these now supply many applications as SNAP packages which have their own copies of SUID files. A customisation file entry exists to suppress these suid file alerts, it will however still generate a list of all suid files that were suppressed from alerting so you can manually review them.

None of these are issues with the toolkit itself, but the effect of different *nix implementations; I chose not to code for all possible OS's, only the ones I use.

Below are the known real issues with this toolkit


[Index]