In these days of firewalld, not many people still use native iptables rules, but they certainly still have their place. I still use them on my main webserver simply because of the ease with which new DROP rules can be added.
My old method of keeping all the rules in a simple script file and reloading everything when a new DROP rule needed to be added had to be thrown out when I started using docker, as it flushed all chains, including the docker chain, which of course immediately started causing problems. That makes my post of many years ago on dynamic blacklisting out of date, so I thought I would document the main changes I had to make here for later reference, as a glimpse into my reasoning for making them.
First I documented the existing firewall chains in a flow diagram, made easy as the script file was of course fully commented :-). I then decided that rather than updating existing chains I would insert a new one, so I
- created a completely new chain called “blacklist”. I decided to insert this chain into the chaining path used for external internet traffic only; it was jumped to instead of the normal chain for this path and, if there were no DROP matches, it jumped onward to the original chain for this path (a sketch follows this list)
- changed my apache rewrite rules script to flush only the contents of that chain when replacing blacklist DROP rules, and of course reload the rules within that chain
- changed my daily batch job that removes duplicate DROP entries and adds new rules based on apache logfile scans to work only on that new blacklist chain
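A minimal sketch of that first arrangement, assuming the external internet traffic path previously jumped to a chain I will call “internet-in” here, selected by interface for this example (all the names and addresses here are illustrative, not my actual rules):
# create the new blacklist chain
iptables -N blacklist
# DROP rules for blacklisted addresses are loaded here, for example
iptables -A blacklist -j DROP -s 198.51.100.7
# no DROP match, so continue onward to the original chain for this path
iptables -A blacklist -j internet-in
# the external traffic path now jumps to blacklist instead of internet-in
iptables -A INPUT -i eth0 -j blacklist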
I had issues getting that working due to a completely unrelated change: I had switched back to SELinux enforcing mode. In SELinux enforcing mode, despite the fact that there are zero AVC denials in any of the audit logs, it is not possible to use the “sudo” command. I had forgotten about that, but after a bit of debugging tracked it down. I spent ages trying to figure out what SELinux was doing, but with no denials being logged anywhere I got nowhere, and reverted to SELinux permissive mode.
Normally that would not be an issue, as I have regular batch jobs checking for denials to determine if new rules are needed to accommodate changes I make; in this case, as there are no denials, it is an issue to investigate later, in a separate post.
In this case, after reverting to permissive mode, everything appeared to be working as expected. At least the apache logs showed that requests for resources that were obviously hacking attempts were resulting in entries being added to the new blacklist chain. There were however two issues with my placement of the new chain:
- a minor issue: another level of chain depth was added, which was hard to describe with script comments, so I relied on the flow diagram to show where it fitted (the flow diagram has since been made part of the script comments)
- a slightly larger issue: as it was in the internet traffic chain, I could not actually test it was working without leaving the house and connecting to a public wi-fi network
So I moved the blacklist chain directly onto the top of the global INPUT chain and, at the end of the DROP statements, replaced the jump to another chain with a simple RETURN back to the INPUT chain. A minor benefit of that is that a simple jump/RETURN at the top of the INPUT chain makes the flow diagram and the commenting in the script much easier; the major benefit of course is that, as the chain is now used on all INPUT traffic and not just traffic filtered as coming from an internet source, I am able to test that the automatic blacklisting is working from my internal network. The downside is that the DROP rules are now parsed even for my trusted internal network, but iptables processing is fast and has no visible impact on normal traffic.
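A minimal sketch of the revised arrangement (the DROP entry is just an example address):
# create the new blacklist chain
iptables -N blacklist
# DROP rules for blacklisted addresses are loaded here, for example
iptables -A blacklist -j DROP -s 198.51.100.7
# anything not dropped simply returns to the INPUT chain
iptables -A blacklist -j RETURN
# the very first INPUT rule jumps to the blacklist chain for all traffic
iptables -A INPUT -j blacklist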
So, my setup for web server monitoring and automatic blacklisting is now
- the main firewall setup script, run at server boot time, to set up the iptables chains and rules, including the blacklist chain and the DROP statements from the existing file built by the steps below
- apache rewrite rules in the httpd configuration; these rewrite rules are in a separate file sourced into the httpd.conf with an include statement. Having them in a separate file allows them to be managed by puppet and deployed to any server I feel needs them. The rewrite rules trigger a script to automatically blacklist the ip-address that triggered a rewrite rule
- the above mentioned script logs why the ip-address was blacklisted, adds an iptables command to DROP that source ip-address to the end of a list of blacklisted-ip iptables commands, and uses “sudo” to run a separate script to update the iptables blacklist chain, as of course the apache user cannot do that itself (a sudoers sketch follows this list)
- the script run by “sudo”, which flushes the blacklist chain rules, reloads the DROP statements from the file built by the rewrite rules script, and adds the needed final RETURN statement for that chain
- yes, there has to be one: a batch job that runs daily to remove duplicate ip-addresses from the file built by the rewrite rules. It also scans the apache access and error logs for other messages that indicate ip-addresses need to be blacklisted, such as clients trying to force negotiation downgrades. That batch job then runs the same script as above to flush the blacklist chain rules, reload the DROP statements from the file built by the rewrite rules script, and add the needed final RETURN statement for that chain
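For the “sudo” step to work, the apache user needs to be allowed to run just that one reload script without a password. A minimal sketch of the sudoers entry, using the reload script path from the examples below (adjust the user name to whatever your httpd runs as):
# /etc/sudoers.d/blacklist - apache may run only the blacklist reload script
apache ALL=(root) NOPASSWD: /etc/rc.firewall_blacklist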
The above allows almost real-time blacklisting of client ip-addresses that attempt to access URLs that are obviously hacking attempts.
Not quite real-time, in that it takes time for the scripts to run, and hacking attempts hit with quite a few requests per second, which results in 4-5 duplicate ip-addresses being added to the blacklist iptables command file before the first trigger completes and blocks the source; that is why one of the tasks in the batch job is to clean up duplicate ip-addresses. The batch job of course is also able to scan for problems in the apache error logs which the apache rewrite rules cannot detect.
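The duplicate clean-up itself can be a one-liner; a sketch, using the blacklist command file path from the examples later in this post:
# keep only the first occurrence of each line, preserving order
awk '!seen[$0]++' /var/www/html/data/blacklist.sh > /tmp/blacklist.$$ && mv /tmp/blacklist.$$ /var/www/html/data/blacklist.sh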
By using a separate chain I can flush and change the rules as needed without upsetting the chaining added and deleted as the docker service starts and stops; which was the goal of my changes.
Obviously you should not rely just on apache log files to detect hacking attempts, even in the case where you only have http ports exposed to the internet. I would recommend at a minimum the use of the community edition of “snort” (even though you need to compile it and some pre-requisites from source) as an additional logging mechanism. In my environment, where I run the webserver in a KVM instance, the below message was logged by snort on both the host machine and the guest KVM instance running the webserver, which is fortunately not Drupal :-)
10/03-20:07:09.079690 [**] [1:46316:4] SERVER-WEBAPP Drupal 8 remote code execution attempt [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 122.114.49.139:41577 -> 192.168.1.189:80
so additional tools are always of use; never rely on just one. In my case I run snort on all my machines and KVM instances, with a weekly batch job to refresh the rules and a custom nagios plugin to monitor when alerts are written to the snort alert file, so they are detected quickly in a totally hands-off way.
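The plugin itself is nothing special; a sketch of the idea only, with illustrative paths and not my actual plugin, is a check that warns when the snort alert file has grown since the last run:
#!/bin/bash
# warn when the snort alert file has grown since the last check
ALERTFILE="/var/log/snort/alert"
STATEFILE="/var/tmp/check_snort_alert.size"
last=$(cat ${STATEFILE} 2>/dev/null)
last=${last:-0}
curr=$(stat -c %s ${ALERTFILE} 2>/dev/null)
curr=${curr:-0}
echo ${curr} > ${STATEFILE}
if [ ${curr} -gt ${last} ]
then
echo "WARNING: new alerts written to ${ALERTFILE}"
exit 1
fi
echo "OK: no new snort alerts"
exit 0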
Examples
A few cut-down examples of how to set up automatic blacklisting. These are not my working scripts, although they probably will work: they are “cut-down” examples based on my scripts, with functions inlined, directory paths changed, and all error handling removed, to make them more readable for this post. They have not been tested in this form, as I obviously use my much larger and much more complex scripts.
Also, all these examples assume you are familiar enough with iptables to have created the “blacklist” chain used. Most simply, to create an empty chain that just returns to the calling chain, the commands would be “iptables -N blacklist” and “iptables -A blacklist -j RETURN”. To implement the chain, ideally you would append a jump to it in the section of your undoubtedly complex iptables rules that suits you best; to simply make it the first entry in your INPUT chain as I do, the very first entry for the INPUT chain should be “iptables -A INPUT -j blacklist”. Should you have failed to script your rules correctly, you can force the blacklist chain to be the first entry by using “iptables -I INPUT 1 -j blacklist” to insert it above the current first rule in the INPUT chain, but you should really make sure you understand your scripts and insert it where you want it in the first place.
You will also see in the examples that I create the files I use under the webserver directory; that is definitely not recommended, but it will work. It is recommended to place any sensitive file outside your webroot, but as this post is not going into things like the httpd service PrivateTmp flag and complicated SELinux rules, the examples are written in a way that should just work.
apache rewrite rules example
RewriteEngine on
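# each RewriteCond/RewriteRule pair below sends a request matching an obvious attack signature to the blocking CGI script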
RewriteCond %{REQUEST_URI} (phpMyAdmin) [NC]
RewriteRule ^(.*)? /cgi-bin/block_ip.sh [NC,L]
RewriteCond %{REQUEST_URI} (etc/passwd) [NC]
RewriteRule ^(.*)? /cgi-bin/block_ip.sh [NC,L]
RewriteCond %{REQUEST_URI} (http:) [NC]
RewriteRule ^(.*)? /cgi-bin/block_ip.sh [NC,L]
RewriteCond %{REQUEST_METHOD} =CONNECT
RewriteRule ^(.*)? /cgi-bin/block_ip.sh [NC,L]
RewriteCond %{REQUEST_URI} (login.jsp) [NC]
RewriteRule ^(.*)? /cgi-bin/block_ip.sh [NC,L]
RewriteCond %{REQUEST_URI} (shell.php) [NC]
RewriteRule ^(.*)? /cgi-bin/block_ip.sh [NC,L]
RewriteCond %{REQUEST_URI} (cmd.php) [NC]
RewriteRule ^(.*)? /cgi-bin/block_ip.sh [NC,L]
script to reload blacklist example
You need to have created a “blacklist” chain in your iptables rules for this to work of course.
#!/bin/bash
# Flush our dynamically changing blacklist chain
iptables -F blacklist
# load all the blacklist rules
bash /var/www/html/data/blacklist.sh
# Anything not blacklisted RETURNs to the main INPUT chain
iptables -A blacklist -j RETURN
exit 0
script run by apache rewrite rules example
#!/bin/bash
BLACKLIST="/var/www/html/data/blacklist.sh"   # This file should be run by your system firewall startup scripts
LOGFILE="/var/www/html/data/blacklist.log"    # This should be automatically checked for changes
# ----------------------------------------------
# Display a web page back to the requesting user
# Yes it needs the blank line after the content type.
# Do this before reloading the firewall rules or
# the hacker will never see it.
# ----------------------------------------------
cat << EOF
Content-Type: text/html
Status: 404 NotFound

<html>
<head><title>Unauthorised hacking attempt detected</title></head>
<body bgcolor="yellow">
<h1>Site access violation</h1>
<hr>
<p>
Unethical activity has been detected from your ip-address, specifically a request to attempt to access<br />
${REQUEST_URI}
<br />which this site considers unacceptable behaviour.
</p>
<p>
Your ip-address <b>${REMOTE_ADDR}</b> has been automatically added to this sites blacklist.
</p>
</body>
</html>
EOF
# ----------------------------------------------
# Now block all traffic to and from the ipaddress
# that triggered this script.
# If the blacklist file exists
# - check if the entry exists, we do not want duplicates
# - if the entry does not exist add it
# - if the entry does exist we should not have triggered, log warning
# ----------------------------------------------
# record into the blacklist file to use on system restarts
echo "/sbin/iptables -A blacklist -j DROP -s ${REMOTE_ADDR}" >> ${BLACKLIST}
# and "sudo" the script to load the updated drop rules file
# /etc/rc.firewall_blacklist can be run by apache with no password (configured in sudoers file)
/usr/bin/sudo /etc/rc.firewall_blacklist
# All done
exit 0
Batch jobs
You should create batch jobs to remove duplicate ip-addresses from the blacklist command script file, and also to create new blacklist DROP rules by parsing the apache error logs.
My own script for that is really too complicated to use as an example, but things you should be looking for in the error logs are messages such as “AH02042: rejecting client initiated renegotiation” and “AH01071: Got error ‘Primary script unknown’”, any 404 errors for attempts to access pages that do not exist, and many other conditions depending on what your environment is. And yes, you should have a batch job for it: you do not really want to look at the logs every day, just check them periodically for new conditions to add into the checks in your batch job.
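That said, a bare-bones skeleton of the error-log half of such a job, nothing like my actual one and with an illustrative log path, could be as simple as:
#!/bin/bash
# scan the apache error log for known-bad messages and append
# DROP commands for the client addresses that logged them
BLACKLIST="/var/www/html/data/blacklist.sh"
grep -E "AH02042|AH01071" /var/log/httpd/error_log \
| grep -oE "client ([0-9]{1,3}\.){3}[0-9]{1,3}" \
| awk '{print $2}' | sort -u \
| while read ip
do
echo "/sbin/iptables -A blacklist -j DROP -s ${ip}" >> ${BLACKLIST}
done
# then run the duplicate clean-up shown earlier and reload the blacklist chain
/etc/rc.firewall_blacklist
exit 0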
Update 11Dec2019
A faster and more efficient way of automatically blacklisting hackers is to make the apache configured 404 (page not found) error handler also use the blacklist cgi script, rather than display a static error page.
That is what I have moved to now, after extensive website scanning to try to locate bad links before doing so. This method is a lot more effective, as it captures in real time all the hacking attempts not covered by any rewrite rules.
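In apache configuration terms that is a one-line change, reusing the CGI script from the examples above (which already returns a 404 status itself):
ErrorDocument 404 /cgi-bin/block_ip.sh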
There is a lot more manual effort required afterward, however, as you must review the blacklist logs often to determine whether it was actually a bad link on the website causing the error (in which case the user needs to be unblocked and the bad link fixed). In a well maintained website there should be no bad links of course, but inevitably they will pop up occasionally.