Is there a future for Docker ?

With Fedora no longer supporting Docker out of the box (kernel changes are needed: https://fedoraproject.org/wiki/Common_F31_bugs#Other_software_issues) and RHEL8 no longer supporting Docker (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index), one may wonder whether Docker itself has a future.
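
For anyone hitting this on Fedora 31, a hedged sketch of checking which cgroup hierarchy a host is running, and of the boot-parameter workaround described on that common-bugs page (assuming grubby is installed, as it is on stock Fedora):

stat -fc %T /sys/fs/cgroup/        # "cgroup2fs" means cgroups v2, "tmpfs" means the old v1 hierarchy
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"    # boot back into cgroups v1
reboot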

The reason RedHat gives (in the link above) is that they wish to push all users onto their OpenShift solution, which is obviously useless for personal home labs; who wants to pay for a license just to run a container? OpenShift also does not permit single node clusters for development.

Both Fedora and RedHat recommend podman (mentioned in both links above) for individual/single-user development and testing of containers. Use of podman on any commercial/business system (or any system with more than one user) is not acceptable from a system management point of view. With Docker all containers are centrally managed and system admins have a central view of what containers are running, so they can monitor and manage system resources; with podman however…

  • containers built for testing by users reside in their home directories, and multiple users can each have their own personal copy of a container; a disk space blowout, plus no co-ordination between users
  • containers run by users via podman are spawned by each user with their own container engine per instance, as opposed to a single docker daemon managing all containers; so there is no ‘global’ view of running containers for system admins to use to track container host resource usage or to identify which container is causing issues (and multiple users could be running copies of the same container), and no way to even display all running containers as they are spawned directly by users without any orchestration engine
  • users who should not be playing with containers may play with containers. With Docker only users in the ‘docker’ group can play with containers, but any user can run podman (and while the podman command could be secured so only users in a docker group could run it, any user could obtain their own copy of the podman binary)
  • any system administrator with a remaining brain cell would refuse to install podman on servers they manage

So development of containers has effectively moved to a commercial footing in RHEL8+, and Fedora can be considered to have dropped support for it also. However, if companies are happy to allow developers to use podman on their own personal/company devices (whether linux devices or linux VMs), with deployment to an orchestration engine at a later time, that approach is possible and has no more impact on the loss of co-ordination between developers than if all developers were running their own podman tests on a shared host.

It is also worth noting that the Docker community no longer supports/updates docker packages in the Fedora/CentOS repositories anyway; packages in OS distribution repositories are out of date as docker itself has begun the split between free and commercial editions. As with many opensource projects there is now a community edition of docker available from repositories provided by Docker itself (the list below was taken from a user forum response at https://forums.mobyproject.org/t/yum-centos-repo-is-broken-https-yum-dockerproject-org-repo-main-centos-7/543 providing the correct documentation for each OS to install Docker; the Fedora entry does mention kernel parameters need to change for F31, so the documentation at those links is being kept up to date).

centos/rhel: https://docs.docker.com/install/linux/docker-ce/centos/
debian: https://docs.docker.com/install/linux/docker-ce/debian/
fedora: https://docs.docker.com/install/linux/docker-ce/fedora/
ubuntu: https://docs.docker.com/install/linux/docker-ce/ubuntu/

As Docker has chosen the community/commercial path it will be interesting to see if a fork occurs, as happened when Oracle took that path with MySQL, resulting in pretty much everybody moving to the MariaDB fork and ditching MySQL altogether (or StarOffice/OpenOffice being replaced by the forked LibreOffice; trying to make parts of a project commercial just makes people switch products). However, as Docker is a collection of parts and has with its “Moby” project started separating parts of the stack into individual components, perhaps only the components needed for a new solution will be forked.

Docker itself at the current point in time has an issue in that it does not have the cgroups2 support needed to run under Fedora31+ and RHEL8+, meaning those sites that use Kubernetes to manage clusters on top of the docker engine have to either delay upgrading infrastructure OSs or switch to a new container runtime for Kubernetes and ditch Docker altogether.

As most clustering environments are cloud based infrastructure, whether deployed as bare-metal or VM hosts with the orchestration engine layer on top, it simply takes a site getting one template working with a new container runtime engine for their entire environment to be redeployed with the new solution; so Docker has a short time-frame to implement cgroups2 support or be replaced. For Kubernetes to switch container engines from docker to rkt, for example, is as simple as setting “--container-runtime=rkt” on the kubelet service startup on each worker node (if rkt is installed of course).
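
As a hedged sketch only (the drop-in file name and the KUBELET_EXTRA_ARGS variable are kubeadm-style assumptions, not something taken from my own clusters), the change on a worker node would look something like:

cat << 'EOF' > /etc/systemd/system/kubelet.service.d/20-container-runtime.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=rkt"
EOF
systemctl daemon-reload
systemctl restart kubelet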

Obviously if data centers are using docker swarms, migration to an alternative product is going to be a major exercise (personally, all my container hosts have been changed from Fedora30 to CentOS7 to give me time to investigate options as C7 still has another four years before EOL).

However, the documentation pages on “coreos.com” where I was looking at the rkt command have a banner at the top of every page stating “These docs are deprecated while they are being migrated to Red Hat. For the most up to date docs, please see the corresponding GitHub repository.”. The Fedora “Atomic Host” projects have also been scrapped since RedHat purchased CoreOS (reference: https://www.projectatomic.io/) and replaced by non-functional fedora CoreOS ‘preview’ images (which I am sure will one day be useful, or not, as the Atomic Host images simply did not work in cluster deployment anyway so why would the replacements work). While that does not affect a switch to rkt, it does raise another red flag that RedHat is forcing users to move toward its products, as CoreOS was one day to be the be-all-and-end-all for cloud deployments but may end up as another commercial (or community/commercial) product.

Personally I hope the Docker team gets it together, as a Docker ‘swarm’ is a lot easier to set up and keep running than a Kubernetes cluster (anyone using Kubernetes who denies they have spent days trying to sort out a node ‘not ready’ problem is telling a lie). While most tutorials on the internet are examples of getting Kubernetes to run using the docker container engine, there are many container runtime engines available other than Docker, as discussed in the article on containers and their engines on the techgenix site.

RedHat’s decision to push orchestration of containers to their commercial OpenShift Kubernetes product will hurt RedHat more than Docker. While the bulk of non-commercial Linux enthusiasts will just walk away from commercial products, entities such as training organisations are not going to switch from a free infrastructure to a paid one, so they will put emphasis on training for competing products they can run in-house.

What will hurt Docker is that if it cannot support cgroups2 within a few months, commercial users that must upgrade their OSs will have no choice but to stop using Docker and switch to an alternative.

Update Jun 27 2020
The Docker-provided docker-ce package (for el7) now runs correctly on CentOS8, so it looks like that release supports cgroups2 now.

# documentation resource was https://www.liquidweb.com/kb/how-to-install-docker-on-centos-8/
# although the cli now works so downgrade may not be needed
dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf list docker-ce
dnf install docker-ce --nobest
usermod -aG docker root
usermod -aG docker mark
systemctl enable docker
id root
id mark
systemctl start docker
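
A quick sanity check after the install (nothing CentOS8 specific, just standard docker commands):

systemctl status docker             # confirm the daemon is running
docker info | grep -i cgroup        # shows the cgroup driver docker is using
docker run --rm hello-world         # pull and run a throw-away test container
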
Posted in Automation, Unix | Comments Off on Is there a future for Docker ?

An update on a couple of old posts

As you may recall I have posted earlier on installing snort as an intrusion detection system (IDS).

And also an earlier post on dynamically adding firewall blacklist rules based on traffic hitting my website, using apache rewrite rules to trap known hacker attempts and custom error pages to do the same.

This post is just a quick update on those two posts.

For snort it is simply a reminder that you do need such tools installed to get a clear description of what the hackers are trying to do. I have a nagios/nrpe script that monitors the ‘alert’ file and reports to nagios when events have been logged, which I then review to ensure the ip-addresses reported by snort have already been blacklisted by my dynamic blacklist scripts.
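
A minimal sketch of the idea behind that nrpe check (not my actual plugin; the alert file path and state file are illustrative only):

#!/bin/bash
# warn if the snort alert file has grown since the last check
alertfile=/var/log/snort/alert
statefile=/var/tmp/snort_alert.lastcount
last=$(cat ${statefile} 2>/dev/null)
last=${last:-0}
now=$(wc -l < ${alertfile})
echo ${now} > ${statefile}
if [ ${now} -gt ${last} ]; then
   echo "WARNING: $(( now - last )) new snort alerts logged"
   exit 1
fi
echo "OK: no new snort alerts"
exit 0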

The logs from the blacklisting scripts just log the URL used, whereas snort logs meaningful alerts such as those below

06/06-18:08:00.381925  [**] [1:648:14] INDICATOR-SHELLCODE x86 NOOP [**] [Classification: Executable Code was Detected] [Priority: 1] {TCP} 136.186.1.76:80 -> 192.168.1.170:44208
06/09-06:27:41.904537  [**] [1:402:11] PROTOCOL-ICMP Destination Unreachable Port Unreachable [**] [Classification: Misc activity] [Priority: 3] {ICMP} 110.249.208.7 -> 192.168.1.189
06/14-00:19:15.538150  [**] [1:401:9] PROTOCOL-ICMP Destination Unreachable Network Unreachable [**] [Classification: Misc activity] [Priority: 3] {ICMP} 101.98.0.98 -> 192.168.1.189
07/01-20:21:27.076636  [**] [1:1390:11] INDICATOR-SHELLCODE x86 inc ebx NOOP [**] [Classification: Executable Code was Detected] [Priority: 1] {TCP} 113.20.13.217:80 -> 192.168.1.173:54192
07/13-13:38:57.644088  [**] [1:402:11] PROTOCOL-ICMP Destination Unreachable Port Unreachable [**] [Classification: Misc activity] [Priority: 3] {ICMP} 202.180.64.10 -> 192.168.1.170
07/18-17:46:49.818183  [**] [1:1394:15] INDICATOR-SHELLCODE x86 inc ecx NOOP [**] [Classification: Executable Code was Detected] [Priority: 1] {TCP} 116.66.162.254:80 -> 192.168.1.189:5673
12/16-13:19:25.353700  [**] [1:31978:5] OS-OTHER Bash CGI environment variable injection attempt [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 95.213.176.146:55241 -> 192.168.1.193:80
12/17-01:26:50.840063  [**] [1:44687:3] SERVER-WEBAPP Netgear DGN1000 series routers authentication bypass attempt [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 1.82.196.135:28768 -> 192.168.1.193:80
12/17-01:26:50.840063  [**] [1:44688:3] SERVER-WEBAPP Netgear DGN1000 series routers arbitrary command execution attempt [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 1.82.196.135:28768 -> 192.168.1.193:80

The updates to the automated blacklist scripts were a bit more involved; the major change is that I can retire the apache rewrite rules, as I now use apache custom error pages to create the dynamic blacklist rules.

The rewrite rules, as you recall, were to trap known ‘hacker attempts’ such as requests to phpmyadmin and redirect them to a cgi script that added an iptables drop rule for the requesting ip-address.

Rather than use apache rewrite rules, and bearing in mind that URL requests to non-existent resources will trigger a webserver 404 error (page not found), you could simply change the apache error page configuration for 404 errors to run the same script that was called by the rewrite rules.

Issues with that method (and with the rewrite rules method) are simply that it is unwise to call a CGI script with any data passed by a client, and while a script can do a lot of character translation to try to be safe it is better to use existing libraries; so my new method is to have a small php error page that sanitises the input and then invokes the original script, passing it the parameters it needs. An example of the php error page would be

<?php
// return the real 404 status to the client
header("HTTP/1.0 404 Not Found");
// sanitise the requested URI before it goes anywhere near a shell
$xx = escapeshellcmd( $_SERVER['REQUEST_URI'] );
$xx = escapeshellarg( $xx );   // escapeshellarg also wraps the value in single quotes
// REMOTE_ADDR and REQUEST_METHOD are populated by the webserver itself
$cmd = "/home/httpd/newsite/cgi-bin/error_404_handler.sh ".$xx." '".$_SERVER['REMOTE_ADDR']."' '".$_SERVER['REQUEST_METHOD']."'";
exec( $cmd, $output );
foreach( $output as &$xx ) {
   echo $xx;
}
?>
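
To hook that page in, the apache configuration only needs an ErrorDocument directive pointing at it (the path below is illustrative rather than my actual layout); the 400 handler mentioned later is wired in the same way:

ErrorDocument 404 /error_pages/error404.php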

Important notes on the script are that I had to modify the CGI script to detect whether parameters were being passed, for the following reasons (a sketch of that logic follows the list)

  • if parameters are passed to the CGI script they are used instead of the CGI environment variables, as when the script is called from PHP those environment variables will not be set
  • if parameters are passed to the CGI script the script assumes that the 404 response header has been set by PHP and does not set the header itself; if no parameters are passed the script sets the 404 header field in its text output, allowing the script to still be used by rewrite rules
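
A minimal sketch of that parameter-detection logic (an assumed structure only; my real script also does the iptables blacklisting work):

#!/bin/bash
# called either from the PHP 404 page (with arguments) or directly as a CGI script (no arguments)
if [ $# -gt 0 ]; then
   request_uri="$1"
   remote_addr="$2"
   request_method="$3"
   # PHP has already sent the 404 header, so do not send another
else
   request_uri="${REQUEST_URI}"
   remote_addr="${REMOTE_ADDR}"
   request_method="${REQUEST_METHOD}"
   echo "Status: 404 Not Found"
   echo "Content-type: text/html"
   echo ""
fi
# ...blacklist ${remote_addr} and write the response page here...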

It is also of course extremely important that you check every link on your website to make sure it does not refer to a non-existent page, or you will blacklist valid users. I used owasp-zap on the ‘kali’ distribution to search for all bad links, and also fixed a lot of redirects that used http where they should have used https; I was 100% confident there were no bad links before switching on the 404 error page as a blacklisting tool.

Additionally I also created a 400 error page handler (bad requests to the server), similar to the 404 page-not-found handler, to trap requests with illegal http headers and deliberate bad characters/requests in the URI that indicate hacking attempts. That is because requests such as the below cause server 400 errors rather than 404 page-not-found errors, and we do want to trap them

172.105.94.201 - - [05/Jan/2020:19:29:28 +1300] "\x16\x03\x01" 400 226 "-" "-"
172.105.94.201 - - [05/Jan/2020:19:29:31 +1300] "\xbd\xff\x9e\xffE\xff\x9e\xff\xbd\xff\x9e\xff\xa4\xff\x86\xff\xc4\xff\xbe\xff\xc7\xff\xdb\xff\xee\xffx\\d9\xff\xed\xff\xa4\xff\x9d\xff\xcf\xff\xd8\xff\xe5\xff\x04\xff\x12\xff0\xff\xb1\xff\xbd\xff\xe7\xff\xe2\xff\xdd\xff\xdc\xff\xde\xff\xc8\xff\xcc\xff\xbe\xff\xf8\xff&\xff\x01\xff\x0f\xff\xf5\xff\x06\xff\xff\xff\xf7\xff!\xff\xde\xff\x02\xff&\xff\x0c\xff\x01\xff\xf5\xff" 400 226 "-" "-"
223.155.162.30 - - [06/Jan/2020:01:44:17 +1300] "POST /HNAP1/ HTTP/1.0" 400 226 "-" "-"
81.213.225.47 - - [07/Jan/2020:02:28:20 +1300] "GET / HTTP/1.1" 400 226 "-" "-"
112.184.218.41 - - [07/Jan/2020:04:41:03 +1300] "GET / HTTP/1.1" 400 226 "-" "-"
185.100.87.248 - - [07/Jan/2020:17:46:19 +1300] "\x16\x03\x01\x02" 400 226 "-" "-"

Note that the 400 errors for root path URIs, which may seem normal, would occur when the client request contained non-compliant http headers or headers that indicate session spoofing is occurring.

And of course you need to review why ip-addresses are being blacklisted, to ensure you have not accidentally created a link to a non-existent page on the site that starts blacklisting valid users and search crawlers.

Posted in Automation, Unix | Comments Off on An update on a couple of old posts

Obtaining ethernet interface statistics under linux

The power surges and power outages hitting Titahi Bay in the last week adversely affected my main desktop’s ethernet card; all network connectivity died and the ethernet port status light became red instead of green.

Swapped out the network switch with a spare, still no network connectivity. Placed back the original switch; other machines connected to that switch still had connectivity, but not the desktop.

Rebooted the desktop and network connectivity was restored, but the ethernet port status light is still red instead of green. It is on the motherboard so having to replace it would be expensive. I have an apple USB network adapter lying around somewhere that I will hunt out, along with the linux drivers for it, as I have had that working under linux before as the interface for a triple boot linux/windoze/hackintosh (snow leopard) system, so I should be able to get that working; plus a few spare USB wireless adapters I have had running under linux before if needed, although I am short of USB ports.

Anyway, I wanted to check the interface stats to see if the error condition was causing packet retransmissions, and discovered the tools are not that useful.

The recommended tool is ethtool but that only provides partial statistics as seen below.


[root@phoenix ~]# ethtool -S enp3s0
NIC statistics:
tx_packets: 9581
rx_packets: 8774
tx_errors: 0
rx_errors: 0
rx_missed: 0
align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
unicast: 8496
broadcast: 55
multicast: 223
tx_aborted: 0
tx_underrun: 0

There are a lot more statistics available; ethtool on linux (on Fedora anyway), as seen above, only reads some of them.

All the ethernet statistics available on a Linux system are in the directory /sys/class/net/xxxx/statistics, where xxxx is the interface name. For example, for my interface name of enp3s0 we have all these files

[root@phoenix ~]# ls /sys/class/net/enp3s0/statistics
collisions rx_crc_errors rx_frame_errors rx_over_errors tx_carrier_errors tx_fifo_errors
multicast rx_dropped rx_length_errors rx_packets tx_compressed tx_heartbeat_errors
rx_bytes rx_errors rx_missed_errors tx_aborted_errors tx_dropped tx_packets
rx_compressed rx_fifo_errors rx_nohandler tx_bytes tx_errors tx_window_errors

So rather than use ethtool to obtain statistics it is better to use a script to check all the files, as below. Obviously for my own use I have expanded the script to check that a parameter is provided and that the interface exists, but the script below works and serves as a working example.


#!/bin/bash
ifname="$1"
ls /sys/class/net/${ifname}/statistics | while read fname
do
   data=`cat /sys/class/net/${ifname}/statistics/${fname}`
   echo "${fname}: ${data}"
done

The script basically returns all stats…

[root@phoenix ~]# ls /sys/class/net/enp3s0/statistics | while read x
do
   data=`cat /sys/class/net/enp3s0/statistics/${x}`
   echo "${x}: ${data}"
done
collisions: 0
multicast: 1506
rx_bytes: 13439448
rx_compressed: 0
rx_crc_errors: 0
rx_dropped: 0
rx_errors: 0
rx_fifo_errors: 0
rx_frame_errors: 0
rx_length_errors: 0
rx_missed_errors: 0
rx_nohandler: 0
rx_over_errors: 0
rx_packets: 29130
tx_aborted_errors: 0
tx_bytes: 4313586
tx_carrier_errors: 0
tx_compressed: 0
tx_dropped: 0
tx_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_packets: 31090
tx_window_errors: 0
[root@phoenix ~]#

Anyway, you now know the easiest way to get ethernet statistics in Linux.

Obviously for real-time traffic throughput you still need iftop.
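
For example, using the interface name from above:

iftop -i enp3s0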

Posted in Unix | Comments Off on Obtaining ethernet interface statistics under linux

An update on my conversion from Fedora30 to CentOS7; Bugzilla

As mentioned in my earlier post on converting my webserver from Fedora30 to CentOS7, the conversion was fairly painless.

I have found one new issue since then, which is that the version of mariadb shipped with CentOS7 does not support features needed by the latest version of Bugzilla.

On Fedora30 I was using bugzilla-5.0.6 installed from the Fedora repositories; and it seems extremely lucky I did not bother to update the database schema as I should have (it was upgraded by a ‘dnf update -y’ so I had missed that it had been updated, luckily).

Bugzilla does not exist in the CentOS7 repositories, but all versions are available from http://ftp.mozilla.org/pub/webtools/, including 5.0.6. Which unfortunately is not usable on CentOS7.

As mentioned in my earlier post, all the mariadb databases from my Fedora30 system had been dumped and loaded into the CentOS7 system without issues, so I have an existing database I wish to keep the contents of.

Trying to upgrade the database to bugzilla-5.0.6 from the mozilla.org website results in the following errors.

Converting table bugs_fulltext... DBD::mysql::db do failed: The used table type doesn't support FULLTEXT indexes [for Statement "ALTER TABLE bugs_fulltext ENGINE = InnoDB"] at Bugzilla/DB/Mysql.pm line 391,  line 747.
	Bugzilla::DB::Mysql::bz_setup_database('Bugzilla::DB::Mysql=HASH(0x64ecdc0)') called at ./checksetup.pl line 123

Being rather bloody-minded at this point I took a dump of that database, then dropped the existing database, created a new empty one and tried checksetup.pl again, as it should create all the tables if none exist. And again I got errors. If it cannot be installed as a fresh install there is no hope of upgrading.

Adding new table bz_schema...
Initializing bz_schema...
Creating tables...
DBD::mysql::db do failed: The used table type doesn't support FULLTEXT indexes [for Statement "CREATE FULLTEXT INDEX `bugs_fulltext_short_desc_idx` ON `bugs_fulltext` (short_desc)"] at Bugzilla/DB.pm line 848,  line 747.
	Bugzilla::DB::_bz_add_table_raw('Bugzilla::DB::Mysql=HASH(0x5bddfd0)', 'bugs_fulltext', 'HASH(0x5d74098)') called at Bugzilla/DB.pm line 809
	Bugzilla::DB::bz_add_table('Bugzilla::DB::Mysql=HASH(0x5bddfd0)', 'bugs_fulltext', 'HASH(0x5d74098)') called at Bugzilla/DB.pm line 518
	Bugzilla::DB::bz_setup_database('Bugzilla::DB::Mysql=HASH(0x5bddfd0)') called at Bugzilla/DB/Mysql.pm line 575
	Bugzilla::DB::Mysql::bz_setup_database('Bugzilla::DB::Mysql=HASH(0x5bddfd0)') called at ./checksetup.pl line 123
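
The underlying cause appears to be the MariaDB version rather than Bugzilla itself: CentOS7 ships MariaDB 5.5, and as far as I know FULLTEXT indexes on InnoDB tables need MariaDB 10.0.5 or later. It is easy enough to confirm what you are running:

mysql -e "SELECT VERSION();"
# a stock CentOS7 install reports a 5.5.x-MariaDB version, too old for FULLTEXT indexes on InnoDB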

So I reloaded my database dump back into mariadb and settled on the version of Bugzilla which does work on CentOS7, which was the bugzilla-5.0 release; as I had not updated the database schema on F30 the database was still usable and it is working OK on CentOS7.

As far as I am aware that is the last issue to fix in my migration from F30 to C7.

Annoyingly the admin dashboard did keep saying there is an update available to new version 5.0.6; so I disabled that message as there is no point in having it.

So for CentOS7 I cannot upgrade Bugzilla beyond 5.0… but I am still stuck on CentOS7, as I will only look at CentOS8 when the docker maintainers have a release of Docker that will work on CentOS8 (or Fedora, as my only reason to move to CentOS7 was that docker is not supported from F31 onward). As C7 is not EOL until 2024 I can wait.

Posted in Unix | Comments Off on An update on my conversion from Fedora30 to CentOS7; Bugzilla

Off on a tangent again, WebAssembly

WebAssembly now has official specifications published by W3C, highlighted by this article on the register site.

WebAssembly is a standard developed by the W3C WebAssembly Working Group. Today all modern browsers (Chrome, Firefox, Safari, Edge, mobile browsers) and Node.js support it.

Emscripten, which in the past was used to convert C/C++ source code to asm.js, now also supports converting C/C++ code to WebAssembly. It also has built-in support for Makefiles (using ’emmake make’ rather than the normal ‘make’), making porting code easy. Plus it also seems to have inbuilt support for OpenGL, so C apps using that can be ported easily.

WebAssembly should allow code to run at near-native speed in a user’s browser, and while it is unlikely to replace existing complex applications I expect most new applications will be written in it. It does not replace javascript yet but interacts with it, allowing both tools to be used.

As all of the links referenced in this post have different locations on github from which to download the Emscripten utilities, I suggest a google search to find the latest; at the time of writing this post the latest installation steps are

git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
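
Once installed, a quick hedged test that the toolchain works (the file names are just examples):

cat << 'EOF' > hello.c
#include <stdio.h>
int main(void) { printf("hello wasm\n"); return 0; }
EOF
emcc hello.c -o hello.html    # emits hello.html plus the hello.js and hello.wasm it loads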

Sites with good WebAssembly examples

There are lots of posts available on using WebAssembly, these are a few good references.
Full documentation https://developer.mozilla.org/en-US/docs/WebAssembly

An article on using existing C code with WebAssembly (including using the virtual filesystem) at https://www.smashingmagazine.com/2019/04/webassembly-speed-web-app/

There is this very detailed post on how Ross Smith ported the C++ game Funky Karts to WebAssembly, including user input and audio. It also touches on the Makefile syntax needed and covers the issues he had and how they were resolved. As it was written in 2017, WebAssembly has progressed quite a bit since then.
https://www.rossis.red/wasm.html

Another useful reference would be this post, which covers getting around issues using pointers and arrays between javascript and WebAssembly modules: https://medium.com/@kestrelm/creaturepack-high-performance-2d-webgl-character-animation-with-webassembly-72c436bec86c

I suggest you google a few examples yourselves; there are examples out there of full-blown monolithic native desktop C/C++ applications that have been pretty much just picked up and dropped into a web browser.

Why was this post titled off on a tangent again ?

It is something else I want to look into, even though I do not have the time. Plus I need an application to port; although I will probably knock up a few quick gtk applications with menu bars and input areas to see if they port (as I really don’t want to also start using opengl, as each distro ships that differently so it would be a pain [fedora31 for example doesn’t even have opengl, replacing it with mingw]).

So it is added to my todo list along with android app programming (which is on a temporary halt as I want to ditch the java stuff I started and learn to use Kotlin, but I got sidetracked by looking at the responsive apps google has been pushing in preference to native apps, and slightly lost motivation when I provided the functionality of the useful app I wanted to create in under 5 mins of coding on my website as a server based app).

So, it is another tangent. Just adding to the list of things I want to learn.

Posted in personal | Comments Off on Off on a tangent again, WebAssembly

An alternative to git, fossil a quick review

First a disclaimer: after reviewing fossil I will be remaining with git as my source control system. From a command line perspective it provides no benefits to me over git, and while it can sync changes made in fossil to a remote git (i.e. github) repository it cannot detect/sync changes made to the remote git repository, so it does not synchronise the other way. As I use multiple machines to work with my github repository, plus require github as the ‘offsite backup’ for my source code, that makes it not suitable for me.

If you intend to replace git completely and do not use github or any local git repository it can provide you the same functionality as git, in that it also is a distributed version control system that allows pulling and pushing against remote fossil repositories. One advantage over git is that it is simple to create your own remote repository (in theory); it does however not have any external free hosting service such as a github equivalent, so you must manage your own repositories (and will probably lose everything if you are just using it for small development projects and have no external backup source).

It should also be noted that fossil runs on multiple platforms whereas git is primarily a Unix/Linux tool, so if you are developing on multiple platforms you could store all projects being managed on a single webserver hosting all the project repositories.

For migration it is possible to convert project repositories between fossil and git formats with simple export/import commands; I have an example of moving a git application to fossil later in the post.

The fossil application and documentation is available at https://fossil-scm.org/fossil/doc/trunk/www/index.wiki.

It should be noted that fossil provides many additional features over git on a per-project basis, such as a WIKI and problem ticketing system for each individual fossil project. While this may be of benefit if you only have one application to manage, in an environment where there is more than one application an external global ticketing system is obviously a better option; and I would assume all WIKI content would be lost if you converted from fossil to git, although if you chose to stick with fossil it is obviously useful.

It should also be noted that I have been completely unable to get the ‘fossil push’ operation to work.

Interestingly, if you are not at all interested in source code management but are looking for a help-desk type problem system, fossil’s built-in ticketing system, user forum, wiki and tech note features, along with its requirement that users be specifically added for update access, may make it a perfectly useful problem management system.

The major differences between git and fossil

A major difference between git and fossil is that git uses the host filesystem for file archiving and versioning, while fossil uses an SQLite database for each project, as fossil was written to support SQLite development. While some people may say this makes it easier to move development from one machine to another simply by scp’ing the one SQLite database file containing the project, I see no real benefit over a simple ‘git fast-export’, scp and import to move a git project.

Another major difference between git and fossil is that git is pretty much open on who can make changes available, encouraging contributors; fossil works in reverse, permitting only trusted users to make changes, so for each project you must create/authorise users.

Fossil also provides on a per-project basis a WIKI, ticketing system, user forum and tech note features, all contained within the same fossil binary that provides the command line interface to the repositories.

The basic source versioning functions

There will be a learning curve in switching from git to fossil as, while the commands are similar, they are not identical; so git users will be extremely frustrated in switching to fossil as commands ‘almost work’ but need tweaking.

Also frustrating is that fossil has an autosync feature that will attempt to synchronise commits with the upstream repository whenever a local commit is made. While this is a good idea, as it ensures everybody is working with the latest merged copy, it will cause commit errors for users if they have not been configured properly, as seen below.

Autosync:  http://localhost/cgi-bin/fossil.sh/mvs38j_utils
Round-trips: 1   Artifacts sent: 0  received: 0
Pull done, sent: 439  received: 824  ip: 127.0.0.1
empty check-in comment.  continue (y/N)? y
New_Version: f691c302f1b2b0dedfd4cd60bd91a965dc83ce84aaea86e705b24adcb7887cd9
Autosync:  http://localhost/cgi-bin/fossil.sh/mvs38j_utils
Round-trips: 1   Artifacts sent: 2  received: 0
Sync done, sent: 1142  received: 859  ip: 127.0.0.1
Warning: The check-in was successful and is saved locally but you
         are not authorized to push the changes back to the server
         at http://localhost/cgi-bin/fossil.sh/mvs38j_utils

The issue with remote authentication to the fossil remote repository is mentioned quite often in this post so read on.

Both git and fossil support the normal versioning and branching features needed by a version control system. For use as a local machine versioning application they provide similar features.

Remote repositories

Obviously a source management system these days is only useful if source being worked on by multiple people can be consolidated in a central repository.

Unlike the complexity of git, it is simple to create a remote repository using fossil: if you have a webserver running already it can be easily configured to run the fossil binary as a CGI interface to provide access to one or multiple repositories. The fossil binary can also run as a server to provide remote access to a single repository.

Note: for local use you can also use the ‘fossil ui’ command, which will launch a web browser to access the web features for a repository on the local machine; if your local machine is running a GUI desktop of course, which most development environments will not be.

I chose the CGI method as I had a working apache webserver and it allows multiple projects to be available. An example of how I set that up is at the end of this post.

The main difference between running fossil as a server (or as a CGI script referring to a single repository) and my chosen method of using CGI to provide multiple repositories is simply that a project name must be appended to the remote URL. In the clone example below, if the fossil server or a CGI script referring to a single repository is used you would use the first command; if using CGI and specifying a path to multiple repositories you would use the second command to identify the repository.

fossil clone http://localhost/cgi-bin/fossil.sh mvs38j_utils.fossil
fossil clone http://localhost/cgi-bin/fossil.sh/mvs38j_utils mvs38j_utils.fossil

Clone a repository

One thing is important to note when cloning a repository: the server id and the admin user password are changed for the local copy.

[mark@vosprey3 temp]$ /home/fossil/fossil clone http://localhost/cgi-bin/fossil.sh/mvs38j_utils mvs38j_utils.fossil
Round-trips: 2   Artifacts sent: 0  received: 181
Clone done, sent: 548  received: 652209  ip: 127.0.0.1
Rebuilding repository meta-data...
  100.0% complete...
Extra delta compression... 
Vacuuming the database... 
project-id: f51c618d56d51898d27f0f46225829bf32931c73
server-id:  6472f0d2048663b39577dddb4ddc8c70c35a3852
admin-user: mark (password is "fAXrZB4vdP")

Another thing to note is that the change seems to serve no purpose other than to obscure the actual password used on the fossil remote repository; a password is not needed to open, branch or commit to the local copy.

What is cloned is the SQLite database itself rather than the filesystem tree git users are used to. A ‘fossil open’ or ‘fossil branch’ of the local copy from a different work directory is needed to access the files within the repository.

[mark@vosprey3 temp]$ ls
mvs38j_utils.fossil
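
A quick sketch of opening that cloned database into a working directory (the directory names are just examples):

mkdir ~/mvs38j_work
cd ~/mvs38j_work
/home/fossil/fossil open ~/temp/mvs38j_utils.fossil    # checks the latest files out into the current directory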

Pull from the remote repository

You cannot ‘pull’ a remote repository unless you have cloned it to a local copy first. Creating a new empty project to pull into may also work, but I have not tried that; either way it is an extra step.

[mark@vosprey3 temp]$ /home/fossil/fossil pull http://localhost/cgi-bin/fossil.sh/mvs38j_utils -R mvs38j_utils.fossil
repository does not exist or is in an unreadable directory: mvs38j_utils.fossil

After a clone it is fine.

[mark@vosprey3 temp]$ /home/fossil/fossil clone http://localhost/cgi-bin/fossil.sh/mvs38j_utils mvs38j_utils.fossil
Round-trips: 2   Artifacts sent: 0  received: 181
Clone done, sent: 548  received: 652209  ip: 127.0.0.1
Rebuilding repository meta-data...
  100.0% complete...
Extra delta compression... 
Vacuuming the database... 
project-id: f51c618d56d51898d27f0f46225829bf32931c73
server-id:  e1f90ed7c925f8da14a0a8ce997dbe28eb46138b
admin-user: mark (password is "r4Fg2wbiTW")

[mark@vosprey3 temp]$ /home/fossil/fossil pull http://localhost/cgi-bin/fossil.sh/mvs38j_utils -R mvs38j_utils.fossil
Round-trips: 1   Artifacts sent: 0  received: 0
Pull done, sent: 322  received: 824  ip: 127.0.0.1
[mark@vosprey3 temp]$ 

Push to the remote repository

It does not appear to be possible to push to the source repository.

In the repository configuration on the remote server ‘Allow HTTP_AUTHENTICATION authentication’ has been set as permitted; however using no authority info, using the remote repository setup user/password, and using the local copy user/password all fail to push to the remote repository.

Also, in the user configuration on the remote server side my userid was given every privilege (every checkbox was checked).

[mark@vosprey3 temp]$ /home/fossil/fossil push http://localhost/cgi-bin/fossil.sh/mvs38j_utils -R mvs38j_utils.fossil
Round-trips: 1   Artifacts sent: 0  received: 0
Error: not authorized to write
Round-trips: 1   Artifacts sent: 0  received: 0
Push done, sent: 780  received: 391  ip: 127.0.0.1

[mark@vosprey3 temp]$ /home/fossil/fossil push http://localhost/cgi-bin/fossil.sh/mvs38j_utils -R mvs38j_utils.fossil -B mark:LhikpKwBeP
Round-trips: 1   Artifacts sent: 0  received: 0
Error: not authorized to write
Round-trips: 1   Artifacts sent: 0  received: 0
Push done, sent: 824  received: 391  ip: 127.0.0.1

[mark@vosprey3 temp]$ /home/fossil/fossil push http://localhost/cgi-bin/fossil.sh/mvs38j_utils -R mvs38j_utils.fossil -B mark:r4Fg2wbiTW
Round-trips: 1   Artifacts sent: 0  received: 0
Error: not authorized to write
Round-trips: 1   Artifacts sent: 0  received: 0
Push done, sent: 824  received: 391  ip: 127.0.0.1
[mark@vosprey3 temp]$

Obviously this is a show stopper. Just as obviously there must be a way to do it but the documentation is not clear.

Migrating a git project to a fossil project

That is covered quite well at https://fossil-scm.org/home/doc/trunk/www/inout.wiki.


Fossil has the ability to import and export repositories from and to Git. And since most other version control systems will also import/export from Git, that means that you can import/export a Fossil repository to most version control systems using Git as an intermediary.

Below is how simple it is: I exported one git project, scp’ed the output to a machine with fossil on it, and imported it. It is that simple to move a git project to a fossil project. Not only is the managed source imported, but also the full checkin/change history.

[mark@phoenix mvs38j_utils]$ git fast-export --all > /home/mark/mvs38j_utils.git.data
[mark@phoenix mvs38j_utils]$ scp /home/mark/mvs38j_utils.git.data mark@vosprey3:/home/mark
[mark@vosprey3 fossil]$ cat ~/mvs38j_utils.git.data | ./fossil import --git mvs38j_utils.fossil
Rebuilding repository meta-data...
  100.0% complete...
Vacuuming... ok
project-id: f51c618d56d51898d27f0f46225829bf32931c73
server-id:  24bc9ec6173188540a5598e64c2ce0d548177456
admin-user: mark (password is "LhikpKwBeP")
[mark@vosprey3 fossil]$ 

The admin-user/password combination can be used to log in to the fossil web interface, whether by ‘fossil ui’ if you are on a desktop or via the CGI scripts if you have fossil on a headless webserver, to perform further customisation for the project. This would be such things as creating a home page for the application (which is a WIKI page with the same name as your project), customising server settings and users etc.

Integrating fossil into an existing WebServer

I created a new directory specifically for fossil as /home/fossil. I did not create a new fossil user as of course it is the apache user that needs access to the files; and the directory could be placed anywhere apache has full access to. It should also be noted that any users on the server who need to work with the project on the server itself also need full access. I also placed the fossil binary in that directory.

Using /home/fossil as the directory, this is how I set it up under apache. Note that xxx.fossil files need to be created for projects in that directory, whether as empty fossil-created projects or projects imported from git as discussed earlier.

mkdir /home/fossil
chmod 777 /home/fossil
touch /home/fossil/fossil_errors.log
chmod 666 /home/fossil/fossil_errors.log
cat << EOF > /var/www/cgi-bin/fossil.sh
#!/home/fossil/fossil
directory: /home/fossil
#repository: /home/fossil/mvs38j_utils.fossil
errorlog: /home/fossil/fossil_errors.log
notfound: http://mdickinson.dyndns.org/error_pages/no_fossil_repo.html
repolist
timeout: 120
setenv: SQLITE_TMPDIR=/var/tmp
setenv: TEMP=/var/tmp
setenv: FOSSIL_HOME=/home/fossil
EOF
chmod 755 /var/www/cgi-bin/fossil.sh

Key values to note are ‘repository’ plus the ‘directory’ and ‘repolist’ combination.

If ‘repository’ is used, that is the only repository available from this server, which is obviously fairly limiting. If the URL http://your.web.site/cgi-bin/fossil.sh is used that repository is presented, and that is also the URL for clone/pull/push to that one database.

However if ‘directory’ is used you may place many xxx.fossil databases in the directory and access them by name, if you know the name and use a URL including the name such as http://localhost/cgi-bin/fossil.sh/mvs38j_utils; browsing to the URL without a database name would give an error. However if the ‘repolist’ value is also set then browsing to the URL without a database name gives you a list of available databases to select from.
For clone/pull/push you also need to provide the full database name if using ‘directory’ to host multiple repositories rather than ‘repository’ to support only one repository.

Additional features fossil provides via its web interface

To be useful fossil will need to provide the web interface via the CGI script setup so multiple users can use the features. While I believe they would also be available via the ‘fossil ui’ command, that is for single user use; and as I am testing fossil on a headless server (no gui) I cannot test that.

Via the web interface that comes bundled with the fossil binary you also have access to a project-specific WIKI, ticketing system, user forum and tech notes. It also provides nicely formatted ‘timelines’ of changes made throughout the application’s life.

All those features are specific to the project you are working with and part of the SQLite database for the project.

  • If you are working on more than one project it would therefore make sense to use, at a minimum, a separate ticketing system
  • or if you are a small shop you could use a fossil project as nothing but a ticketing system, as the WIKI, tech notes and forum would provide additional benefits for that

Summary

fossil can replace all the git functions, plus it provides additional features such as a WIKI, forum and ticketing system for each project. It is useful for sites that have their own source code repository structure and do not rely on external repositories such as github or gitlab.

If you already use github or gitlab your choices are to convert the projects to fossil databases and move everything in-house, or continue to use git. I personally push to github from multiple sources, so fossil is of no use to me as github must be the authoritative source rather than a local fossil repository in my environment.

Also, as noted many times in this post, I was unable to get the ‘push’ function to the remote fossil repository working at all. I am sure it is possible, but as I will not be proceeding down the fossil path I will not keep banging my head against it and will leave it up to you to try if you are interested in trying fossil.

Posted in Unix | Comments Off on An alternative to git, fossil a quick review

Using “screen” on Linux

The screen utility is available on most Linux systems, including Fedora, CentOS and RedHat.

The history of screen is that it was an essential tool in the old days of dial-up access to servers: terminal sessions running under screen could survive a session disconnection, and the user could just log back onto the server and re-attach to the screen session(s) without losing any work.

These days it is still extremely useful due to its ability to allow users to manually disconnect from screen sessions and re-attach to them at a later time. Very useful if you want lots of tasks happening but do not want to open lots of windows.

While there are terminal emulators out there now that allow split-screen windowing, the main advantage of screen over other tools is the ability of screen sessions to survive in detached mode regardless of what terrible occurrences may kill your terminal/ssh session.

One specific and very valid use is when you perform yum/dnf updates on a remote server via ssh. I am sure we have all hit the situation where an update causes a network reconfiguration (the network drops), killing the update midway through as the ssh session dies. If on the remote server you simply run “screen bash” and perform the update within the screen session, the network will still drop and kill the ssh session, but when the network comes back you can simply ssh back onto the remote server and use “screen -r” to reconnect to the screen session, which will still be performing the update quite happily.
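
For example (the session name is just an example):

ssh remoteserver
screen -S updates bash      # a named session is easy to find again
dnf update -y               # the network drops and the ssh session dies, the update keeps running
# ...ssh back on once the network returns...
screen -ls                  # list the detached sessions
screen -r updates           # re-attach and watch the update finish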

This is because the default behaviour of screen is that autodetach defaults to on, so it handles an attached client suddenly dying quite happily; while that can be changed in /etc/screenrc, just do not change it.

But I personally find screen most useful in application startup scripts.

There are still applications in the wild today that need to be started from a terminal, either for logging or so that command input can be provided to them. Some of these provide startup scripts that may redirect output to a log file and input from a pipe file to work around the issue, when it would be far simpler to just start the application in a screen session.

It should be noted that applications that provide scripts that do all that messy redirection do so for portability, as not all *nix operating systems provide screen; for example AIX has a screen command that unfortunately does something completely different, so screen cannot be used on that OS, and the vendor might like their application to run on AIX.

However all Linux systems provide screen, and the many features it can provide in startup scripts for applications that require terminals seem to be overlooked.

An important thing to note within this post is that where I use CTRL-a I am referring to the Control key and ‘a’ key combination; do not type the ‘-’, it is only written that way because most manuals refer to key combinations like that, so it will be familiar to most readers. What screen provides for scripts is

  • screen sessions can be ‘named’, allowing them to be easily identified and re-attached to
  • a ‘real’ terminal session, as far as the application is concerned it has been started in an interactive terminal
  • the screen session can be detached (become a daemon/background task) at the time it is created, it does not have to be manually detached, and it will continue to be an active terminal for the application
  • and of course at any time you can manually re-attach to any of the screen sessions for a full terminal experience in issuing commands and viewing output messages
  • you can also kill screen sessions by name should you not want to bother reconnecting to them to stop them, useful for shutdown scripts
  • despite what the ‘man page’ says, I have never been able to get it to accept commands passed to a named screen session via the -X -S flags

Also incredibly useful uses for screen outside of scripts are

  • At any time you can display a list of active screen sessions for the userid you are logged on as using screen -ls making it easy to re-attach to sessions
  • screen sessions survive any fatal error that affects the terminal/ssh session that started them, so sensitive/critical commands you do not want interrupted should be run in screen sessions
  • while the root user cannot list another user's screen sessions, root can connect to another user's screen session with screen -r sessionowner/[pid.tty.host] if the user has manually set the screen session to multi-user mode (not the default) and provided the session name. This is very useful for debugging user issues, as exactly what the user is seeing can be seen by the additional root connection to that screen session, which can also issue commands as the user owning the screen session. However to allow root to connect the user would need to, within the screen session, type CTRL-a:multiuser on followed by CTRL-a:acladd root to make the session multi-user and allow root to connect; so it is not a security risk, as the user is in full control of access
  • a single screen session can with “ctrl-a-c” create new windows that are also full screen sessions, which can in the simplest form be tabbed between using “ctrl-a-[space]”. This is useful for those occasions where the only connection to the server may be the console: when troubleshooting you can just ‘tab’ between windows using “ctrl-a-[space]” to switch between a window constantly tailing a log file and a window issuing commands, for example; whereas with a normal console session only one thing at a time can be done

Example 1: multiple screen sessions

A simple example of using screen in a script is shown below. The “-t” sets a title for the session window and the “-S” is the name of the session. The “-d -m” flags together indicate the session is to run detached. The last argument is simply the program the screen session is to run.


#!/bin/bash
screen -t bash_shell1 -S bash_shell1 -d -m bash
screen -t bash_shell2 -S bash_shell2 -d -m top

At this point, when the script has finished running, screen -ls would show you have two detached sessions; you could re-attach to the first session with “screen -r bash_shell1” and find a normal shell session, use ctrl-a-d to detach, then re-attach to the second session with “screen -r bash_shell2” and be connected to a session with ‘top’ running quite happily.

It should be noted that when a program started by screen ends, the screen session ends; for example using ‘q’ to exit top or ‘exit’ to leave the bash shell running under screen will end the screen session you are connected to. The correct way to detach, if you intend to leave the session running, is to use ctrl-a-d. Obviously screen is most useful for interactive use if you start bash in each one, but for startup scripts you can run any command that needs a terminal in immediate detached mode.

screen is obviously most useful in startup scripts where you have applications that need a real terminal (or several of them).
I personally found the benefits of screen in writing a script to start turnkey3 under hercules, which needed a terminal for hercules, a telnet session for the hardcopy device, and a couple of 3270 sessions for consoles (I used c3270 with scriptports to pipe the console commands in to start the system); on a headless server doing that manually was a pain, so scripting was essential, and only screen (in this case combined with being able to issue console commands via scriptports to c3270) provides the functions needed.

Example 2: one screen session, multiple windows

At a terminal prompt simply type “screen” to start a screen session running the default shell. Within that session use ctrl-a-c (control a c) to start another window within that screen session.
In the second window that you just created run “top”.
Use ctrl-a-[space] (control a spacebar) to switch to the first window.
“screen -ls” will show only one screen session running, the one you are attached to.
Use ctrl-a-[space] (control a spacebar) to switch back to the second window and use “q” to quit top, then “exit” to close that window.
You will be back at the first window. “screen -ls” will show a screen session attached to of course; use “exit” to close the last window and end the screen session.

You can use ctrl-a-c up to nine times as needed within a screen session to create as many windows [0-9] as you need, you can switch to them by number with “ctrl-a-N” where N is the number of the window. As you can see being able to run multiple windows via a console session which would normally only have the one command prompt is invaluable for troubleshooting; as long as when you are done you remember to exit all screen windows.

Example 3: Sharing a screen session

It is possible to have multiple users attached to a screen session, although this is fully under the control of the user that starts the screen session. The obvious benefit of this is that if a user is having difficulty doing something in a terminal session, such as starting an application that keeps failing, a support person or sysadmin can share their screen session to see exactly what they are doing, and by default also be able to issue corrective commands themselves.

To achieve this a user starts a screen session as normal and makes a note of the session id.

[mark@phoenix posts]$ screen bash
[mark@phoenix posts]$ screen -ls
There is a screen on:
        17349.pts-3.phoenix     (Attached)
1 Socket in /run/screen/S-mark.

At this point if the screen id (17349.pts-3.phoenix) was provided to an admin then using the ‘root’ userid the command ‘screen -r username/17349.pts-3.phoenix’ would produce an error that the session is not shared but private.

[root@phoenix ~]# screen -r mark/17349.pts-3.phoenix
There is a screen on:
	17349.pts-3.phoenix	(Private)
There is no screen to be attached matching 17349.pts-3.phoenix.

It is entirely up to the user to permit the session to be shared, which they can do from within their screen session with the key combination and command CTRL-a:multiuser on, which changes the ‘root’ user’s connection error message to

[root@phoenix ~]# screen -r mark/17349.pts-3.phoenix
Access to session denied.

This is because the user owning the screen session must explicitly define who can access the screen session, which they then do in their screen session with the key combination and command of CTRL-a:acladd root, after which the ‘root’ user can connect to the user’s screen session.

By default an added ACL gives the added user full access to the screen session; there are options to limit that access, such as to read-only if needed, which you can find in the ‘man screen’ documentation.

One important thing to note is that if any of the connected users with write access to the session types ‘exit’ into the screen session, the session will end; users need to use the normal CTRL-a d combination to detach from the session if they want to leave it running.

Also note that a ‘screen -ls’ will show the screen session is in multiuser mode but will not show all the users attached to it.

[mark@phoenix posts]$ screen -ls
There is a screen on:
        17349.pts-3.phoenix     (Multi, attached)
1 Socket in /run/screen/S-mark.

You will have noticed that in my example I used the ‘root’ user as the user to connect to another user’s screen session; it is not required to use the root user. Any user that has read/write access to the screen session’s socket file can be permitted to attach to that user’s screen session. However it is more practical to use the ‘root’ user, as having to ask the user to chmod their socket file could prove difficult: the ‘man page’ for screen lists many locations the socket file could be created in, and in Fedora30 it is not in any of those locations, making that impossible. So on Fedora30, as ‘screen’ knows where to find the socket file and ‘root’ of course has access to it, the root user needs to be used.

Issues with using screen in startup scripts on systems that use systemd

Screen was extremely useful in the old SYSV /etc/init.d startup script days, where a startup script would start a few processes and then exit, leaving the processes it started running. systemd does things differently.

If your startup script is still managed by chkconfig and lives in the old SYSV init.d filesystem directories, systemd systems (Fedora 30 certainly) will at boot time map it to a systemd service. For example /etc/rc.d/init.d/my_script enabled by chkconfig can be displayed with systemctl status my_script.service, and even though it has run it will show as stopped and no processes started by it will be running anymore.

The reason for that is that systemd by default expects the entire application service to be encapsulated, and when the main service script stops running systemd will also stop all processes started by that main service script; by design, to ensure a stopped service is cleaned up. So previously used init.d scripts designed to start things and exit cannot be used by systemd without modification, as systemd startup commands cannot be permitted to exit.

There are ways around that which I will cover with a later post on systemd, as I don’t wish this post on screen to become a systemd tutorial.

Posted in Unix | Comments Off on Using “screen” on Linux

Hackers, PCs of infected users, or researchers ?

There is an annoying amount of rubbish traffic to my website and below is a selected (grep’ed) portion of it.

The documentation URL logged describes the masscan tool as similar to nmap; its purpose is to find open ports on internet sites.

[root@vosprey3 httpd]# grep masscan access_log
163.172.47.200 - - [01/Dec/2019:06:16:45 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
80.241.221.67 - - [01/Dec/2019:07:12:01 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
159.65.11.106 - - [01/Dec/2019:08:08:35 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
149.129.243.159 - - [01/Dec/2019:11:40:56 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
173.249.49.151 - - [01/Dec/2019:13:54:12 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
79.143.188.161 - - [01/Dec/2019:17:58:22 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
146.196.55.181 - - [01/Dec/2019:20:09:34 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
159.65.187.159 - - [01/Dec/2019:21:18:40 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
202.168.64.24 - - [02/Dec/2019:00:35:20 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
206.189.237.232 - - [02/Dec/2019:01:26:36 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
173.249.51.194 - - [02/Dec/2019:09:43:41 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
52.6.12.150 - - [02/Dec/2019:11:34:08 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
167.99.40.21 - - [02/Dec/2019:12:29:29 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
167.99.40.21 - - [02/Dec/2019:12:29:35 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
138.68.247.104 - - [02/Dec/2019:18:51:55 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
207.180.220.8 - - [02/Dec/2019:22:26:47 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
142.93.187.70 - - [02/Dec/2019:22:35:14 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
5.189.188.207 - - [02/Dec/2019:23:57:36 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
5.189.134.236 - - [03/Dec/2019:05:25:31 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
165.227.4.106 - - [03/Dec/2019:06:22:25 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
167.99.130.208 - - [03/Dec/2019:06:24:38 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
207.180.224.136 - - [03/Dec/2019:08:07:02 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
5.189.162.164 - - [03/Dec/2019:09:43:27 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
207.180.213.201 - - [03/Dec/2019:11:50:21 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
51.38.239.33 - - [03/Dec/2019:12:14:56 +1300] "GET / HTTP/1.0" 200 4418 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"

Simply because the requests are coming from so many different ip-addresses it can be assumed they are from infected PCs or a hacker toolkit. Some may be requests from semi-legitimate port mapping sites like shodan, but I don’t want my ports mapped.

Interestingly my firewall rules are logging these, but for different reasons; the last three addresses look like timeout-outbound, incomplete-handshake-inbound and timeout-outbound respectively. All incomplete requests anyway, and all to port 80 (none to the https port 443 or any other open ports), so a rather selective scan.

Dec  3 09:43:34 vosprey3 kernel: DROPPED IN= OUT=ens3 SRC=192.168.1.193 DST=5.189.162.164 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=13052 DF PROTO=TCP SPT=80 DPT=61000 SEQ=1082931877 ACK=1097398385 WINDOW=29200 RES=0x00 ACK PSH FIN URGP=0
Dec  3 11:50:51 [localhost] kernel: ABORTED IN=ens3 OUT= MAC=52:54:00:38:ef:48:9c:d6:43:ab:90:a3:08:00 SRC=207.180.213.201 DST=192.168.1.193 LEN=40 TOS=0x00 PREC=0x20 TTL=241 ID=6478 PROTO=TCP SPT=61000 DPT=80 SEQ=2892097713 ACK=467644449 WINDOW=1200 RES=0x00 RST URGP=0
Dec  3 12:15:02 [localhost] kernel: DROPPED IN= OUT=ens3 SRC=192.168.1.193 DST=51.38.239.33 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=62338 DF PROTO=TCP SPT=80 DPT=61000 SEQ=2386804186 ACK=3821347452 WINDOW=29200 RES=0x00 ACK PSH FIN URGP=0

They are just port scanning requests, definitely not web-crawlers as they never request anything more than the / url. And as they only request the / url I cannot use apache rewrite rules to blacklist the ip-addresses in real-time.

I have added a check for those requests into my daily batch job that scans the access logs so the ip-addresses performing those scans will still be added automatically into my drop rules for misbehaving source-ips, if a little slower than real-time.
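
As a rough sketch of the idea (this is not my actual batch job; the access log path and the assumption that plain iptables DROP rules are in use are mine):

# pull the source addresses of masscan requests out of the web access log
# and add a DROP rule for any that are not already blocked
grep 'masscan' /var/log/httpd/access_log | awk '{print $1}' | sort -u | while read ip
do
   # -C checks whether the rule already exists, -I inserts it if it does not
   iptables -C INPUT -s "$ip" -j DROP 2>/dev/null || iptables -I INPUT -s "$ip" -j DROP
done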

Posted in Uncategorized | Comments Off on Hackers, PCs of infected users, or researchers ?

Globally changing URLs in WordPress posts

Over the decades my WordPress site has had a few different URLs, and I thought I had managed to keep them all up to date as I changed sites. However, using owasp-zap (under Kali) to test my site did manage to find five remaining URLs referring to an older site location.

Plus I wished to change all http:// references to https:// references, as that is the way the world is moving and I am using a letsencrypt certificate now so https works. Apart from just being a good idea, it will also stop all the alerts from owasp-zap about mixed-content pages, caused mainly by images that were embedded back before I used https; this will become a real issue in the future as modern browsers are beginning to refuse to serve embedded http content from https pages.

There are a lot of WordPress plugins that promise to perform global changes; surprisingly, two of the higher ranking ones just did not work at all for me.

Then I found the “Velvet Blues Update URLs” plugin. It only permits the changing of URLs, but that is all I need. And it just works.

Simply install the WordPress plugin “Velvet Blues Update URLs” and activate it. Under the Tools menu there is now an option to “Update URLs”.
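
For anyone who prefers the command line to a plugin, the wp-cli tool can do the same sort of global replacement; this is an alternative approach rather than what I used, and the URLs below are only placeholders:

# from the wordpress install directory, preview the change first
wp search-replace 'http://old.example.com' 'https://new.example.com' --dry-run
# then run it for real
wp search-replace 'http://old.example.com' 'https://new.example.com'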

Posted in personal | Comments Off on Globally changing URLs in WordPress posts

Converting a Fedora30 webserver to CentOS7

There are two supported versions of CentOS in the wild now; I exclude CentOS6 as that is pretty much at end of support, so we have only versions 7 and 8 to play with. Version 7 is EOL in 2024 and version 8 is EOL in 2029.

My reason for requiring CentOS was that Docker does not run out-of-the-box on Fedora31 so I was dubious about any future support for it there, and it was safer to move to a supported OS rather than just pray that the Docker team would bother with updating their packages for Fedora.

RHEL8 (and therefore CentOS8 also) has dropped support for docker within their repositories so I had to rule out CentOS8. I may revisit that if the Docker packaging teams ever create their own upstream repository to support RHEL8.

So this post is on installing CentOS7 and the required packages I needed to migrate my webserver from F30 to CentOS7.

The conversion was relatively painless.

  • all my custom C compiled binaries simply copied across from Fedora30 and ran on CentOS7, a great benefit as I can for now keep my dev machine on Fedora
  • mysqldump of databases on Fedora loaded into CentOS7 perfectly
  • all the PHP based applications could simply have their directories (and http conf.d entries as appropriate) copied to the new server and worked with the loaded database entries… after PHP was upgraded of course

The major steps needed for the conversion are documented below.

Issues with creating the virtual machine under Fedora31

My host machines are currently both on Fedora31, the ISO I was using was CentOS-7-x86_64-Everything-1503-01.iso.

There was a rather large issue in building the VM, but I got there in the end.

  • virt-manager does not allow configuration of IDE disks
  • the CentOS7 install requires IDE disks; it could not locate disks created by virt-manager as virtio (or sata)
  • to have IDE disks available requires the emulator /usr/bin/qemu-kvm but virt-manager on f31 only uses /usr/bin/qemu-system-x86_64

The resolution to this was to manually create a VM XML file (based on a CentOS7 VM I created a looong time ago when virt-manager did allow IDE, just updating things like name, uuid, disk name, mac address etc. to match values used in the failed CentOS7 attempt), precreate the qcow2 disk and simply ‘virsh define xx.xml’ to define the VM.
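
A rough sketch of those manual steps is below; the disk path, size and file names are just examples, and the XML lines shown are only the fragments of the hand-edited file relevant to the IDE disk and emulator:

# precreate the qcow2 disk the XML will reference
qemu-img create -f qcow2 /var/lib/libvirt/images/centos7.qcow2 20G

# relevant fragments of the hand-edited XML file (centos7.xml)
#    <emulator>/usr/bin/qemu-kvm</emulator>
#    <disk type='file' device='disk'>
#      <driver name='qemu' type='qcow2'/>
#      <source file='/var/lib/libvirt/images/centos7.qcow2'/>
#      <target dev='hda' bus='ide'/>
#    </disk>

# then define the VM from the edited XML
virsh define centos7.xml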

At that point I could use virt-manager to select the cdrom as the boot device and install CentOS7 without further problems. I was also able to check that CentOS7 had all the packages available that I would need to migrate my webserver from Fedora to CentOS. Remembering of course to use ‘yum -y install epel-release’.

Upgrading PHP to 7.3

CentOS7 shipped with php5.4 which is no longer supported, and most web applications will not even run on such an old version.

php7.4 is available for CentOS7, but I chose to use 7.3 to match what was already running on my existing webserver.

Rather than rewrite existing documentation, this external post on installing php 7.3 on CentOS7 covers all the steps needed. The procedure worked without error… although I did one additional step before following the procedure, which was to remove all the php5.4 packages installed by default with CentOS7.
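
That extra step was nothing more than removing the stock packages before adding the new repository; a minimal sketch (check what is listed before removing anything):

yum list installed 'php*'      # see what php5.4 packages the stock install put on
yum -y remove 'php*'           # remove them before installing php 7.3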

Installing Jetty

A “yum search jetty” did not find an all-encompassing package that would install Jetty, and I was not keen on just installing every package beginning with jetty.

Instead I followed the documentation on installing jetty on CentOS7 from the linuxhelp site. The procedure documented there worked without any problems.

The only additional step I had to do was move my apps from /var/lib/jetty/webapps (where F30 required them) to /opt/jetty/webapps where the manual install expects Jetty apps.
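
That move was nothing more complicated than something like the below; it assumes the apps had been copied across into the old F30 location first, and that the jetty user created by the manual install should own them:

mv /var/lib/jetty/webapps/* /opt/jetty/webapps/
chown -R jetty:jetty /opt/jetty/webapps    # assumes the manual install created a 'jetty' user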

The EFF Certbot

I chose to install the version of Certbot recommended for CentOS8, the best choice as I had upgraded php.

Again the documentation on installing certbot is easy to follow.

After downloading the certbot-auto file I chose to perform only the “certbot-auto --install-only -v” step, which worked without issues.

I have not yet tried the “certbot-auto renew” command to obtain new certificates as the ones I use are nowhere near their expiry date.

Puppet Agent

I use puppet for configuring my servers; installing the puppet agent was simply

rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
yum install puppet-agent
systemctl enable puppet

MariaDB Databases

The mariadb databases were simple to migrate as my webserver takes hourly database dumps.

On the new CentOS7 server it was simply a case of installing mariadb-server, starting the mariadb service, and running the secure setup script to get a nice clean setup.

Then it was just a matter of sourcing (\.) the latest dump file from the Fedora30 server. This completed with no errors and all users, databases, database grants etc. were in place and working; I tested this by logging onto all the apps that use the database to confirm the stored app userids worked and the apps behaved as expected.
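
A minimal sketch of those steps is below; the dump file name is only an example, and it assumes the latest hourly dump has already been copied across from the Fedora30 server:

yum -y install mariadb-server
systemctl enable mariadb
systemctl start mariadb
mysql_secure_installation

# then source the copied dump from within the mysql client
#   mysql -u root -p
#   MariaDB [(none)]> \. /home/mark/mariadb-dump-latest.sql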

PHP based applications

All the PHP based applications (including wordpress) I use were able to be simply migrated across to the new server as part of the httpd directory structure copy, with the following additional steps

  • webcollab: simply required “yum install php-mbstring”
  • owncloud: “yum install php-pecl-zip php-intl php-xml”, but see notes below

Owncloud server components do not exist in OS distribution repositories. You obtain the owncloud-files package from https://download.owncloud.org/download/repositories/production/owncloud/.

As I was installing onto CentOS7 I downloaded the rpm file for CentOS8 and manually installed it (rpm -i xxx.rpm), it matched the version used on Fedora30 so should work with the existing databases once the DNS entries for the server are changed so your clients point to it.

I chose to create a completely new database and re-install, as I have only one sync user (myself) so I could just re-add my credentials.

Snort intrusion detection system (IDS)

This was simply a case of “yum -y install libtool libpcap-devel libdnet-devel bison flex” then recompiling the DAQ and snort binaries.

As I installed using the default configuration files (mainly to allow me to document the changes) rather than just copy across my existing configurations there was also a bit of manual customisation needed as shown below.

Assuming you have downloaded the snort and daq static source from the snort website (and installed the packages listed two paragraphs up) the steps to get snort installed are below; note that I placed the sources in /home/mark/installs/snort so change that location to the one you used.

cd /home/mark/installs/snort/daq-2.0.6
./configure
make
make install
make clean

cd /home/mark/installs/snort/snort-2.9.12
./configure --enable-sourcefire --disable-open-appid
make
make install
make clean

mkdir /etc/snort
cd /etc/snort
cp /home/mark/installs/snort/snort-2.9.12/etc/* .

And you will need to add a user and group for snort; the group snort is added automatically when the user is added with system defaults, so simply “useradd snort”. Do that at this point as we will be changing filesystem permissions to this user in later steps.

Using the community rules (the free rule package for snort) requires quite a bit of editing of the snort.conf file. As you should edit it to define a trusted home network anyway, let’s edit the snort.conf file.

Rather than go into details, as this post is not about installing snort, below is a “diff” between the supplied snort.conf and my customised one ( < is my changes, > is the original line ).

< #ipvar HOME_NET any
< ipvar HOME_NET 192.168.1.0/24
> ipvar HOME_NET any
< #ipvar EXTERNAL_NET any
< ipvar EXTERNAL_NET !$HOME_NET
> ipvar EXTERNAL_NET any

< var RULE_PATH /etc/snort/community-rules
< var SO_RULE_PATH /etc/snort/so_rules
< var PREPROC_RULE_PATH /etc/snort/preproc_rules
> var RULE_PATH ../rules
> var SO_RULE_PATH ../so_rules
> var PREPROC_RULE_PATH ../preproc_rules

< var WHITE_LIST_PATH /etc/snort/community-rules
< var BLACK_LIST_PATH /etc/snort/community-rules
> var WHITE_LIST_PATH ../rules
> var BLACK_LIST_PATH ../rules

< include $RULE_PATH/community.rules
> include $RULE_PATH/local.rules

AND -------- comment out all following includes after the community.rules entry, as they are not provided with the community rules.

As mentioned above I use the community rules. To install those simply follow the below steps.


mkdir /tmp/snort
cd /tmp/snort
wget https://www.snort.org/rules/community
cd /etc/snort
tar -zxvf /tmp/snort/community
/bin/rm /tmp/snort/community
rmdir /tmp/snort

mkdir /usr/local/lib/snort_dynamicrules
touch /etc/snort/community-rules/black_list.rules
touch /etc/snort/community-rules/white_list.rules
mkdir /var/log/snort
chown snort:snort /var/log/snort

The rule-download commands above should be placed into a weekly cron job, along with a restart of snort, to ensure you keep the rules up to date.
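
As a sketch of what such a cron job might look like (the script path is just an example, and the last line assumes the startup/shutdown script created in the next step is hooked in as a ‘snort’ service; adjust to however you restart snort):

# /etc/cron.weekly/snort-rules   (example location)
#!/bin/bash
cd /tmp
wget -q -O community.tar.gz https://www.snort.org/rules/community
tar -zxf community.tar.gz -C /etc/snort
/bin/rm -f /tmp/community.tar.gz
systemctl restart snort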

You will need to create a startup/shutdown script. The start command is (replace ens3 with your interface name)

/usr/local/bin/snort -D -A fast -b -d -i ens3 -u snort -g snort -c /etc/snort/snort.conf -l /var/log/snort

Note: for testing run in the foreground with “/usr/local/bin/snort -A fast -b -d -i ens3 -u snort -g snort -c /etc/snort/snort.conf -l /var/log/snort” (again replacing ens3 with your interface name) so all errors are written to your terminal.
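
On a systemd system such as CentOS7 the startup/shutdown script can simply be a small unit file; a minimal sketch is below (the unit name is my own choice, and it assumes the same ens3 interface and paths used above):

# /etc/systemd/system/snort.service   (example)
[Unit]
Description=Snort IDS
Wants=network-online.target
After=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/bin/snort -D -A fast -b -d -i ens3 -u snort -g snort -c /etc/snort/snort.conf -l /var/log/snort

[Install]
WantedBy=multi-user.target

Enable and start it with “systemctl daemon-reload”, “systemctl enable snort” and “systemctl start snort”.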

Summary

Conversion from Fedora30 to CentOS7 is a fairly painless exercise. Most of the work needed I did in two days to get a fully working server. An additional three days was spent on customising/testing puppet rules for this CentOS7 system (as some packages are different to Fedora), testing bacula backups/restores, penetration testing etc; basically a lot of stuff most users do not need to do.

Was the conversion successful? You are viewing this post on the CentOS7 server, so yes :-).

Posted in Unix | Comments Off on Converting a Fedora30 webserver to CentOS7