Quick install of AWX (ansible-tower upstream) into Docker

AWX is the upstream open source project for ansible-tower. If you are interested in ansible-tower it is an obvious starting point, since it is what ansible-tower is built from. It is also free to use; not just free as in open source, but free in that you do not need a RedHat account to use it.

AWX now prefers to be installed into a Kubernetes environment using awx-operator; that makes it very hard to get workable in a home lab situation unless you want to spend a lot of time working out what the installation configuration files are doing and changing them to make them usable (ingress/egress/dns etc).

It is far simpler to get a full working environment using Docker, even though the Docker method has been deprecated. This post explains how to get it installed and working using Docker. There are customisations needed to make it useful for a home lab which are covered here; as you step through them you will understand why I chose Docker for my home lab, it would be a lot more work to make it usable under Kubernetes (even under MiniKube).

These instructions are valid as of 16th October 2021.

This post also assumes, or at least hopes, you have been using ansible from the command line quite happily already so you understand the last step on SSH keys :-).

But first, why would you want to install AWX ?. The only real reason is to get an idea of how ansible-tower works; there is certainly no benefit over the command line for a small organisation or your own home lab. It is actually a real step backward for a home lab, as you need a spare machine with at least 6GB RAM running either docker or kubernetes (even to get it to run a playbook with a simple shell ‘ls’ command). It also makes things harder because by default it expects all resources (EE containers, playbooks etc.) to be provided across the internet and never locally (not even from the local network)… although with a lot of effort you can change that, and some of the steps are covered in this post.

Yes it does have user access controls, organisations, groups etc. to limit who can do what using AWX or ansible-tower. It does not in any way stop anybody with half a clue from simply bypassing all that and using the command line.

However if you want to learn how to use ansible-tower without going anywhere near a RedHat subscription/licence AWX is the place to start.
And if that is what you are interested in then read on.

1. Follow the install instructions from the AWX github site

First get it up and running; that's simply a case of (almost) following the instructions for installing under Docker on the github site. The difference is do NOT use the “-b version” option to select the latest version; for 19.4.0 at least that pulls down a git clone with no branch head and simply doesn’t work, so omit the “-b” option and the instructions are simply…

git clone https://github.com/ansible/awx.git
cd awx
vi tools/docker-compose/inventory   # set pg_password,broadcast_websocket_secret,secret_key etc
dnf -y install ansible              # MUST be installed on the build host, used during the build
make docker-compose-build   # creates docker image quay.io/awx/awx_devel
make docker-compose COMPOSE_UP_OPTS=-d     # starts the awx, postgresql and redis containers detached

Wait until everything is up and running and then do the next steps
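
A quick way to check is to watch the containers come up; the container names below are what the docker-compose setup created for me, adjust them if yours differ:

docker ps --filter "name=tools_"     # the awx, postgres and redis containers should all show as Up
docker logs -f tools_awx_1           # watch the awx container log until the services settle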

docker exec tools_awx_1 make clean-ui ui-devel            # clean and build the UI
docker exec -ti tools_awx_1 awx-manage createsuperuser    # create the initial admin user, the default name of awx is recommended
                                                          # note: I ran the above twice to create a personal superuser id also
docker exec tools_awx_1 awx-manage create_preload_data    # optional: install the demo data (actually, it may already have been installed)

At the end of the installation you have a running AWX environment, but it has some major limitations we will be fixing below.

At this point you can point a web browser at port 8043 on the Docker host and check that the superuser logon you created works, but there is more work to do before going any further.

2. Customisations needed

Do NOT use the “make” command to start it again from this point, or it will destroy any customisations you make to the yaml file by re-creating it.
The install creates persistent docker volumes for the data so after the install we just need a custom script to stop and start the environment rather than using the Makefile.

Simply create a script as below

cat << 'EOF' > control_script.sh    # quote EOF so $1 is not expanded while the script is being written
#!/bin/bash
case "$1" in
	"start") cd awx_default
		docker-compose -f tools/docker-compose/_sources/docker-compose.yml up -d --remove-orphans
		;;
	"stop") cd awx_default
		docker-compose -f tools/docker-compose/_sources/docker-compose.yml down
		;;
	*) echo "Use start or stop"
		;;
esac
exit 0
EOF
chmod 755 control_script.sh

Note we are still using the generated yaml (yml) file, but by using the script rather than the “make” command it will not be overwritten.

As installed you cannot use “manual” playbooks from the filesystem, only playbooks from a remote source repository (which has its own problems, discussed in the next section). You may also want to use your own in-house modules.

To use manual playbooks you need a bind mount to the docker host filesystem, so “vi tools/docker-compose/_sources/docker-compose.yml“ and, in the volumes section, add an additional entry to the existing long list as below

     - "/opt/awx_projects:/var/lib/awx/projects"

Then “sudo mkdir /opt/awx_projects;sudo chmod 777 /opt/awx_projects”. Ok, you would probably chown to whatever userid is assigned to UID 1000 rather than set 777 but whatever works.

After restarting the containers you can now place manual playbooks in the /opt/awx_projects directory on the Docker host and AWX will be able to find and use them from the containers /var/lib/awx/projects directory. But don’t restart the containers yet :-).

Also under the projects directory you would normally have directories for individual projects to contain the playbooks for that project. If under each of those project directories you create a new directory named library you can place there any customised or user written modules you want to run in those playbooks (ie: if you have projects/debian and projects/centos you need projects/debian/library and projects/centos/library; the library directory is created within each individual project directory, not at the top level projects directory for AWX). Worth noting as that is the easiest way of implementing your own custom modules, as sketched below.
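
As a sketch using the hypothetical debian and centos projects mentioned above, the layout on the Docker host would be created like this:

mkdir -p /opt/awx_projects/debian/library /opt/awx_projects/centos/library
# playbooks go in the project directory, custom modules in its library sub-directory, for example
#   /opt/awx_projects/debian/site.yml
#   /opt/awx_projects/debian/library/my_custom_module.py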

The second issue is the resolution of host names. It will work fine if you only want playbooks from internet source repositories and only want to manage machines that are resolvable by public DNS servers; but what if you want to use local docker and source management repositories, and, more importantly, what if you need to resolve the host names of machines on your internal network so AWX can manage them ?.

On the local docker image repository side, at this point AWX will not use insecure private registries, which is a bit of a pain. To resolve your own internal hostnames, however, you just need to reconfigure Docker to use a DNS server that can resolve those hostnames. That is simple in Docker: “vi /etc/docker/daemon.json” and insert something like the below

{
  "insecure-registries" : [ "docker-local:5000" ],
  "dns" : [ "192.168.1.181" , "192.168.1.1" , "8.8.8.8" ]
}

The DNS list above contains one of my DNS servers (I use dnsmasq) that can resolve all my server hostnames, my router, and google DNS to resolve all the external sites. Customise the list for your environment.

Now at this point you will want to restart docker (to pick up the daemon.json changes) and the containers (for the new volume bind mapping), so assuming you created the control_script.sh above.

./control_script.sh stop
systemctl stop docker
systemctl start docker
./control_script.sh start

When the containers are running again “docker exec -ti tools_awx_1 /bin/sh” and try pinging one or more of your local servers to ensure name resolution is working.

3. SSH keys, credentials

If you have been using ansible normally from the command line before you will be aware that part of the implementation is to create SSH keys, the private key on the ansible ‘master’ server and the public key placed on every server to be managed. The good news is that you can just create a “credential” in AWX for the ansible user and paste that same private key into it.

When setting up command line ansible you had to ssh to every server to be managed from the ‘master’ and reply Y to the ssh host key prompt; AWX handles that prompt itself when it runs a job on any server that has not been connected to yet, which is the only time saver over the command line. Having said that you still need the public key on every server to be managed, so you need to set up a playbook to do that anyway (personally I use puppet for such things so the keys already existed on all my servers).
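
If you do want to push the key with ansible itself, a minimal sketch of such a playbook using the standard authorized_key module would be something like the below; the remote user name and key file path are assumptions for your own setup:

---
- hosts: all
  become: yes
  tasks:
    - name: install the AWX public key for the ansible user
      authorized_key:
        user: ansible                                             # the remote user AWX will connect as
        state: present
        key: "{{ lookup('file', '~/.ssh/id_ed25519_awx.pub') }}"  # assumed public key file name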

So for hosts to manage, add the new host to an inventory (maybe create some custom inventories first), then in AWX go to inventories->hosts->run command [the run command option is only available if you go into an inventory and list the hosts, not from the hosts page], select the ‘shell’ module and a simple command such as ‘ls’ to make sure it works (use the default awx-EE and the credentials you added the SSH key to). It should just work… although see below.

4. Performance compared to simple command line

The latest versions of AWX use “execution environments” to run commands. This involves spinning up a new container to run each command.

Great for isolation, bad for performance. A simple shell module “ls -la” command from the command line is done in seconds; from AWX it takes a few minutes to do the same simple command, as it needs to start an execution environment if one is not running (and even download it if there is a later version of the container image; tip: after the first pull of the container image change the pull policy to never).

Inventories, ansible hosts files

If you have been using ansible from the command line you probably have quite a few, or even one large ansible hosts file with lots of nice carefully worked out groupings.

This may be the time to split them out into separate inventories if you have not already done so, for example a hosts file for debian, one for centos, one for rocky, one for fedora etc., depending on how many entries you want to show up in a GUI display of an inventory.

Or you could just create one inventory and store the entire hosts file into that one inventory. By store of course I mean import.

Copy your current ansible hosts file from /etc/ansible (or as many different hosts files as you have, to load into separate inventories) to the tools_awx_1 container. Create an inventory in AWX for each hosts file, and use awx-manage to import the file. The inventory must be pre-created in AWX before trying to import.

For example I created an inventory called internal_machines, did a quick “docker exec -ti tools_awx_1 /bin/sh”, in the container did a cd to /var/tmp and “vi xxx”; I did a cat of my existing ansible hosts file with all its hosts and groupings, pasted it into the “vi xxx” file in the container session and saved it. Then simply…

awx-manage inventory_import --inventory-name internal_machines --source xxx

My ansible hosts file was in INI format (lots of [] sections rather than json or yaml); the import command will add all the hosts and groups to the inventory (and the hosts to the host entries of course).
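
For reference, an INI format hosts file of the kind the import understands is just the familiar command line layout; the host and group names below are made-up examples:

[debian_servers]
deb-web-01
deb-db-01

[centos_servers]
cent-app-01

[webservers:children]
debian_servers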

It is important to note that you should have run a test job (such as the ‘ls -la’ mentioned above) against a server you have manually added, in order to start at least one execution environment container first; otherwise the import job will time out waiting for the container to start and fail to import anything.

Issues

The default execution environment awx-ee:latest seems to want to download for every job even though I have the EE settings set to only pull if missing. A major issue with this is that a job will often fail with being unable to pull it from quay.io (‘failed to read blob’); repeatedly retrying will eventually work. The real major issue is not being able to use a local insecure registry to store a local copy (and even create custom EE images) for local use, which would immediately stop it being a major issue. And EEs don’t seem to share images even though they are identical, for example currently jobs run on the control-plane EE using awx-ee:latest with no problems but the AWX-EE EE using awx-ee:latest is trying to download it again.

Basically, unless it becomes really easy to decouple from the internet and use local container image registries, local git sources, local playbooks etc., it is pretty useless unless you are happy with a 2GB download every time you want to run a simple “ls” command. Expect the same issues with Ansible-Tower, which wants to use RedHat repositories; you need to decouple, or at least make your local sources of everything the first place searched.

Irritations

It is easy to mess up groupings when trying to manually add them, and if a job runs and ends with “no matching hosts found” that job is marked as a successful job, which I personally think should be a failure.
Under both “schedules” and “management jobs” only the supplied maintenance jobs are available and there doesn’t seem to be an easy way to add user schedules; it therefore seems pointless to go to all this work when the command line interface run via cron provides better automation.

AWX and its downstream ansible-tower provide good audit trails of which user defined to the tower application changed what and ran what within the tower, but there is of course no auditing within the tower of who changed what in the git playbooks (although git will record that), and of course manual playbooks on the filesystem are by their nature unaudited. Sooo, auditing is not really a good reason to use it. Also it has a large overhead, needing a minimum of 2 cpus and 4GB of memory (AWX does, ansible-tower needs more) just to run.

How to import custom modules I am still looking into as far as AWX goes; it is easy to use custom modules from the command line. As mentioned earlier in the post you can create a ‘library’ directory for each project; you could also inject into the docker run command the ansible environment variable that specifies the library search path and create another volume bind to that directory on your host, but really you would want your modules in the same git repository as your playbooks… although as this post has shown how to set up project playbooks on the local filesystem I suppose it’s not really relevant here.

Of course AWX/Tower allows creation of individual users/groups/projects/organisations… just like a well laid out filesystem structure and use of the unix users and groups files. However reporting on such a layout seems a bit tricky in AWX, where on a filesystem you could just use “tree” to display the projects structure and contents.

It is far easier to run ansible from the command line using filesystem based playbooks (my filesystem playbooks are git managed on the host, so no advantage in using AWX or ansible-tower there), which has absolutely no overhead requirements other than whatever memory a terminal or ssh session takes. Plus there is no requirement for Docker or Kubernetes (no execution environment containers); just small, fast and simple, with regular jobs scheduled by cron if you want them.

However if I can one day get insecure registries working for EE images, and figure out how to get playbooks and custom modules deployed from my local gitlab server (rather than having to use the filesystem volume mount) it may be useful; and I might even update this post with how. However by useful I mean as a learning experience, my home lab doesn’t have a spare 2GB of memory I can dedicate to actually using it when I find puppetserver so much more powerful.


Differences between Docker and Kubernetes from a container viewpoint

This post is not about the differences between Docker and Kubernetes; it is about the differences needed for containers. And why you should not expect any container from dockerhub.io to run under Kubernetes.

The main difference discussed here is that

  • Docker and Docker Swarm run containers from a ‘root’ environment by default
  • Kubernetes containers run under the user that started Kubernetes or as defined by the runAsUser and runAsGroup parameters in the deployment yaml file

For Docker this allows complex applications with components running under multiple users to be run from a container; as the container startup script is run as root, the startup script can ‘su’ down to each of the needed users to start those components. As long as any UID needed for bind mounts is documented that works well.

The same startup logic cannot be used for Kubernetes containers. The container startup script must be owned and executable by the runAsUser/runAsGroup pair and as a general rule all application components would be started by that one user. This helps in keeping containers designed for one purpose as simple and as small as possible (microservice) but makes running an entire application with more than one component in a container difficult.

Many container images designed for docker need to be run as root. Should you try to run an image that has the startup script secured to root or user X with a runAsUser of Y under kubernetes, the pod will just go into CrashLoopBackOff status because of permission denied on the script.
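
For reference, the runAsUser/runAsGroup settings referred to here live in the securityContext of the deployment yaml; a minimal fragment would look like the below (the names and UID/GID values are purely illustrative):

    spec:
      securityContext:
        runAsUser: 1000     # example UID the container process runs as
        runAsGroup: 1000    # example GID
      containers:
        - name: myapp       # hypothetical container name
          image: docker-local:5000/myapp:latest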

It is certainly possible to design containers to run under both environments. What makes it difficult is that neither docker nor kubernetes sets an environment variable to let the container know where it is running; not a major issue as you can simply provide your own.

From personal experience, my own small containers designed for docker that I wanted to move only used ‘su’ to move from root to the user I wanted the app to run under. The conversion to support both is simply to pass a custom environment variable to the container if running under kubernetes; if the variable exists assume runAsUser/runAsGroup were set correctly and just start the app, and if it is not present assume docker and ‘su’ down to the expected user to start the app. Actually, as I was writing this post I realised an easier way would be to simply check if the script is running as root and determine the environment that way, which I am going to use instead :-).
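
A minimal sketch of that root check in a container startup script, assuming a hypothetical application user called appuser and a start command of /opt/app/start.sh, would be:

#!/bin/sh
# Running as root almost certainly means docker; su down to the application user.
# Otherwise assume kubernetes has already applied runAsUser/runAsGroup and start directly.
if [ "$(id -u)" -eq 0 ]; then
   su -s /bin/sh -c "/opt/app/start.sh" appuser
else
   /opt/app/start.sh
fi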

It is certainly possible to start apps under multiple users under Kubernetes; you would simply add ‘sudoers’ entries for the runAsUser to allow it to run the startup commands under the correct user. But a container designed to run under either engine would then have sudoers entries that are not required by docker, and containers are audited these days :-).

For me personally I only want to build one container per app and have it run under either.

If you are relying on external container images just remember images on dockerhub.io are for Docker, don’t expect them to even start on Kubernetes. Images built explicitly for Kubernetes are not intended to be run as root so should not be run under Docker.

If you are developing your own containers and use both engines, design the container to be able to run under both. All that is needed is the awareness of the fact Docker will run startup scripts as root and Kubernetes prefers not to.

This post exists primarily because containers I was moving from docker to minikube were going into CrashLoopBackOff, and after sorting that out I thought it may be of interest to others as it doesn’t seem to be highlighted anywhere.


So you want to play with Kubernetes, try MiniKube

First off, if you have a few spare physical machines, or machines with enough resources to run a few well resourced VMs, it is fairly simple to install Kubernetes itself. I have a fairly old OpenStack stack deployment yaml file that I still use to throw up and tear down multi compute node environments under OpenStack if I need something more powerful than MiniKube, but MiniKube is all you really need for development/testing on a regular basis.

However this post is on minikube. MiniKube is the best tool for testing out Kubernetes in a home lab if you are comfortable running everything on one machine and have one powerful enough. Minikube provides a working environment including multiple nodes (on the local machine of course) if required.

What is a powerful enough machine is a matter of debate; for example to test istio it is recommended to use 6 cpus and 8GB of memory; I had no trouble with 2 cpus and a physical machine with only 6GB of memory and only a wireless interface for downloads, running all the examples and kiali (it was slow, but everything worked).

As a general rule you should probably allocate as much resource as you can, especially as minikube can run multiple nodes if you wish by simply passing a command line flag to the start command, as shown below.
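
For example, a two node cluster is just an extra flag on the start command (the resource figures here are only examples):

minikube start --nodes 2 --cpus 4 --memory 6144
minikube node list      # confirm both nodes are present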

One important thing to note about this post: I run minikube on a machine running docker, using the docker driver. I strongly recommend you do the same so you can use docker commands to manage images in the cluster, as discussed in some of the tips and tricks later on.

This post is about a few of the tips and tricks I have picked up using it.

For things I am testing (or even keeping) I prefer to keep them in their individual versioned directories where possible; for that reason I skip the steps some installers want of copying things to /usr/local/bin, as you would only do that if you wanted every user on your machine to use them, plus I do not want aliases in my global profile. One advantage is that you can easily have multiple versions and just update the aliases.

Installing MiniKube and additional components

We will start off with configuring it to be useful. Note that I install everything under a directory ~/installs/kubernetes; you can place it in any directory of your choice.

# --- I keep everything under one directory and use aliases to run them
INST_DIR="$HOME/installs/kubernetes"    # use $HOME rather than ~ so it expands inside the quotes
mkdir -p ${INST_DIR}
# --- get minikube
mkdir -p ${INST_DIR}/minikube
cd ${INST_DIR}/minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64           # make the downloaded binary executable so the alias works
# --- get istio
mkdir -p ${INST_DIR}/istio
cd ${INST_DIR}/istio
#     check https://github.com/istio/istio/releases for the latest release available ---
#     I am using 1.10.1 which is the latest at the time I write this
wget https://github.com/istio/istio/releases/download/1.10.1/istio-1.10.1-linux-amd64.tar.gz
tar -zxvf istio-1.10.1-linux-amd64.tar.gz
/bin/rm istio-1.10.1-linux-amd64.tar.gz

I just refer to the commands by aliases. So add the below lines to your ~/.bashrc (if using bash) or the profile file of whatever shell you use. Note the alias entry for ‘kubectl’: most documentation will recommend you download the latest copy of kubectl, but as minikube has its own copy built in, which is at the correct version for minikube, you should use that copy. That is the last of the three aliases shown below, allowing the command ‘kubectl’ to be used at the terminal so copy/paste from websites you are interested in will work.

alias minikube="/home/mark/installs/kubernetes/minikube/minikube-linux-amd64"
alias istioctl="/home/mark/installs/kubernetes/istio/istio-1.10.1/bin/istioctl"
alias kubectl='minikube kubectl --'

Right, we are ready to start things up. Remember to ‘source ~/.bashrc’ (or start a new shell)

cd ~
minikube start --cpus 6 --memory 8192

At this point just occasionally use the command ‘kubectl get pod -A’. Wait until all pods are running before continuing.

Then you want istio installed

istioctl install

At this point just occasionally use the command ‘kubectl get pod -A’. Wait until all pods are running before continuing.

Let's add some of the whizzy-bang tools you will want to play with to monitor/visualize what you deploy now

kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/grafana.yaml
kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/jaeger.yaml
kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/kiali.yaml
kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/prometheus.yaml

For istio to be injected into pods you must set a label on each namespace you want istio used in; for playing about you will probably use the ‘default’ namespace, so enter

kubectl label namespace default istio-injection=enabled

At this point you will probably want to test some of your own deployments. One additional tool I would suggest is a very strict kubernetes yaml file checker. That can be installed into its own directory and aliased as were the other commands

mkdir -p ~/installs/kubernetes/kube-score
cd ~/installs/kubernetes/kube-score
# ---- check https://github.com/zegl/kube-score/releases for the latest release available ---
wget https://github.com/zegl/kube-score/releases/download/v1.11.0/kube-score_1.11.0_linux_amd64.tar.gz
tar -zxvf kube-score_1.11.0_linux_amd64.tar.gz
alias kube-score="/home/mark/installs/kubernetes/kube-score/kube-score"   # << and add to ~/.bashrc with the other aliases
# usage kube-score score xxxx.yaml

Loading images into MiniKube

Now, you may want to use a local docker registry for images; good luck with that !.

There probably is a way to tell minikube to use local dns; its internal dns is perfectly able to resolve the internet addresses needed to download the images it needs to run, but it ignores the local host /etc/hosts file and dns settings by default. Even if it could be overridden, most 'local' docker registries are insecure so could not be used easily anyway.

However this is where the benefits of running minikube on a machine running docker come into play.

MiniKube has a 'minikube image load xxx.tar' command with which you can load into the cluster images you have manually saved from your local docker repository and copied across to the machine running minikube; as an example (same machine running docker and minikube using that docker as the driver).

[mark@hawk ~]$ docker image list
REPOSITORY                       TAG       IMAGE ID       CREATED        SIZE
gcr.io/k8s-minikube/kicbase      v0.0.23   9fce26cb202e   10 days ago    1.09GB
docker-local:5000/portainer-ce   latest    96a1c6cc3d15   4 months ago   209MB
portainer/portainer-ce           latest    96a1c6cc3d15   4 months ago   209MB
localhost/mvs38j                 latest    1df77f61cbed   6 months ago   787MB
[mark@hawk ~]$ docker image save localhost/mvs38j > mvs38j.tar      # <-- save from docker 
[mark@hawk ~]$ minikube image load mvs38j.tar                       # <-- load to minikube

Important: an image loaded with 'minikube image load xxx.tar' will not be shown by the 'minikube image ls' command. It is available and will be used by your containers, and the pod logs will show 'image already present on local machine' when the pod starts; it just seems to be invisible in the cache until then.

However if your machine runs docker you can easily switch the docker client from managing the machine's docker instance to the minikube docker instance with the simple command 'eval $(minikube docker-env)', which allows you to use normal docker commands directly against the images within the minikube cluster, as shown below where I switch the environment.

[mark@hawk ~]$ docker image list                  # <--- local machine, not many
REPOSITORY                       TAG       IMAGE ID       CREATED        SIZE
gcr.io/k8s-minikube/kicbase      v0.0.23   9fce26cb202e   10 days ago    1.09GB
docker-local:5000/portainer-ce   latest    96a1c6cc3d15   4 months ago   209MB
portainer/portainer-ce           latest    96a1c6cc3d15   4 months ago   209MB
localhost/mvs38j                 latest    1df77f61cbed   6 months ago   787MB
[mark@hawk ~]$ 
[mark@hawk ~]$ eval $(minikube docker-env)        # <--- switch to minikube environment
[mark@hawk ~]$ docker image list                  # <--- and we see lots of images
REPOSITORY                                TAG        IMAGE ID       CREATED         SIZE
istio/proxyv2                             1.10.1     5c66e8ac89a7   2 weeks ago     282MB
istio/pilot                               1.10.1     07d6b563f74b   2 weeks ago     217MB
quay.io/kiali/kiali                       v1.34      1d3ab1649f0b   5 weeks ago     194MB
k8s.gcr.io/kube-proxy                     v1.20.7    ff54c88b8ecf   5 weeks ago     118MB
k8s.gcr.io/kube-apiserver                 v1.20.7    034671b24f0f   5 weeks ago     122MB
k8s.gcr.io/kube-controller-manager        v1.20.7    22d1a2072ec7   5 weeks ago     116MB
k8s.gcr.io/kube-scheduler                 v1.20.7    38f903b54010   5 weeks ago     47.3MB
gcr.io/k8s-minikube/storage-provisioner   v5         6e38f40d628d   2 months ago    31.5MB
grafana/grafana                           7.4.3      c9e576dccd68   3 months ago    198MB
jimmidyson/configmap-reload               v0.5.0     d771cc9785a1   4 months ago    9.99MB
prom/prometheus                           v2.24.0    53fd5ed1cd48   5 months ago    173MB
localhost/mvs38j                          latest     1df77f61cbed   6 months ago    787MB
kubernetesui/dashboard                    v2.1.0     9a07b5b4bfac   6 months ago    226MB
jaegertracing/all-in-one                  1.20       84b5c715abd0   8 months ago    45.7MB
k8s.gcr.io/etcd                           3.4.13-0   0369cf4303ff   9 months ago    253MB
k8s.gcr.io/coredns                        1.7.0      bfe3a36ebd25   12 months ago   45.2MB
kubernetesui/metrics-scraper              v1.0.4     86262685d9ab   14 months ago   36.9MB
k8s.gcr.io/pause                          3.2        80d28bedfe5d   16 months ago   683kB
[mark@hawk ~]$ 

You can use ordinary docker commands against images within the minikube kubernetes cluster at this point; for example 'docker image rm 83e6a8464b84' will remove an image, although you should probably use 'minikube image rm' and just use docker to check.

Important notes

Do not expect docker images you download from dockerhub to run under kubernetes without modification. There are design issues to take into consideration; personally all my containers get an environment variable passed to them to indicate which application startup logic chain to take. You may be able to get them to run if you set the kubernetes runAsUser/runAsGroup parameters for the container to 0 (if kubernetes allows such a thing) but that's obviously not ideal.

So create your own containers, or stick to kubernetes repositories not dockerhub ones until you know how to customise them.

Cleaning it all up again

To remove everything again: another benefit of keeping everything under its own directory structure is how easy it is to remove.

  • 'minikube stop' - shuts everything down in a state it can be restarted. It can be restarted from this state without losing any of your work with another 'minikube start --cpus 6 --memory 8192'
  • 'minikube delete' - use only when stopped, will delete everything you have done from minikube, you must start again from scratch
  • rm -rf the directory you installed all the downloads into, plus 'rm -rf ~/.minikube' as a lot of stuff is stored under your home directory in that folder

Setting up a CI pipeline on Linux for your home lab

Setting up a continuous integration pipeline for your home lab may seem like overkill, but if you have the memory available it is simple to do, and if you have an interest in these tools why not :-). It is so simple to do that you should.

I must warn in advance that this post was done in a bit of a hurry as I have other things to do; bills to pay. So while I have tried to cover everything needed I will have missed bits. One example would be how to tweak it so podman users can use the solution; I don’t use podman unless absolutely necessary, so if you are using podman, tough luck.

Requirements: you can have a VM instance with docker, a local container registry, a running gitlab-ce server, and the jenkins CI pipeline all installed on a VM with as little as 2 vcpus and 8GB memory. However note that if you want to integrate with kubernetes you should place that elsewhere, as you really need another 4GB of memory minimum added to use that (even for minikube if you want to use kubevirt).

For those that are familiar with git, every project lives in its own directory; that will not change if you run a local git server. You can be a little cleaner in that projects you are managing by git that you are not actively working on can be ‘pushed’ to your local git server and the local work directories removed (you can always pull them back from your local git server when you want to work on them again); a sketch of that workflow is shown below. But the only reason you would need a local git server is if you wanted to play with CI and don’t like the idea of updating a public source repository with multiple changes a day.
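
As a sketch of that tidy-up workflow, using the gitlab-local hostname used later in this post and a made-up project name (the ssh config that makes pushes work on the non-standard port is covered in the gitlab-ce section):

cd ~/work/myproject
git remote add origin git@gitlab-local.mydomain.org:YourName/myproject.git
git push -u origin master               # push the project to your local gitlab server
cd ~/work && rm -rf myproject           # safe to remove the work directory once pushed
git clone git@gitlab-local.mydomain.org:YourName/myproject.git   # pull it back when you need it again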

On the container front, if you build images you should use the standard docker registry:2 image to provide a local insecure registry. This allows you to build container images from known ‘tested’ base images stored locally. Using the images from dockerhub means you cannot guarantee the build done today matches the build you did yesterday, as base images on dockerhub are updated frequently; you need a local static base image.

Now with everything available in containers these days it all should be simple. And with a bit of simple customisation it is.

This post is about how to simply setup a CI pipeline in a home lab on a single VM. This post will cover the basic steps for

  1. Installing docker
  2. Setting up a local insecure container registry
  3. Installing and configuring the gitlab-ce container
  4. Installing and configuring the Jenkins container

Update: 2021/07/17 One warning: using containers ties you to that version; for example the gitlab-ce container jumped a major version and I cannot upgrade without losing all my work. So there can be a lot of effort involved if you update a container image later.

You do not have to install them all on a single VM; the benefit of doing so is simply that, like containers being able to be moved about at will, if your entire pipeline environment is on a single VM you can also easily move that about… if you have enough memory. A single VM install should work well with a VM using 7GB memory and 2 vcpus (allow 8GB memory for headroom); if you don’t have that amount of memory available on a single machine by all means run the containers anywhere, the steps are pretty much the same. This post covers installing on a single VM, as I do move it about :-).

As the entire environment can be setup in containers on a single VM there is minimal effort needed to get a full CI pipeline for a home lab. However it should be noted that additional VMs (or additional packages added to the docker host allowing it to be more than just a docker build host) may be required.

For example if you wanted a pipeline to build a container using docker you could use the VM running jenkins as a build node, but if you wanted a pipeline to compile C code you would either need to install all the compilers on the VM, or simply add a new Jenkins node to another VM that had all those tools and configure your jenkins pipeline to use that node for that step.

This post will not cover the details of creating complex pipelines, although a few examples will be provided.

A couple of references you will see a lot that you will need to change in any example command shown are hostname entries

  • docker-local – my dns entry for the docker registry host server running the registry container, change it to the servername or ipaddr of your VM
  • gitlab-local – my dns entry for the gitlab-ce host server running that container, it must be a different name to that used to refer to the server (discussed later) [for this post the ip of the VM]
  • gitlab-server – my dns entry for the VM host [for this post the ip of the VM]

For the purposes of this post, as I am discussing installing on one VM, the above hostname references all point to the same ip address, but they provide different functions and should have their own dns entries in case you want to split them up onto separate hosts later.

1. Installing docker-ce

Installing docker onto each different type of OS is fully covered on the docker website itself, you can find the instructions simply with a google search on “install docker-ce ” for example “install docker-ce centos”, the main page with all supported OSs listed is at https://docs.docker.com/engine/install/.

As you could be running any OS I won’t cover the install here, other than to point out additional steps that will be needed.

One note for Fedora users, the issue with cgroups2 being enabled on fedora by default is no longer an issue; docker-ce will now install and run on fedora without any OS settings needing to be changed.

On the assumption you will be using your CI pipeline to build container images you will not need docker or podman installed on any development desktops, just git to push your development builds into your pipeline. However if you are building containers on the desktop for testing, note that you would need to un-install podman (if installed) and install docker on all your development machines; there is no easy way of allowing podman to use an insecure registry.

After installing docker-ce into your pipeline VM…

  • ensure there is a ‘docker’ group created in /etc/group; if not create one
  • add userids that will need to use docker to the docker group to avoid the need to use root to do anything with docker (I can never remember the commands and just ‘vi /etc/group’ to add them, bad I know; the proper commands are shown after this list). The group entry will look something like “docker:x:979:root,mark,anotheruser,anotheruser”
  • To allow any desktop/server you have just installed docker on to use an insecure registry ‘vi /etc/docker/daemon.json’ (if it does not exist create it) and add entries such as the below example
    {
      "insecure-registries" : [ "docker-local:5000", "localhost:5000" ],
      "dns" : [ "192.168.1.179", "192.168.1.1", "8.8.8.8" ]
    }
    

    If you are not using a local DNS server for your home lab that can resolve the servername, you should use the ip-address in the insecure-registries entry, such as “192.168.1.170:5000”, rather than a servername such as docker-local. The :5000 is the port your docker container registry will listen on. If you are using a local DNS server then ensure you add the dns entry with your local dns server and router first in the list as well, because if it is not set here docker will only use 8.8.8.8 and never resolve any local server names
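
For reference, the standard commands for the first two bullets above (using ‘mark’ as an example userid) are:

sudo groupadd docker            # only needed if the install did not already create the group
sudo usermod -aG docker mark    # repeat for each userid that needs docker access
newgrp docker                   # or log out and back in for the group change to take effect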

The reason this post covers installing docker and omits any mention of podman is simply because the configuration file mentioned above in /etc/docker is of course for docker only; podman will not use an insecure registry.

One additional note for docker. By default deleting images using ‘docker image rm xxxxx’ on a machine running docker is not going to return any space. You should occasionally use the command “docker system prune -a”, which when issued will delete everything not in use on the client (anything not currently being used by a container)… repeat, everything not being used by a container, so make sure images you may want to use again are pushed to your local registry if they are not in use by a container when you run this command.

One additional tool I would recommend you install if you use docker is “docker-squash”. If you build your own containers this is invaluable as it can squash dozens of layers of an image down to just a few and easily halve the size of a container image; I would also generally run it against images downloaded from dockerhub, as it can easily decrease the size of most of them as well. This can be installed with “pip install docker-squash” (or pip3/pip2 rather than pip if pip does not default to the desired python version you have installed). For those that develop with podman rather than docker, you will be upset to learn I am not aware of any equivalent tool for podman, which may be why many images on dockerhub are so needlessly large.

2. Setting up a local insecure container registry

First, this is easy to do. Secondly this is a long section as it also covers in brief some maintenance of your local registry you need to know that is not obvious or even mentioned in many posts about using a local registry2 instance.

For a home lab all that is needed is an insecure registry, this avoids the need to manage certificates for the registry. It allows every development server (that you manage) on your local network full access to the registry using docker if you set the insecure-registries section on each docker client machine as mentioned in the previous section.

(If using ‘podman’, tough luck; you need to look at the documentation on setting up a secure registry with valid letsencrypt certs. podman may be useful for stand-alone development on an internet facing machine using public repositories but is a pain to use with an insecure registry; maybe set up a local CA and use stunnel ?, or using an apache proxy as discussed later in this section may work if you have valid certs for your webserver.)

While the term “insecure registry” is used, remember that each docker client machine using the registry must be explicitly configured to allow it in insecure-registries; any client not set up to do so will get an error like ‘http response where https expected’ whenever it tries to use the registry, so access is still limited to clients configured for it.

The default port for an insecure registry is 5000, so on the server you will be running your local container registry on, open port 5000 in the firewall. Then simply run the command below, which will pull down the registry container and run it; and that's it, you have a working local container registry. Read a few details below before running the command however !.

# A non-secure registry for local use
# The container will use /home/docker/data for storage rather than the default
# of an internal docker persistent volume.
docker run -d \
  -p 5000:5000 \
  --restart=always \
  --name registry \
  -v /home/docker/data:/var/lib/registry \
  -e REGISTRY_STORAGE_DELETE_ENABLED="true" \
  registry:2

In the registry start command above you will notice I included a parameter permitting images to be deleted; if you do not include that parameter you will never be able to delete images and the space used will eventually become extremely large. Even with the parameter set, deleting images is a multi-step process; you can delete as many images as you want and find zero space is reclaimed, as they are just marked as deleted until you run the registry garbage collector to actually reclaim the space.

On space usage, if you exec into the registry container and use ‘df -k’ to see filesystem sizes you will see it uses all the space available on the filesystem. That may be something you do not want. You can limit the maximum size of space used by containers by modifying the docker config in /etc/sysconfig/docker-storage to add the line “DOCKER_STORAGE_OPTIONS=--storage-opt dm.basesize=20G”, however note this affects all containers you will start. It also needs to be done before you first start docker or you will have to delete and recreate all your existing containers.

On space again, well data really. Remember that containers do not persist storage across container restarts; you will notice the example run command above uses a bind mount to a local filesystem directory, so the storage options parameter is completely ignored for that filesystem as it gets the host's freespace size. So why do it you ask ?. The reason is that by using a bind mount to the host's filesystem the data remains when the container is stopped, and on a container restart using the same bind mount all the data is available for the container again. You would not want to push lots of images to your local registry and have them disappear every time you stopped the container. It does tie the container to the host with the filesystem, so it is suitable for standalone docker rather than a docker swarm. It also allows the data to be backed up by your normal backup procedures.

Once you have decided what customisation you want to make, you can use the run command to start the registry container to provide your local container registry.

To use the registry simply tag the container images with your local registry ip-address or servername and port 5000. For example my servername is docker-local, so in order to store an image in my local registry you would tag it as in the example below for the ‘alpine’ image

docker pull alpine:latest                       # retrieve an image (or use one you built yourself)
docker image list                               # you need the image id rather than the name
docker tag <imageidstring> docker-local:5000/alpine:latest    # tag to your local repository
docker push docker-local:5000/alpine:latest     # push to your local repository
docker image rm alpine:latest                   # untag the dockerhub entry

In the docker run command you would then use the image name docker-local:5000/alpine:latest to use the local copy instead of the dockerhub copy. Why would you want to do this ?; the answer is that you can guarantee each container you run uses a ‘known’ copy/version of the container image that is not going to be accidentally refreshed by a newer copy from dockerhub, so you can be sure each rebuild of a container in your CI pipeline is from a known base (and you should refresh occasionally to get any security fixes… and retest everything again in your pipeline before deploying anything :-)). Another benefit of course is that re-pulling images as needed from your local repository happens across your local network, so you don’t chew up internet bandwidth you may be paying for.

Should you have wisely used the parameter to allow images to be deleted from the registry, in order to delete images you need to use the API. As that can get complicated I would recommend using a tool such as “git clone https://github.com/byrnedo/docker-reg-tool.git” which can then manage the registry using the command “INSECURE_REGISTRY=true ./docker_reg_tool http://docker-local:5000 [actions]” as the easiest way of listing and deleting images.

It is fairly simple to view the contents of the local registry using a web browser with

see whats in the catalog
    http://docker-local:5000/v2/_catalog
    http://docker-local:5000/v2/_catalog?n=200       # default is max 100 entries to display , can change it with ?n=nn
assume one of the entries returned was 'ircserver' to see all available tags in the catalog for that
   http://docker-local:5000/v2/ircserver/tags/list

However, if you have a lot of entries and you are processing them in a script using something like curl to get the list, be aware the response may contain a link to a next page of responses which your script would have to handle as well.

Deleting gets tricky, and using a tool such as the one I referred to above makes it easier than trying to do it via creating your own custom scripts to post http delete requests. To do it yourself you would need to inspect the image you wish to delete to obtain the digest (ie: docker inspect my.docker.registry.com:5000/ubuntu:latest would somewhere in the response return something like RepoDigests: [“docker-local:5000/ubuntu@sha256:74a1b5f5c5d771cf3570fa0f050e0c827538b7fe1873bc88b85d56818df3f2bc”]) and issue the delete using that information (ie: curl -vk -X DELETE https://docker-local:5000/v2/mytestdockerrepo/manifests/sha256:66675d81b9bd5eafc105832b78abb91cab975bbcf028ca4bce4afe73f66914ee)… basically it is much easier to use an existing tool like the one mentioned earlier.
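
If you do want to script it yourself anyway, the two steps against the registry v2 API look roughly like the below; the ircserver image name and latest tag are examples, and this assumes the insecure http setup described above:

# step 1: get the manifest digest (the Accept header matters, without it you get a different digest)
curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
     http://docker-local:5000/v2/ircserver/manifests/latest | grep -i Docker-Content-Digest
# step 2: delete the manifest using the sha256 digest returned above
curl -X DELETE http://docker-local:5000/v2/ircserver/manifests/sha256:<digest-from-step-1>
# space is only reclaimed once the garbage collector is run in the container (covered below)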

As I mentioned earlier issuing delete requests for images in your registry will not actually reclaim any space. In order to actually reclaim space used by the deleted images you need to exec into the registry2 container and run the garbage collector

docker exec -ti registry2 /bin/bash    # replace registry2 with what you named your register container
bin/registry garbage-collect [--dry-run] /etc/docker/registry/config.yml
exit

There may on occasion be a need for you to allow external users to “pull” containers from your local registry; as they will not be configured to treat your registry as insecure they will obviously get the ‘http response where https expected’ error if they try to use it directly. This is easily fixed, if you also run a website which has valid https certificates, by simply proxying from your website's https site to the local container registry server; the external clients then have a https connection and are able to pull from your registry.

Please note that I only cover “pull” by external clients here; personally I don’t see any reason why external clients should be pushing to your local registry, so the example here denies the ‘push’ command. The example below should be added to a new conf file in your website's /etc/httpd/conf.d/ directory and will allow external clients to pull from your local repository using the url you choose. I named my file /etc/httpd/conf.d/docker-registry.conf.

And you will notice I define a ‘push’ user; you really should not do that, but an example should be complete :-). If you are collaborating on products with users outside your local network you should be using a private project on one of the hosted git environments out there, as they will (should) provide better security than anything covered in this post.

However with the configuration example below users can download from your private registry with commands such as “docker pull yourdomain.com:5043/imagename”. There will be no issues with complaints about ‘http response when https expected’ as your webserver will be providing the https transport layer. Assuming you opened port 5043 on your server firewall and router port forwarding list of course. (tip: you can use your router to forward requests from port 5000 to 5043 (or any internal port) so treat 5043 as an example, not a recommendation).

# To create a push user...
# htpasswd -Bbn mark password >> /var/www/html/data/auth/httpd.htpasswd
# echo "pusher: mark" >> /var/www/html/data/auth/httpd.groups"
LoadModule headers_module modules/mod_headers.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule unixd_module modules/mod_unixd.so
Listen 5043
<VirtualHost *:5043>
   ServerName your.websitename.org
   SSLEngine on
   SSLCertificateFile /etc/letsencrypt/live/your.websitename.org/fullchain.pem
   SSLCertificateKeyFile /etc/letsencrypt/live/your.websitename.org/privkey.pem
   SSLCertificateChainFile /etc/letsencrypt/live/your.websitename.org/fullchain.pem
   SSLCompression off
   SSLProtocol all -SSLv2 -SSLv3 -TLSv1
   SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
   SSLHonorCipherOrder on
   Header always set "Docker-Distribution-Api-Version" "registry/2.0"
   Header onsuccess set "Docker-Distribution-Api-Version" "registry/2.0"
   RequestHeader set X-Forwarded-Proto "https"
   ProxyRequests off
   ProxyPreserveHost on
   # no proxy for /error/ (Apache HTTPd errors messages)
   ProxyPass /error/ !
   ProxyPass /v2 http://docker-local:5000/v2
   ProxyPassReverse /v2 http://docker-local:5000/v2
   <Location /v2>
      Order deny,allow
      Allow from all
      # match realm to the 'basic-realm' used by default in the registry container
      AuthName "basic-realm"
      AuthType basic
      AuthUserFile "/var/www/html/data/auth/httpd.htpasswd"
      AuthGroupFile "/var/www/html/data/auth/httpd.groups"
      # Read access to any users
      <Limit GET HEAD>
         Require all granted
      </Limit>
      # Write access to docker-deployer only
      <Limit POST PUT DELETE PATCH>
         Require group pusher
      </Limit>
   </Location>
</VirtualHost>

3. Installing and configuring the gitlab-ce container

This is also extremely simple as it comes bundled in a container. The only thing to note is that, because everything is bundled into it, it is cpu and memory hungry; allow 3GB of memory for this container alone.

It is also important that your local DNS (or /etc/hosts file) contains a separate hostname to be used when accessing the gitlab-ce container, as you do not want to use the server's hostname; this is important. Why it is important is discussed a little further on when describing how to configure ssh to treat gitlab-local references differently to normal ssh sessions. Along the lines of my using docker-local in the earlier examples, I will call the server gitlab-server and use gitlab-local for referencing the ip-address when referring to the gitlab-ce instance, so I would have a /etc/hosts entry similar to the below.

192.168.1.nnn gitlab-server gitlab-local

The container image also benefits from going through ‘docker-squash’ as the image size can be decreased quite a lot, so ideally ‘docker pull’ the image and squash it before use; you can tag it as your own local registry copy and push that to your local registry so you are always working from a ‘known copy’; although having said that, gitlab does release a lot of security fixes so you will want to periodically pull down the latest again.

Like the registry container, I would recommend (it is really a requirement) that a bind volume mount is made for the data so it survives a container restart; you don’t want to lose all your data on a container restart.

Yet again I recommend downloading and using docker-squash on the image, and saving it in your local container registry to ensure you are using a known working copy of the container image, as below. Plus the dockerhub container image is huge !.

docker pull gitlab/gitlab-ce                                   # pull the image down
docker image list                                              # get the imageid of the gitlab-ce container
docker-squash -t docker-local:5000/gitlab-ce:latest imageid    # use the imageid for gitlab-ce from the list command
docker push docker-local:5000/gitlab-ce:latest                 # save so you have a fixed/stable base image
docker image rm gitlab/gitlab-ce                               # remove the original image

The docker-squash (as of the image on dockerhub on 2021/Feb/13) will squash it from 2.15Gb to 2.1Gb, so not necessarily worth doing… but you should still store it in a local registry and use that copy, as you do not want to download 2GB+ every time the dockerhub image gets a 1 byte change in a config file; of course still download a new image when security patches are implemented, but do not keep checking for updates to the latest.

You can then run the container with the command below. If you did not move the image to your local repository, or did not bother to manually pull it down at all, just replace the last line with ‘gitlab/gitlab-ce’ instead of ‘docker-local:5000/gitlab-ce:latest’ to use the dockerhub native image, but do change the hostname to what you are using.

export GITLAB_HOME="/srv"        # REQUIRED ON LINUX SYSTEMS
# The :Z on volume mounts is for selinux systems to ensure docker can create the mounted volumes.
docker run --detach --rm \
   --hostname gitlab-local.mydomain.org \
   --publish 443:443 --publish 80:80 --publish 5522:22 \
   --name gitlab1 \
   --volume $GITLAB_HOME/gitlab/config:/etc/gitlab:Z \
   --volume $GITLAB_HOME/gitlab/logs:/var/log/gitlab:Z \
   --volume $GITLAB_HOME/gitlab/data:/var/opt/gitlab:Z \
   docker-local:5000/gitlab-ce:latest

Logon to the gitlab-ce container with your web browser (use HTTP on port 80; do not use https as you have not set up any certificates, and in a home lab you do not really need to publish port 443 in the command above) and set yourself up as the admin id, create ssh keys for yourself and create your first project. When creating each project it provides the git commands needed to push any existing project you have to it, or to create a new one from scratch, so it is simple to use.
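
If you have not already generated a key pair to register with gitlab-ce, something along the lines of the below will create one using the key file name used in the ssh configuration example further down (the comment text is just illustrative).

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_local_gitlab -C "key for local gitlab-ce"
cat ~/.ssh/id_ed25519_local_gitlab.pub    # paste this into your gitlab user profile under ssh keys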

There is one additional step you should do on all the desktop Linux machines you use for development. Whether you installed this into a VM or onto a physical machine there is the issue of ssh port 22 (which git also uses by default) already being used by sshd on the server, which is why we mapped port 5522 to port 22 in the example run command above. Apart from that, you probably kept the ssh keys created for the gitlab-ce container in a separate file. The git command provides no option to change the port number, and you also do not want to have to enter the keyfile name every time you want to do a git push.

The solution is to use a custom ssh entry. Assuming you use “gitlab-local.mydomain.org” as the name assigned to the ip-address of the host running your container, you want git to connect to port 5522 on that address, and you use the private key file id_ed25519_local_gitlab for the gitlab server, simply create a file ~/.ssh/config and add the entries below to specify the connection rules for that server (the final “Host *” entry just means all hosts that do not match prior rules use the defaults in ssh_config and behave as normal).

Host gitlab-local.mydomain.org
  IdentityFile ~/.ssh/id_ed25519_local_gitlab
  Port 5522
Host *

With that entry set, when you set the remote/origin for your new project in git to gitlab-local.mydomain.org:YourName/yourproject.git you can simply ‘git push’ as normal without needing to remember you are using non-defaults for that hostname… although you should make a note of it somewhere, or when you get your next development server you will be wondering why git commands using git@gitlab-local.mydomain.org are being rejected by the server instead of being redirected to the container :-).
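
For example, using the illustrative YourName/yourproject.git remote from above, adding the remote and pushing looks like this:

git remote add origin git@gitlab-local.mydomain.org:YourName/yourproject.git
git push -u origin master     # ssh silently uses port 5522 and the gitlab key from ~/.ssh/config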

That entry in ~/.ssh/config of course only applies at the user level. You could also modify /etc/ssh/ssh_config to apply the rule to all users, but as that is likely to be overwritten by the next system update it is not recommended.

You will also notice I used the default port 80 for the gitlab web interface. So on that note, remember to open port 5522 (for git push/pull) and port 80 (for the gitlab web interface) on the host VM/server so you can reach the container application :-). Unless you want to create tls/https certs for the gitlab-ce instance you do not need port 443, but I included it in the example command above for completeness.
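
As an illustration only, if the host happens to be using firewalld the ports could be opened with something like the below; adjust for whatever firewall you actually use.

firewall-cmd --permanent --add-port=80/tcp      # gitlab web interface
firewall-cmd --permanent --add-port=5522/tcp    # git push/pull over ssh
firewall-cmd --reload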

Installing and configuring the Jenkins container

This is also quite simple; however I would recommend you perform a customisation of the userid used.

By default the jenkins container will create and use a docker volume for its storage, allowing data to be retained across container restarts. If you are happy with that and are comfortable you know how to clone/backup/restore docker volumes the customisation step is entirely optional.

However, as you may have guessed by now, I like bind volume mounts to keep the data on the host filesystem. As the jenkins container data already survives restarts on a docker volume, the only real advantage for me in using a bind volume mount in this case is that the data can be backed up in my regular bacula backups and I don’t need to worry about the complexity of backing up a docker volume.

The issue with a bind mount in this case is that the jenkins container has the jenkins user defined as uid 1000, which is the first system user created on a linux server, so your VM host will already have a user using that uid; and if you define multiple nodes it could be a different user owning that uid on each server.

Your options are to chown the filesystem you have chosen as the bind volume mount to that existing user on the host, plus chown the agent filesystems on each node to whatever userid you wish to run a jenkins agent under; or, what I prefer to do, create a new ‘jenkins’ user on the host to own the filesystem, alter the container image to use the uid of that jenkins user within the container, and also create a jenkins user with the same uid on all servers that will provide node services.

The latter option is the better one as it provides a common userid/uid across all jenkins nodes, and on nodes used to build docker containers adding the user ‘jenkins’ to the docker group documents that it is jenkins using docker far better than if you had used whatever userid you happened to choose to run the agent under.

It should be noted that this step of altering the uid in the container image only affects the container and the host volume mount. The agent on a node can be started under any userid… it makes sense however to start it under a ‘jenkins’ uid on each node.

To update the container image, create a Dockerfile as below, replacing 6004 with the UID you assigned to the ‘jenkins’ user on the host machine. Change it in both the Dockerfile and in the following build command, as I like to keep track of major divergence changes in the tag where possible.

FROM jenkins/jenkins:lts
USER root
RUN userdel jenkins
RUN useradd -u 6004 -d /var/jenkins_home -M --system jenkins
RUN chown -R jenkins:jenkins /var/jenkins_home
# switch back to the jenkins user so the container does not run as root
USER jenkins

Then, in the same directory as the Dockerfile, run the command “docker build -t localhost/jenkins:lts-uid6004 .”

If you decided to use a local container registry, you should then get the imageid using ‘docker image list’, tag the image with ‘docker tag imageid docker-local:5000/jenkins:lts-uid6004’, push it to your local registry as a backup, use that copy instead of the localhost one, and untag the localhost one.
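
That works out to something like the below (the imageid is of course whatever ‘docker image list’ shows for the image just built).

docker image list                                          # find the imageid of localhost/jenkins:lts-uid6004
docker tag imageid docker-local:5000/jenkins:lts-uid6004
docker push docker-local:5000/jenkins:lts-uid6004
docker image rm localhost/jenkins:lts-uid6004              # untag the localhost copy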

When starting the container note the DNS setting flags used; you need them, but update them for your local DNS servers. The jenkins container needs to be able to resolve the hostnames used for your docker registry, your gitlab-ce source repository host, and any servers you will be adding as nodes. Remember the container does not have access to the /etc/hosts file on its host server, and a docker container will by default fall back to 8.8.8.8 (google dns), although I suppose you could map /etc/hosts into the container somehow. Your life will be much easier with a local DNS server; if you do not already have one, just pick a server that is not doing much, give it an up-to-date /etc/hosts file and start dnsmasq to provide one.
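
If you do need to stand up a quick local DNS server, a minimal sketch (assuming a Fedora/CentOS style host using firewalld; adjust for your distribution) is simply:

dnf install dnsmasq                 # 'apt install dnsmasq' on debian based hosts
systemctl enable --now dnsmasq      # by default dnsmasq answers queries from the entries in /etc/hosts
firewall-cmd --permanent --add-service=dns && firewall-cmd --reload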

You should also note that as well as port 8080 being used for the web interface, port 50000 is the default connection port for any agents on agent nodes. So ensure you open ports 8080 and 50000 in the firewall of the VM running the jenkins container.

Now you are ready to run the container, with the command below.

if [ ! -d /var/jenkins_home ];
then
   mkdir /var/jenkins_home
   chown jenkins:jenkins /var/jenkins_home
fi
docker run -d \
  --dns 192.168.1.179 --dns 192.168.1.1 \
  -p 8080:8080 \
  -p 50000:50000 \
  -v /var/jenkins_home:/var/jenkins_home \
  --name jenkins1 \
  docker-local:5000/jenkins:lts-uid6004

Note that you could also include the option ‘--env JENKINS_OPTS="--prefix=/jenkins"’ to make the URL to access jenkins http://xxx.xxx.xxx.xxx/jenkins/ rather than the default of http://xxx.xxx.xxx.xxx/, which is worth a mention here as, if you plan to have your jenkins instance accessible behind a reverse proxy, you will need a path to proxy on.
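
That is, the run command shown above would simply gain one extra option, for example:

docker run -d \
  --env JENKINS_OPTS="--prefix=/jenkins" \
  --dns 192.168.1.179 --dns 192.168.1.1 \
  -p 8080:8080 -p 50000:50000 \
  -v /var/jenkins_home:/var/jenkins_home \
  --name jenkins1 \
  docker-local:5000/jenkins:lts-uid6004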

The first time you start the container use the ‘docker logs <container-name>’ command (ie: ‘docker logs jenkins1’) to view the log messages; there will be a startup configuration password logged that you will need to logon to jenkins for the first time.

Note that logging on for the first time takes you via a plugins install screen, where you get to install lots of recommended plugins before it lets you through to the screen where you can create your first admin user.

Jenkins, and most of the documentation found for Jenkins (even youtube training examples), provides lots of information on building java and nodejs pipelines using maven/gradle/ant etc., so I assume those tools are already provided in the jenkins container image, which is the ‘master’ node. I however am more interested in pipelines for building containers and compiling and packaging C programs, for which it is hard to find documentation, so I have included a few examples at the end of this post.

You will need to create nodes to do any real work; for example, to pipeline docker container builds you will need a node with docker installed. This first node example uses the host VM as a node, because as you are running jenkins in a container under docker that server can immediately provide a docker build node :-).

In the ‘manage jenkins’ page there is a node management page; use that to define a new node which is the host running your container under docker. In the “labels” field give it a label indicating it can provide docker, such as “docker-service”. While you can target a pipeline at a specific node/server it is better to target the pipeline at a label, as you may add additional nodes later that can also provide docker. Select the option that you want to start the agent with a script.

At this point displaying the node will get you to a page that has a button to start the agent, and shows the java command that will be used; in theory the page should be visited and the button clicked from a browser on the server that will run the node agent. Guess what, the button won’t work on linux as the browser will instead want to download a jnlp file it doesn’t know what to do with; personally I think this is a good thing, do not start it from the browser!

You will see on that page that the ‘agent’ text in the message is a link to an agent.jar file, so on your node server simply wget that url to pull the agent.jar file onto the node server.

At this point I would (having, as recommended, created a ‘jenkins’ user on every node) rename agent.jar to jenkins_agent.jar and move it to /home/jenkins/bin, plus create a script file containing the run command in the same directory. And always manually start it as the jenkins user… I say manually rather than via a systemd or init.d script because, if the jenkins container is not running, the agent will hammer your network (and log files) with connection retry attempts; start it only when you intend to use it and stop it when the jenkins container is stopped.
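
A sketch of what that looks like on a node is below; the jenkins hostname, node name and secret are placeholders, so copy the exact java command (including the secret) from the node's page in the jenkins UI.

# run as the jenkins user on the node
wget http://jenkins-host:8080/jnlpJars/agent.jar -O /home/jenkins/bin/jenkins_agent.jar
cat << 'EOF' > /home/jenkins/bin/start_agent.sh
#!/bin/bash
# command copied from the node page in the jenkins UI, jar renamed to jenkins_agent.jar
java -jar /home/jenkins/bin/jenkins_agent.jar \
   -jnlpUrl http://jenkins-host:8080/computer/mynode/jenkins-agent.jnlp \
   -secret replace-with-secret-from-node-page \
   -workDir /home/jenkins
EOF
chmod 755 /home/jenkins/bin/start_agent.sh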

This is why starting it from the page in the web browser with a button click is a bad idea: it would run under the userid of whoever was running the browser session. When starting it manually never start the agent under the root user, or a malicious Jenkinsfile could contain a command like 'sh "/bin/rm /*"' which would work; limit the access the user starting the agent has.

Before doing that it is worth pointing out that, if you are setting up the docker node on the server running the jenkins container, the jenkins user there was created with a home directory of the bind volume mount, so you will need to create a /home/jenkins on this server; on all the other nodes you would create the user with a home directory of /home/jenkins anyway.

As this node will provide docker to pipelines remember to add the user jenkins to the docker group on the host server so the jenkins user can actually use docker.
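
On the host running the jenkins container that works out to something like the below (using the example ‘jenkins’ user with uid 6004 created earlier).

mkdir -p /home/jenkins/bin
chown -R jenkins:jenkins /home/jenkins
usermod -aG docker jenkins      # allow the jenkins user to use docker for container build pipelines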

You can then ‘su’ to the jenkins user and run the command to start the agent; it will then show as available in the jenkins node list.

One additional note: that works immediately as you are effectively on localhost. For adding additional nodes you will need to open firewall port 50000 on the host running the jenkins container to allow agents from other machines in your network to connect to your Jenkins container when they are started.

At this point you may be tempted to add all the software you could ever need to your VM running these CI containers and use only this one additional node to do everything. I would say don’t, you are probably just duplicating what you already have by doing so.

For example you may have a linux desktop where you are already doing a lot of GNU C/C++ development work. Rather than install all the compilers onto the CI host or into a dedicated VM for a compile environment (and spending ages tracking down missing dependencies you installed on your development desktop and forgot about), just perform the extremely trivial task of making your desktop a jenkins node, as you already know it has all the tools needed.

And you will need other nodes for other functions as time goes by; for example I only build RPM packages at present, but with CentOS no longer stable I have fired up a Debian server and am moving to building DEB packages instead, as like many people I do not plan on remaining on any RHEL stream until a viable non-vendor owned alternative to CentOS appears.

It is also worth mentioning source repositories here. We have installed gitlab-ce in a container and you will probably do most of your CI pipeline work from there, as that was the entire point of this exercise. However there is nothing to stop you from creating pipelines within your new jenkins container from other repositories, such as the public github and gitlab sites, should you also have existing github or gitlab repositories (which you should leave there if you do, as offsite backups are always a good thing).

Personally I make the source repositories available for public download as I don’t need to edit any of the files via the gitlab-ce interface and cannot be bothered with creating credentials. If you use private repositories you will need to set credentials for each source repository used when creating the pipeline.

It is also worth mentioning pipelines and nodes, specifically what happens if you need to use multiple nodes in a pipeline. It is not always possible to perform all activities for a pipeline on a single node; for example I can do all compiles and rpm creation on one node, but from that node I have to scp the resulting rpm to another node, where the next pipeline step runs to move the rpm into a public repo and rebuild the repo index. To do so you must be aware that every stage in the pipeline that selects its own agent will download the git project in its entirety, so even on the last node, which does nothing but move a file and run a createrepo command, the entire git project is downloaded anyway even though it will never be used. So unless you really do require multiple nodes in a build, do not select nodes at a stage level but use a global node selection for the entire pipeline so the project is only downloaded once.

Using multiple nodes can be inefficient for another reason as well: because you must move (scp) the stages of work between the nodes as needed, you really end up referring to nodes in the pipeline by node name (hostname) rather than by label so you know exactly where to move things, and where possible labels are preferred in a pipeline over node names.

Also on pipelines, you would normally use a Jenkinsfile to define a pipeline; if you create a project expecting one and you add a source that does not have a Jenkinsfile in the project, no build will occur. That is worth remembering if you are wondering why your pipelines do not build.

As promised, a few Jenkinsfile examples

First a note on comments. You may use C style comments such as “/* multiline comments */” or for single line comments use “//” at the start of each line being commented. Mentioned as comments are always important :-).

Also it is important to note that these are examples, not working files (well, they work perfectly well, but you need all the backend files in my source repositories for them to work for you; there is a lot more to a pipeline than just a Jenkinsfile, you need all the code as well :-)).

The below Jenkinsfile will build a container image, the image will be tagged with the build number. Only when you are happy would you tag an image as ‘latest’ or give it a production version number and make it available.

Note that it requires that you have defined a node with the label ‘docker-service’. In the setup steps above it was briefly discussed how to set up the host server running docker (the one providing the environment for the jenkins container) as a docker-service node, but you can add as many as you want as long as one of the nodes is available.

pipeline {
   // Must run the build on a node that has docker on it
   agent { label 'docker-service' } 

   stages {
      // build the docker image
      stage("build") {
         steps {
            sh 'docker build -t localhost/ircserver:$BUILD_ID .'
         }
      }

      // squash the layers in the docker image
      stage("compress") {
         steps {
            echo "--- Squashing image ---"
            sh 'bash ./squash_image.sh localhost/ircserver:$BUILD_ID'
         }
      }

      // push the new image to the local repository
      stage("publish") {
         steps {
            // I would want to tag it as latest, and that is done by id not name
            echo "--- Would deploy all, but I won't, only new master branches ---"
            sh '''
               id=`docker image list | grep 'localhost/ircserver' | grep $BUILD_ID | awk {'print $3'}`
               if [ "${id}." != "." ];
               then
                  if [ "$BRANCH_NAME." == "master." ];
                  then
                     docker image tag ${id} docker-local:5000/ircserver:latest
                     docker push docker-local:5000/ircserver:latest
                     echo "Pushed to registry docker-local:5000 as ircserver:latest"
                     docker image rm docker-local:5000/ircserver:latest
                  else
                     docker image tag ${id} docker-local:5000/ircserver:$BRANCH_NAME
                     docker push docker-local:5000/ircserver:$BRANCH_NAME
                     docker image rm docker-local:5000/ircserver:$BRANCH_NAME
                  fi
                  # No longer needed on worker node
                  docker image rm localhost/ircserver:$BUILD_ID
               else
                  echo "No container image found to re-tag"
               fi
               # end of sh script
            '''
         }
      }
   } // end of stages

   post {
      // run if there was a failure, may be image layers left lying around
      failure {
         echo "*** Manully remove container and image workspaces ***"
      }
   }
}

The below example can be used to compile a C program. Ideally you would run it through a few “test” steps as well before packaging, but as they would be specific to your program you will need to add those yourself.

This example uses a global agent of none and uses step level selection of agents to run steps across multiple nodes.

pipeline {
   // must run on a server with gcc and package building commands on
   // then must run on public facing repo server to update the repo listings
   agent none
   environment {
      APP_VERSION="0.01"
      REPO_SERVER="10.0.2.15"
      REPO_DIR="/var/tmp/testrepo"
   }
   stages {
      stage("build") {
         agent { label 'gcc-service' }
         steps {
            sh 'echo "I AM IN $PWD"'
            sh 'make testprog MK_BRANCH_NAME=$BRANCH_NAME'
            echo 'What happened ?, if it worked well on the way'
         }
      }
      stage("package") {
         agent { label 'gcc-service' }
         steps {
            echo "Need to place rpm packaging commands in here"
            sh '''
               env
               echo "Still in directory $PWD, filelist is"
               machinetype=`uname -m`
               touch projectname-$APP_VERSION.${machinetype}.rpm
               ls
            '''
         }
      }
      stage("pushtorepo") {
         agent { label 'gcc-service' }
         steps {
            echo "Need to place scp commands here to copy to repo server"
            sh '''
               bb=`whoami`
               machinetype=`uname -m`
               echo "would run: scp projectname-$APP_VERSION.${machinetype}.rpm ${bb}@$REPO_SERVER:$REPO_DIR/${machinetype}"
            '''
         }
      }

      stage("creatrepo") {
         agent { label 'yumrepo-service' }
         steps {
            echo "And why so messy, parts must run on different servers"
            echo "Would run: cd $REPO_DIR;creatrepo ."
         }
      }
   } // end stages
}
Posted in Automation, Unix

Is OpenIndiana a viable replacement for CentOS8, for me no

A post in my ongoing saga to find a replacement operating system for CentOS8. These posts are primarily notes I am making for myself on why I consider an OS an acceptable replacement, or something I personally cannot use to replace my CentOS8 servers. They are certainly not detailed posts or even fully evaluated recommendations for anyone else… I am not fully evaluating each OS but only focusing on what I need and whether the OS provides it.

OpenIndiana is a production ready operating system, and depending upon what you want to use your servers for it is an option worth considering; but not as a drop-in replacement for CentOS8 in the way Oracle Linux and Debian can be (as documented in my earlier posts).

Of the requirements I have, which are puppet and docker, the good news is that puppet is available in the standard repositories. The bad news is that as OpenIndiana does not use a Linux kernel, docker is not available for this operating system.

Despite docker not being available, OpenIndiana does of course provide Solaris Zones for application isolation, although they are fairly heavyweight compared to docker containers, are of course configured completely differently, and are not truly isolated in that the easiest way of networking a zone is to give it a public ip on your network.

OpenIndiana is available from the OpenIndiana download page if you are interested in giving it a try.

Also you will need to learn a completely new set of packaging tools, and a new set of system management tools (svcadm/zonecfg/zfs/zpool etc.). For example, instead of “systemctl status httpd”, “systemctl enable httpd” and “systemctl start httpd” you would use the below.

root@openindiana:/etc/apache2/2.4# svcs -a | grep apache2
disabled       21:38:40 svc:/network/http:apache24
root@openindiana:/etc/apache2/2.4# svcs svc:/network/http:apache24
STATE          STIME    FMRI
disabled       21:38:40 svc:/network/http:apache24
root@openindiana:/etc/apache2/2.4# svcadm enable svc:/network/http:apache24
root@openindiana:/etc/apache2/2.4# svcs svc:/network/http:apache24
STATE          STIME    FMRI
online         22:32:15 svc:/network/http:apache24
root@openindiana:/etc/apache2/2.4# 

However if you are used to creating service files for systemd you should have no trouble creating service (SMF) manifests for the OpenIndiana operating system.

Likewise the package management commands are different to CentOS; fortunately most are wrapped by the ‘pkg’ command. However it is not all plain sailing, as the below shows.

root@openindiana:/etc/apache2/2.4# pkg search php-pdo
INDEX      ACTION VALUE                                        PACKAGE
pkg.fmri   set    openindiana.org/web/php-56/extension/php-pdo pkg:/web/php-56/extension/php-pdo@5.6.35-2018.0.0.1
pkg.fmri   set    openindiana.org/web/php-54/extension/php-pdo pkg:/web/php-54/extension/php-pdo@5.4.45-2016.0.0.2
pkg.fmri   set    openindiana.org/web/php-55/extension/php-pdo pkg:/web/php-55/extension/php-pdo@5.5.38-2016.0.0.1
root@openindiana:/etc/apache2/2.4# pkg install php-pdo
Creating Plan (Solver setup): -
pkg install: No matching version of web/php-70/extension/php-pdo can be installed:
  Reject:  pkg://openindiana.org/web/php-70/extension/php-pdo@7.0.33-2020.0.1.5
  Reason:  This version is excluded by installed incorporation consolidation/userland/userland-incorporation@0.5.11-2020.0.1.12901
root@openindiana:/etc/apache2/2.4# 

A full package update is a bit cleaner in that it builds a new boot environment.

root@openindiana:/etc/apache2/2.4# pkg update
            Packages to remove:   1
           Packages to install:  10
            Packages to update: 481
       Create boot environment: Yes
Create backup boot environment:  No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            492/492   16174/16174  408.9/408.9  219k/s

PHASE                                          ITEMS
Removing old actions                       6025/6025
Installing new actions                     5528/5528
Updating modified actions                16031/16031
Updating package state database                 Done 
Updating package cache                       482/482 
Updating image state                            Done 
Creating fast lookup database                   Done 

A clone of openindiana exists and has been updated and activated.
On the next boot the Boot Environment openindiana-2021:01:02 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

It should be noted that after the full update the ‘pkg install php-pdo’ command shown above did actually work. However trying to install other packages I would need proved frustrating; the package management system seems to have no equivalent of the dnf/yum behaviour of simply selecting a match for what is already installed.

root@openindiana:~# pkg install php-opcache \
>   php-intl \
>   php-common \
>   php-soap \
>   php-mbstring \
>   php-gd \
>   php-json \
>   php-fpm \
>   php-xml \
>   php \
>   php-cli \
>   php-mysqlnd \
>   php-pecl-http \
>   php-bz2 \
>   php-zip \
>   php-pclzip

pkg install: The following pattern(s) did not match any allowable packages.  Try
using a different matching pattern, or refreshing publisher information:

        php-pclzip
        php
        php-mysqlnd
        php-pecl-http
        php-xml
'php-common' matches multiple packages
        pkg://openindiana.org/web/php-73/php-common
        pkg://openindiana.org/web/php-70/php-common
'php-fpm' matches multiple packages
        pkg://openindiana.org/web/php-73/php-fpm
        pkg://openindiana.org/web/php-70/php-fpm
'php-cli' matches multiple packages
        pkg://openindiana.org/web/php-73/php-cli
        pkg://openindiana.org/web/php-70/php-cli

Please provide one of the package FMRIs listed above to the install command.
root@openindiana:~# 

So you have to manually determine what is actually installed. Incidentally, the ‘pkg update’ command upgraded everything to php-70, so I assume php-73 is only partially implemented at this point, or 73 would have been installed.

root@openindiana:~# pkg list | grep php
web/php-70/extension/php-pdo                      7.0.33-2020.0.1.5          i--
web/php-70/php-cli                                7.0.33-2020.0.1.5          i--
web/php-70/php-common                             7.0.33-2020.0.1.5          i--
root@openindiana:~#

Also it highlights that the packages I need to move my website to an OpenIndiana server may not exist, making it even less likely it is a viable alternative for me.

You will also have seen that a full package update logged messages saying a new boot environment was created to boot from. So as well as learning new package commands you must learn new OS management commands simply as a result of wanting to install packages. As a side note, it is interesting that a new BE gets a new copy of /var; as I’m not aware of any OS configuration files that live under there, just retaining the existing /var with website pages, mysql/mariadb databases, logfiles etc. would seem to me to make more sense than replicating them all in a new boot environment. But then what do I know; and this post is not on how to use OpenIndiana but on why I would find it difficult to replace a CentOS server farm with it.

root@openindiana:~# beadm list
BE                     Active Mountpoint Space Policy Created
openindiana            -      -          7.65M static 2020-08-27 14:14
openindiana-2021:01:02 NR     /          5.76G static 2021-01-02 03:35

root@openindiana:~# zfs list
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
rpool                                  9.90G  28.4G    33K  /rpool
rpool/ROOT                             5.77G  28.4G    24K  legacy
rpool/ROOT/openindiana                 7.65M  28.4G  2.43G  /
rpool/ROOT/openindiana-2021:01:02      5.76G  28.4G  2.52G  /
rpool/ROOT/openindiana-2021:01:02/var  1.94G  28.4G   910M  /var
rpool/ROOT/openindiana/var             2.49M  28.4G   956M  /var
rpool/dump                             2.00G  28.4G  2.00G  -
rpool/export                             77K  28.4G    24K  /export
rpool/export/home                        53K  28.4G    24K  /export/home
rpool/export/home/mark                   29K  28.4G    29K  /export/home/mark
rpool/swap                             2.13G  30.5G  25.5M  -
root@openindiana:~# 

On the firewall side OpenIndiana uses “IP Filter” as the firewall solution. As such, users coming from a CentOS environment where iptables and nftables have been used (either hidden behind firewalld or used directly) will have to learn a new set of commands, and rewrite any customised automated firewall deployment and checking tools.
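
As a very rough illustration of how different it is (the interface name and rule below are purely examples, not a recommended ruleset):

echo "pass in quick on e1000g0 proto tcp from any to any port = 22 keep state" >> /etc/ipf/ipf.conf
svcadm enable network/ipfilter      # start the IP Filter service
ipfstat -io                         # display the loaded inbound/outbound rules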

It should also be noted that I did previously use solaris-x86 (back in the day when SUN still owned it) and OpenIndiana, and found that everything I needed as far as applications/packages go compiled quite happily from source with the GCC compiler, so with a little bit of work everything needed is available… apart from Docker of course, which needs a Linux kernel. I also found ZFS quotas extremely useful, although I only played with zones as they provided no great benefit to me. It most certainly is a good solution if you are looking for an ‘all-in-one’ server for your applications, not so great if you are looking for application isolation or cloud deployment.

One thing that must be mentioned is that I have found it impossible to get OpenIndiana running under OpenStack (I can get through the installation procedure and install, but it will not boot afterward). No ‘cloud images’ are available and the repositories do not seem to contain a ‘cloud-init’ package either (at least a ‘pkg search’ cannot find one). It does not support as much hardware as linux distributions do, so you may run into issues. It will run under linux KVM, with the caveat that on a reboot the KVM instance goes into a paused state and you must issue ‘virsh reset xxx’ and ‘virsh resume xxx’ to let the reboot continue, which makes it less than ideal as an OS to run under KVM. This is also unlikely to ever change, as this operating system is simply not designed for cloud use; it is a server operating system that runs best on native bare hardware.

For me personally, I have been isolating the applications I use (those that do not need access to the entire system anyway) into standard containers for docker and k8s use, so the inability to use docker prevents me from using this as a replacement for CentOS8; docker has become such a part of my application deployment that I have even needed to run a local registry, and moving applications from containers back to local server applications seems counter-intuitive to me.

So for me it is not an option as a ‘quick’ replacement for CentOS8. For anyone setting up new server environments from scratch with no intention of using containers (although zones are available) it is a good option.

A note for desktop users; I have only ever used server installs so I do not know how it behaves as a desktop, however it is a good bet that if you rely on applications from rpmfusion, copr, or even snap packages you will probably find it wanting. I would consider it a ‘server’ OS.

Posted in Unix

Is openSUSE Leap a viable replacement for CentOS8, for me no

A post in my ongoing saga to find a replacement operating system for CentOS8. These posts are primarily notes I am making for myself on why I consider an OS an acceptable replacement, or something I personally cannot use to replace my CentOS8 servers. They are certainly not detailed posts or even fully evaluated recommendations for anyone else… I am not fully evaluating each OS but only focusing on what I need and whether the OS provides it.

openSUSE ‘Leap’ seems to have a support (LTS) cycle of around 18 months between point releases.
‘Leap’ is the more stable release; there is also the ‘Tumbleweed’ option, which is updated more frequently with ‘stable’ packages and is more for testing the latest features not yet in ‘Leap’. Tumbleweed is potentially more stable than CentOS ‘stream’, but it sits in the same position in the development stream that CentOS ‘stream’ has now been placed in.

I decided to look at the ‘Leap’ release, of which 15.2 has now been released. The ISOs for all platform types are available at http://download.opensuse.org/distribution/leap/15.2/.

For evaluation purposes do not reply “yes” to the “do you want to enable network repositories now” prompt at install time. If you do it will try to download over 4Gb of data (it seems it will use the network repositories in preference to what is on the install media). Reply “no” and it will install from the install media.

Also I could find nowhere to define a static network setup during installation, so it installed using dhcp addressing, which I do not want. After installation, in the “Yast Network Settings” you will see that by default it sets the hostname based upon what it gets from the dhcp server, and there is nowhere to set a static address; likewise under “settings/network” there is no way of setting a static ip-address. There is documentation on how to do so from a GUI using the “Yast Network” configuration at https://doc.opensuse.org/documentation/leap/reference/html/book-opensuse-reference/cha-network.html (tip: search the page for “set a static”), however it does not work, as any attempt to open the overview tab just throws up an error saying the network is managed by NetworkManager. After much googling: in the Yast Network configuration change network management from NetworkManager to Wicked, and then you can set a static ip-address and hostname (that took a lot of googling to find; god knows how you would do it from the command line on a server with no gui); you must still use the hostname/dns tab to set a static hostname and define custom dns settings. Despite it saying the changes are applied there is no change in ip-address; a ‘systemctl restart NetworkManager’ and ‘systemctl restart network’ have no effect, and a reboot is needed to pick up the change.
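
For reference, once wicked is managing the interface a static setup can also be done from the command line by editing the ifcfg file; a minimal sketch (interface name and addresses are examples) is below.

# /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='192.168.1.50/24'

# /etc/sysconfig/network/routes
default 192.168.1.1 - -

# then restart wicked to apply the change
systemctl restart wicked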

There are probably too many incompatibilities between this and other *nix’s for it to be an immediately viable alternative (specifically package management). Some examples:

  • to find what package contains ifconfig the command is “cnf ifconfig”… that command does however tell you the exact command to use to install the package, which is nice
  • to install the package the command to use is “zypper install net-tools-deprecated”

While I am used to dnf/yum/rpm, and can manage to fight my way through apt/dpkg, do I really want to learn a completely new packaging system just to replace a CentOS8 server; not at the moment.

The openSUSE community do provide cloud images (including for openstack) which is one of my requirements for a replacement OS. These can be found at https://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.2/images/ although I have not tested one.

Docker is available for openSUSE. Installation instructions for the Yast2 gui and command line options (and a workaround for a bug where it ignores btrfs quotas) are at https://en.opensuse.org/Docker.

Mariadb is available; a walkthrough on how to install it is at https://computingforgeeks.com/how-to-install-mariadb-on-opensuse/ and involves adding a separate repository.

Puppet-agent is not available for openSUSE Leap 15.2, documented at https://software.opensuse.org/package/puppet with the comment “There is no official package available for openSUSE Leap 15.2”. So to replace many servers would be a lot of manual work.

So while openSUSE looks interesting as a replacement for CentOS8, it is not really an option if you are looking for a quick and easy replacement, as there are new packaging commands to learn, and the major detail that puppet-agent is not supported (yet) makes replacing more than a single server impractical for me.

If you are not replacing an existing rhel based server farm but looking to implement a new solution from scratch then this OS could be considered as an option for you. Although one thing to consider is that Leap point releases seem to become available at around 18 month intervals, so LTS can be considered as 18 months (the commercial offering has LTS of 10 years between major releases, but of course as a CentOS replacement we are only looking at free options).

Posted in Unix

Is Debian a viable alternative to CentOS8, for me yes

Since the RedHat announcement that CentOS8 ‘stable’ is becoming the CentOS8 ‘stream’ test system, rather than continuing to be provided as a stable system, many people are looking for alternative operating systems.

I have already covered in the previous post how to successfully, and with minimal effort, convert a CentOS8 system to Oracle Linux 8. This post is on whether, for my use, Debian can also be considered an alternative. The answer is yes, and these are my notes on that.

So this minimal post is on the steps needed to create a Debian server that provides all the functionality of one of my more complex servers.

Install a new bare Debian10 (buster) system; if you intend to use it as a web server, also select the minimal webserver option at install time to install apache2.

This entire post is based upon my evaluation of installing a Debian10 system to replace a CentOS8 one that was running a puppet-agent, docker for container support, mariadb, a bacula-client for backups etc.; basically a rather complicated system.

Obviously you should set a static ip-address for the server.
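
On Debian10 that is done in /etc/network/interfaces; a minimal sketch (interface name and addresses are examples) would be:

auto ens3
iface ens3 inet static
    address 192.168.1.60
    netmask 255.255.255.0
    gateway 192.168.1.1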

Once the system is installed, the below commands create an environment that is, from initial testing, a workable clone of my CentOS8 system.

apt install net-tools               # <=== for ifconfig
apt update
apt upgrade
#
# mariadb
apt install mariadb-server
#
# nrpe
#    notes on nrpe
#       on CentOS the package is 'nrpe' managed with 'systemctl xx nrpe', on debian use 'systemctl xx nagios-nrpe-server'
#       on CentOS plugins were in /usr/lib64/nagios/plugins, on Debian they are in /usr/lib/nagios/plugins
#       on CentOS custom commands were defined in directory /etc/nrpe.d, on debian in directory /etc/nagios/nrpe.d
#    those notes are important as puppet rules for a rhel system cannot be reused on debian, in a mixed server
#    environment there will be a lot of if/else and even template customisations to cope with rhel and debian
#    (this is the most incompatible application I have found in a conversion from rhel to debian)
apt install nagios-nrpe-server
#
# Docker-ce
apt-get remove docker docker-engine docker.io containerd runc
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce=5:20.10.1~3-0~debian-buster docker-ce-cli=5:20.10.1~3-0~debian-buster containerd.io
#
# puppet agent
wget https://apt.puppetlabs.com/puppet7-release-buster.deb
dpkg -i puppet7-release-buster.deb
apt-get update
apt install puppet-agent
#
# Modules I need for my website
# note: centos used php-pecl-zip, not available in debian; so adding all available pecl and zip php modules
apt install \
  php-pdo \
  php-opcache \
  php-intl \
  php-common \
  php-soap \
  php-mbstring \
  php-gd \
  php-json \
  php-fpm \
  php-xml \
  php \
  php-cli \
  php-mysqlnd \
  php-pecl-http \
  php-bz2 \
  php-zip \
  php-pclzip 
a2enmod proxy_fcgi setenvif
a2enconf php7.3-fpm
systemctl restart apache2
#
# bacula-client
#   notes: I let puppet do this and perform all configuration needed.
#          if not using a config management tool you will have to customise it yourself
#          to define storage and director servers
apt install bacula-client
#
# Done

I also copied across a mariadb full database dump from my centOS8 server and loaded it into the Debian mariadb server with no issues.
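
For reference that was nothing more complicated than something like:

# on the CentOS8 server
mysqldump --all-databases --routines > all-databases.sql
# on the Debian server (after 'apt install mariadb-server')
mysql < all-databases.sql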

I also tested containers built on Fedora33 that were running on my CentOS8 system in rather complex network configurations and they ran without issues on Debian.

The key things to note (mainly for automated configuration rules for software deployment) are

  • On CentOS8 NRPE is packaged as 'nrpe', service nrpe.service,
    plugins were in /usr/lib64/nagios/plugins, custom commands in /etc/nrpe.d.
    On Debian NRPE is packaged as 'nagios-nrpe-server', service nagios-nrpe-server.service,
    plugins are in /usr/lib/nagios/plugins, custom commands in /etc/nagios/nrpe.d
  • On CentOS8 the webserver is package 'httpd' and service httpd.service.
    On Debian the package is 'apache2' and service apache2.service
  • There is no 'wheel' group on Debian. If adding admin users it is probably best to put them in the 'sudo' group
  • Most of the changes I was able to implement in puppet fairly easily by wrapping the existing configurations
    in a "if ( $facts['os']['family'] == "RedHat" ) { }" block followed by
    "elsif ( $facts['os']['family'] == "Debian" ) { }" and a default catch-all else block for other
    operating systems I may test. It should be noted that the if/elsif/else block had to be implemented
    in quite a few rules

It should also be noted that neither iptables nor firewalld are installed by default; that suits me perfectly as I use iptables on almost all internet facing servers, and firewalld on desktops and internal servers behind firewalls that do not need the fine grained control iptables provides; so not having to uninstall either, and just selecting the one I wish to use, is an advantage.

I also have not actually run a full clone of any of my servers on Debian. While there should be no issues (tested to the point where I should only need to copy over all the web directories), all my alternatives-to-CentOS evaluations are being done on minimally sized servers; a full test will only be done when I have decided which OS to move to.

Debian OS releases have shorter support cycles than Oracle Linux (which uses RHEL as a base so should have LTS until 2029). The latest release of Debian (buster) moves from Debian support in 2022 to volunteers maintaining security support until 2024. Effectively a rhel based release can keep running for 7-8 years and a Debian based one for 2-3 years before a major OS upgrade is needed (reference: https://wiki.debian.org/LTS).

So while Debian is a workable replacement for CentOS8 anyone looking for stability is still likely to move toward Oracle Linux.

I still intend to evaluate openSUSE in a later post also, as I wish to look at as many alternatives as possible. However openSUSE 'Leap' (the more stable release; there is also a 'Tumbleweed' release for those wanting changes faster) seems to require upgrading between point releases at around 18 month intervals. The commercial release has support for 10 years for each release, but of course for CentOS8 replacements I am only looking at the free alternatives. I may correct that statement as I do more research into openSUSE in that later post.

Posted in Unix

The Oracle conversion script to convert CentOS to Oracle Linux, and an alternative

As mentioned in my previous post there is a script supplied to convert CentOS systems to Oracle Linux, available at https://github.com/oracle/centos2ol. As mentioned in that post it is not safe to use at the current time.

At the end of this post I will list steps that have worked for me, which you should use in preference to this script. They are at the end because this post is primarily about why not to use the supplied conversion script.

Obviously, as Oracle Linux relies on RedHat continuing to follow the software license guidelines and make available all the sources they modify, it can only be hoped they continue to do so, ensuring Oracle Linux remains a viable alternative to CentOS. However, as the RedHat announcement makes CentOS unsafe for production work any more, Oracle Linux is the currently available free alternative that requires the least conversion work.

As of 26 December 2020, using that script on a fresh CentOS8 install from CentOS-8-x86_64-1905-dvd1.iso results in the below.

The error that stopped the converted system booting after the conversion script was run was

Failed to switch root: Specified switch root path '/sysroot' does not seem to be an OS tree. os-release file is missing.

So I built a new CentOS8 KVM instance as a bare server install. Prior to running the conversion script

[mark@c8toOL8-test etc]$ cat os-release
NAME="CentOS Linux"
VERSION="8 (Core)" 
ID="centos"
ID_LIKE="rhel fedora" 
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8 (Core)" 
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"

[mark@c8toOL8-test etc]$ ls -la *release
-rw-r--r--. 1 root root  38 Aug 14  2019 centos-release
-rw-r--r--. 1 root root 420 Aug 14  2019 os-release
lrwxrwxrwx. 1 root root  14 Aug 14  2019 redhat-release -> centos-release
lrwxrwxrwx. 1 root root  14 Aug 14  2019 system-release -> centos-release
[mark@c8toOL8-test etc]$

[mark@c8toOL8-test etc]$ ls /etc/yum.repos.d
CentOS-AppStream.repo   CentOS-CR.repo         CentOS-fasttrack.repo   CentOS-Sources.repo
CentOS-Base.repo        CentOS-Debuginfo.repo  CentOS-Media.repo       CentOS-Vault.repo
CentOS-centosplus.repo  CentOS-Extras.repo     CentOS-PowerTools.repo
[mark@c8toOL8-test etc]$

After running the supplied conversion script we are left with

[root@c8toOL8-test centos2ol]# cd /etc
[root@c8toOL8-test etc]# cat os-release
cat: os-release: No such file or directory
[root@c8toOL8-test etc]# ls *release
ls: cannot access '*release': No such file or directory
[root@c8toOL8-test etc]# ls /etc/yum.repos.d
switch-to-oraclelinux.repo

So the conversion script seems to get only part of the way through, far enough
to wipe out CentOS but not far enough to make OL usable.

So do not use the oracle conversion script.

This does not mean that the script may not become usable at some future point, it is just not safe to use at present.

Unrelated information

On a fresh Oracle Linux 8 install these are the files created

[mark@oracle8-freshinstall etc]$ cat os-release
NAME="Oracle Linux Server"
VERSION="8.3"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:3:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.3
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.3

[mark@oracle8-freshinstall etc]$ ls -la *release
-rw-r--r--. 1 root root  32 Nov  5 12:58 oracle-release
-rw-r--r--. 1 root root 479 Nov  5 12:58 os-release
-rw-r--r--. 1 root root  45 Nov  5 12:58 redhat-release
lrwxrwxrwx. 1 root root  14 Nov  5 12:58 system-release -> oracle-release
[mark@oracle8-freshinstall etc]$ 

[mark@oracle8-freshinstall etc]$ ls /etc/yum.repos.d
oracle-linux-ol8.repo  uek-ol8.repo

An alternative upgrade method that seems to work

This method was suggested in a response to the original “CentOS moving to be stream only” post at https://blog.centos.org/2020/12/future-is-centos-stream/ by commenter “Phil”. I have edited it to what works for me (which was basically correcting ‘centos-release’ and highlighting the importance of needing yum-utils installed before disabling the centos repositories; plus the reset module step I needed to use).

setenforce 0
dnf -y install yum-utils       # <---- this MUST be installed
repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
wget \
  ${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
  ${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
  ${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
  ${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm

rpm -e centos-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm
:> /etc/dnf/vars/ociregion
dnf remove centos-linux-repos
dnf --refresh distro-sync --allowerasing --nobest

The last dnf command results in

[root@oracle8 ~]# dnf --refresh distro-sync --allowerasing --nobest
Oracle Linux 8 BaseOS Latest (x86_64)                                                                                                            28 kB/s | 2.7 kB     00:00    
Oracle Linux 8 Application Stream (x86_64)                                                                                                       36 kB/s | 3.1 kB     00:00    
Oracle Instant Client for Oracle Linux 8 (x86_64)                                                                                                28 kB/s | 2.5 kB     00:00    
Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)                                                                       26 kB/s | 2.5 kB     00:00    
Dependencies resolved.
The operation would result in switching of module 'llvm-toolset' stream 'rhel8' to stream 'ol8'
Error: It is not possible to switch enabled streams of a module.
It is recommended to remove all installed content from the module, and reset the module using 'dnf module reset ' command. After you reset the module, you can install the other stream.

Easily fixed with

[root@oracle8 ~]# dnf module reset llvm-toolset
Last metadata expiration check: 0:01:22 ago on Sat 26 Dec 2020 11:41:27 NZDT.
Dependencies resolved.
================================================================================================================================================================================
 Package                                   Architecture                             Version                                     Repository                                 Size
================================================================================================================================================================================
Resetting modules:
 llvm-toolset                                                                                                                                                                  
Transaction Summary
================================================================================================================================================================================
Is this ok [y/N]: y
Complete!

After which rerunning the “dnf --refresh distro-sync --allowerasing --nobest” works.

There was one failure

Failed:
  pcp-webapi-4.3.0-3.el8.x86_64 

This left “package-cleanup --problems” reporting problems for that; resolved with “rpm -e pcp pcp-webapi pcp-manager --nopreun” (--nopreun was required as the pre-uninstall script kept erroring).

“package-cleanup --orphans” showed quite a few entries; these are packages installed that are not available from the configured repositories, and included things like buildah and podman that certainly should be.

“dnf autoremove” also listed a few packages I would want to keep as available for removal.

The surprising issue was that “shutdown -r now” refused to shut down, and “init 6” had no effect; neither did “halt” or “reboot”. Too many interface changes? I had to trigger a ‘force-off’ poweroff from the virt-manager interface and start the KVM again from there. However after that first forced restart reboots were behaving normally again.

So that would be my recommended upgrade path, not the script on github.

Future posts planned

While conversion to Oracle Linux is the least disruptive as far as compatibility goes, I will also be looking at getting tools like docker-ce, puppet-agent, bacula-client etc. running on Debian; which, if successful, may result in a lot fewer rhel based posts here and a lot more debian based ones.

Posted in Unix

Is Oracle Linux a viable alternative to CentOS

By now you are all aware of the RedHat/IBM decision to destroy CentOS as a viable stable platform. Oracle has been suggested on many Forums as an alternative.

While some responses on forums indicate a belief that Oracle Linux is based on CentOS and therefore would also be unstable in the future, the Oracle blogs themselves indicate their builds are from the RedHat sources, not CentOS; and generally they release RHEL updates faster than CentOS did previously.

Oracle themselves are aware they have been touted as an option, and have even published a script that can be used to convert a running CentOS system to an Oracle Linux system; for CentOS8 to Oracle Linux 8 that script is located at https://github.com/oracle/centos2ol. DO NOT USE THAT SCRIPT, see the update notes at the end of this post.

The key thing is that the script is for CentOS systems already running, so what about deploying Oracle Linux 8 from scratch? The main issues I have are

  • the install DVD media is 8.6Gb, double the size of CentOS
  • desktop installs require accepting a license agreement, so a hands-off install may be difficult
  • oracle logos are all over the place
  • there is no ‘epel-release’ package, you must instead ‘dnf install oracle-epel-release-el8’; that is not obvious, but it obviously must be installed to obtain most of the packages everyone needs, such as nrpe or even utilities like iftop
  • where I refer to a ‘desktop’ install in this post it is the default install software selection of ‘server with gui’; as I was in a bit of a rush and only installing to test for this post I did not change the default to ‘Workstation’
  • there appear to be no publicly available Oracle Linux cloud images (see the paragraph below)

Many CentOS users would deploy to an in-house cloud, normally OpenStack. Images that will work under OpenStack are listed at https://docs.openstack.org/image-guide/obtain-images.html and Oracle Linux is not one of them. The CentOS stable images from before Redhat moved CentOS to “stream” are there. Also listed are RHEL images, but a Redhat Enterprise license subscription is needed to download those. So the options now are Debian, Fedora, Ubuntu, SUSE or unofficial images for BSD based servers.
Oracle Linux images seem to be provided only for “Oracle cloud” and are not publicly available for those running OpenStack. A google search finds lots of examples on how to run OpenStack on Oracle Linux servers, but nothing on how to run Oracle Linux under OpenStack. Sorry Oracle, most corporations have OpenStack as internally hosted ‘lab’ cloud systems, with no AWS/Google/Azure/Oracle interfaces in a ‘lab’ environment.

However I have checked and ‘cloud-init’ is a package available with Oracle Linux, so it should be simple to create your own custom images. Or, if you don’t want the effort, launch a CentOS8 instance, use the Oracle provided script to convert it to OL8, remove all the files indicating cloud-init has already run, then shutdown and snapshot it to use as a new cloud image.
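
Resetting cloud-init before the snapshot is as simple as the below sketch; ‘cloud-init clean’ removes the state under /var/lib/cloud so cloud-init runs again on the next boot.

cloud-init clean --logs     # remove cloud-init state and logs
shutdown -h now             # then snapshot the stopped instance to use as an image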

For desktop users it should be noted that neither of the additional required steps for CentOS8 or RHEL8 (enabling powertools or codeready-builder) recommended as part of enabling the rpmfusion repositories work on Oracle Linux 8. However even without those the repositories seem to work (at least I was able to install vlc), although not being able to complete all the setup steps for the repository may result in issues later on. Obviously rpmfusion is a repository that must be accessible to use the system as a ‘desktop’. Setup instructions for that are still at https://rpmfusion.org/Configuration.
Update:26Dec2020: the OL equivalent to powertools is ol8_codeready_builder
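
For reference, a sketch of the rpmfusion setup on OL8 using that repository id would be (the release package URLs are the standard ones from the rpmfusion configuration page):

dnf install https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm \
            https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
dnf config-manager --set-enabled ol8_codeready_builder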

Podman and Docker

For developers podman and buildah are installed by default on a desktop install (and server installs with ‘container management tools’ selected). If you prefer docker there is an article on installing docker on OL8 at https://oracle-base.com/articles/linux/docker-install-docker-on-oracle-linux-ol8 which does not work without an additional flag, as the podman packages installed on OL8 are incompatible with the docker ones. You must allow the existing packages to be removed; installing docker with the --allowerasing option works and shows

===========================================================================================================
 Package                    Arch    Version                                       Repository          Size
===========================================================================================================
Installing:
 docker-ce                  x86_64  3:20.10.1-3.el8                               docker-ce-stable    27 M
Installing dependencies:
 containerd.io              x86_64  1.4.3-3.1.el8                                 docker-ce-stable    33 M
     replacing  runc.x86_64 1.0.0-68.rc92.module+el8.3.0+7866+f387f528
 docker-ce-cli              x86_64  1:20.10.1-3.el8                               docker-ce-stable    33 M
 docker-ce-rootless-extras  x86_64  20.10.1-3.el8                                 docker-ce-stable   9.1 M
 libcgroup                  x86_64  0.41-19.el8                                   ol8_baseos_latest   70 k
Removing dependent packages:
 buildah                    x86_64  1.15.1-2.0.1.module+el8.3.0+7866+f387f528     @AppStream          28 M
 cockpit-podman             noarch  18.1-2.module+el8.3.0+7866+f387f528           @AppStream         4.9 M
 podman                     x86_64  2.0.5-5.0.1.module+el8.3.0+7866+f387f528      @AppStream          51 M
 podman-catatonit           x86_64  2.0.5-5.0.1.module+el8.3.0+7866+f387f528      @AppStream         752 k

Transaction Summary

So while most developers will be using podman on their desktops, servers would normally run docker (in prod as a docker swarm) or kubernetes as the container environment. The --allowerasing option does install a working docker, so you can use either.

I would have added a comment to the article highlighting that requirement, but going to the article comments page results in the error “Prepared Statement Error: Table ‘./oraclebasecms/cms_page_comment_uuids’ is marked as crashed and should be repaired”; I would hope the Oracle Linux development teams do a better job than the website database teams. The steps are

dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf install -y docker-ce --nobest --allowerasing
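
The docker-ce package does not start the daemon for you, so a quick check that the install actually works might be the following (assuming the host has internet access to pull the tiny hello-world test image):

systemctl enable --now docker      # start the daemon and enable it at boot
docker run --rm hello-world        # pulls a test image and prints a confirmation message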

I have tried an image built on Fedora33 with podman that runs happily under docker on CentOS8, and yes, testing shows it also runs happily on Oracle Linux 8 with the UEK kernel.
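
If you want to repeat that sort of test without a registry in between, a minimal sketch is to export the image from podman and load it into docker; the image name here is purely an example.

# on the podman build machine (Fedora in my case)
podman save -o myimage.tar localhost/myimage:latest

# on the OL8 docker host
docker load -i myimage.tar
docker run --rm localhost/myimage:latest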

Puppet

Puppet agent also installs perfectly well with

rpm -Uvh https://yum.puppet.com/puppet7-release-el-8.noarch.rpm
dnf search puppet-agent
dnf install puppet-agent

Minimal changes were needed to the puppet rules on my configuration server as I already had a lot of ‘exceptions’ coded for packages missing in CentOS8, so it was just a case of changing every rule that referred to CentOS from … $facts['os']['name'] == "CentOS" and $facts[os][release][major] == "8" … to … ($facts['os']['name'] == "CentOS" or $facts['os']['name'] == "OracleLinux") and $facts[os][release][major] == "8" …; up until this point I do not have enough mixed server OSs to use case statements like I should :-)
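
If you want to confirm what values those facts report on a new OL8 host before touching any manifests, facter (installed with puppet-agent) will show you directly; this is just a quick check, not part of the puppet configuration itself.

/opt/puppetlabs/bin/facter os.name os.release.major   # reports OracleLinux and 8 on an OL8 host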

Once I had sorted out the new name Oracle uses for ‘epel-release’, starting the puppet agent installed and configured all the applications and tools I use on a CentOS8 server without issue on the Oracle Linux server.
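
For anyone doing the same, the first agent run on a new host is roughly the following; the puppet server name is obviously a placeholder for your own configuration server.

/opt/puppetlabs/bin/puppet config set server puppet.example.lan --section main
/opt/puppetlabs/bin/puppet agent --test   # one-off foreground run to check the catalogue applies
systemctl enable --now puppet             # then let the agent run as a service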

I have not tried puppetserver, just the agent. Testing puppetserver will be a migration from C8 to OL8 on that machine, and it will probably be the last one converted.

Other tools

Testing puppet also installed a lot of other tools, such as the bacula-client, nrpe etc., which all seemed to start correctly.

I like to ssh into remote servers with X11 forwarding on to run GUI apps. To allow that from an OL8 server install, “dnf -y install gdm”. It is enabled by default after install, but to start it without rebooting use “systemctl start gdm”. Annoyingly that switches the console to the GUI, so also “dnf -y install gnome-terminal xterm” to get any work done… after rebooting the console behaves and remains in tty mode while gdm runs and allows remote users sshing in to run GUI apps.
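
As a sketch, the server-side and client-side steps look like this; the user and host names are placeholders, and if the forwarded session complains about X authority make sure xorg-x11-xauth is present on the server (it is normally pulled in with the GUI packages).

# on the OL8 server
dnf -y install gdm gnome-terminal xterm
systemctl start gdm                 # or just reboot; gdm is enabled by default after install

# from a remote workstation with a local X display
ssh -X someuser@ol8server xterm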

What I have not tested – disclaimer

As noted, tools like bacula-client install and start correctly; however I do not have the storage available to actually test that a backup works. Similarly with other applications, I do not want activity generated from the OL8 test systems I have been throwing up and tearing down.

Just because an application starts does not mean it will work. But it seems promising.

Summary

A basic summary is that OracleLinux does seem to be able to replace CentOS from a fresh install point of view, everything seems to be there. Do not use the conversion script, see the updates below.

Next steps

My next post is likely to be on running the conversion script created by Oracle on a C8 test system I fire up. They recommend disabling all non-core repositories and I have quite a few such as docker and puppet; so that will be interesting. I will try on a fresh C8 install, then a really complicated KVM instance, and see what happens. If it turns into a mess I will certainly post on it (assuming the mess is not on this webserver of course… backup time).

Update 2020/12/25 20:21: do not use the Oracle-provided conversion script. I created a test CentOS8 KVM instance from install media CentOS-8-x86_64-1905-dvd1.iso using a standard server install; no third party repositories were used. Note that following the “download script” links only saves the webpage rather than the script, so you have to clone or download the git project. Running the script resulted in the test system rebooting into emergency mode; the test system is unusable. That script is truly a BETA. The error on boot is “Failed to switch root: Specified switch root path ‘/sysroot’ does not seem to be an OS tree. os-release file is missing.” and there will be a follow up post on my investigations into that.
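
For the record, obtaining and running the script the way I tested it was simply the following; I would only ever do this in a throwaway VM (or one with a snapshot to roll back to), and the script file name is the one in the repository at the time of writing.

git clone https://github.com/oracle/centos2ol.git    # do not save the webpage, clone the project
cd centos2ol
sudo bash centos2ol.sh                               # only on a disposable/snapshotted test system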

So the summary update is: if you are willing to install from scratch and migrate applications, Oracle Linux is an option for replacing CentOS servers; but despite posts made by Oracle on it being easy to convert a CentOS system to Oracle Linux, it is not yet safe to attempt to do so. VM replacement rather than conversion is the only safe option… but then if you are switching to Ubuntu, Debian or SUSE you will be following the replacement rather than conversion path anyway.


Still wanting to play those old SWF Flash files browsers no longer support ?

Want to be able to use those old Shockwave Flash files again ?

As we all know all browsers are dropping support for Flash due to its security risks. This can be an issue if you have a large collection of shockwave flash (SWF) games, or even something more useful lying about.

If you do have such files lying about they are not totally useless, as long as the SWF files use the old AVM1 format (ActionScript 1/2); AVM2 (ActionScript 3) became available in 2006 and effectively mandatory around 2013, so more recent files will not run yet.

Enter Ruffle (https://ruffle.rs/), a Flash Player emulator written in Rust and made available as webassembly and javascript for all modern browsers with webassembly support. This allows those old flash files to run in a sandboxed environment.

It provides, in order of what I find important

  • a web server can be configured to serve the application/wasm MIME type (some servers support it by default, apache must have it explicitly configured) and adding a single one-line ‘script’ entry to any webpage is enough to allow the webserver to display working flash content to users, without the users needing to install anything on their client machines/browsers to view it (see the sketch after this list)
  • a browser extension to allow flash content to run on the client browser, for those sites that do not publish ruffle from the server webpage, so old flash sites will work for you again (although by now you will have probably un-bookmarked the old sites that were unusable for you until now)
  • a standalone command line client to play SWF files from your desktop
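
For the apache case, a minimal sketch of the server-side setup is below; the paths and file names are assumptions based on unpacking a self-hosted Ruffle release under the web root, so adjust them to your own layout.

# tell apache about the wasm MIME type if it does not already know it
# (RHEL-family config path shown; adjust for your distribution)
echo 'AddType application/wasm .wasm' > /etc/httpd/conf.d/wasm.conf
systemctl reload httpd

# unpack the self-hosted ruffle release somewhere under the web root, e.g. /var/www/html/ruffle,
# then add this single line to any page that embeds SWF content:
#   <script src="/ruffle/ruffle.js"></script>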

Ruffle is still in development. It does not yet have all the features of AVM1 implemented, and AVM2 compatibility, while begun, is not likely to be available for quite some time, as Ruffle is of course being developed entirely by volunteers. As such it produces nightly builds, so you will probably want to check regularly and get updates periodically. It is under active development, as can be seen from the reported issues, although most issues seem to relate to specific SWF files.

In the list above the order of importance for me is of course based on the fact I run a web server, so I wish to be able to serve content without visiting browsers needing to have extensions installed. Having said that, I did have to trawl through old backups to find any flash files to test this with, as I had cleaned them all off my server when flash support was dropped from browsers (grin). But find some I did, along with the old html that used to show them, and yes, adding that one script line does make most of them available to browsers again.

The standalone client for linux works well also.

I have not tried the browser extension as my webserver now serves the files in a usable way.

This is a very interesting project, so if you have any skills in Rust or javascript jump in and help.

For those running webservers that have old archived SWF content, you may want to look at Ruffle and un-archive any useful or fun files again.

It should be noted that I have not been able to get SWF files such as FlowPlayer and flvplayer to work under Ruffle in order to play ‘FLV’ files rather than SWF ones; but I still have to try exotic combinations such as trying to provide CAB files, so it may not be impossible; and as mentioned Ruffle is still under development, so they may just magically (after a lot of work by the volunteer developers) become usable at some point.
