Setting up a CI pipeline on Linux for your home lab

Setting up a continuous integration pipeline for your home lab may seem like overkill, but if you have the memory available it is simple to do, and if you have an interest in these tools, why not :-). In fact it is so simple to do that you really should.

I must warn in advance that this post was done in a bit of a hurry as I have other things to do; bills to pay. So while I have tried to cover everything needed, I will have missed bits. One example would be how to tweak the solution for podman users; I don't use podman unless absolutely necessary, so if you are using podman, tough.

Requirements: you can have docker, a local container registry, a running gitlab-ce server, and the jenkins CI pipeline all installed on a single VM with as little as 2 vcpus and 8Gb of memory. However, note that if you want to integrate with kubernetes you should place that elsewhere, as you really need at least another 4Gb of memory to use it (even for minikube, if you want to use kubevirt).

For those familiar with git, every project lives in its own directory, and that will not change if you run a local git server. You can be a little cleaner, in that projects you manage with git but are not actively working on can be 'pushed' to your local git server and the local work directories removed (you can always pull them back when you want to work on them again). But really the only reason you need a local git server is if you want to play with CI and don't like the idea of updating a public source repository with multiple changes a day.

On the container front, if you build images you should use the standard docker registry:2 image to provide a local insecure registry. This allows you to build container images from known 'tested' base images stored locally. Using images straight from dockerhub means you cannot guarantee the build done today matches the build you did yesterday, as base images on dockerhub are updated frequently; you need a local static base image.

Now, with everything available in containers these days it should all be simple. And with a bit of simple customisation, it is.

This post is about how to simply set up a CI pipeline in a home lab on a single VM. It will cover the basic steps for

  1. Installing docker
  2. Setting up a local insecure container registry
  3. Installing and configuring the gitlab-ce container
  4. Installing and configuring the Jenkins container

Update 2021/07/17: one warning, using containers ties you to that version; for example the gitlab-ce container jumped a major version and I cannot upgrade without losing all my work. So there can be a lot of effort involved if you update a container image later.

You do not have to install them all on a single VM. The benefit of doing so is simply that, just as containers can be moved about at will, if your entire pipeline environment is on a single VM you can easily move that about too… if you have enough memory. A single-VM install should work well with 7Gb of memory and 2 vcpus (allow 8Gb of memory for headroom); if you don't have that amount of memory available on a single machine, by all means run the containers anywhere, the steps are pretty much the same. This post covers installing on a single VM, as I do move it about :-).

As the entire environment can be set up in containers on a single VM, there is minimal effort needed to get a full CI pipeline for a home lab. However, it should be noted that additional VMs (or additional packages added to the docker host, allowing it to be more than just a docker build host) may be required for some workloads.

For example, if you wanted a pipeline to build a container using docker you could use the VM running jenkins as a build node; but if you wanted a pipeline to compile C code you would either need to install all the compilers on the VM, or simply add a new Jenkins node on another machine that has all those tools and configure your jenkins pipeline to use that node for that step.

This post will not cover the details of creating complex pipelines, although a few examples will be provided.

A few hostname references you will see a lot, and will need to change in any example command shown, are

  • docker-local – my dns entry for the host server running the docker registry container; change it to the servername or ip-address of your VM
  • gitlab-local – my dns entry used when referring to the gitlab-ce container; it must be a different name to the one used to refer to the server itself (discussed later) [for this post, the ip of the VM]
  • gitlab-server – my dns entry for the VM host itself [for this post, the ip of the VM]

For the purposes of this post, as I am discussing installing on one VM, the above hostname references all point to the same ip address; but they provide different functions and should have their own dns entries in case you want to split them onto separate hosts later.

1. Installing docker-ce

Installing docker onto each different type of OS is fully covered on the docker website itself. You can find the instructions with a simple google search on "install docker-ce" plus your OS name, for example "install docker-ce centos"; the main page with all supported OSs listed is at https://docs.docker.com/engine/install/.

As you could be running any OS I won’t cover the install here, other than to point out additional steps that will be needed.

One note for Fedora users: the old problem with cgroups v2 being enabled by default is no longer an issue; docker-ce will now install and run on Fedora without any OS settings needing to be changed.

On the assumption you will be using your CI pipeline to build container images, you will not need docker or podman installed on any development desktops, just git to push your development changes into your pipeline. However, if you are building containers on a desktop for testing, note that you would need to uninstall podman (if installed) and install docker on all your development machines; there is no easy way of allowing podman to use an insecure registry.

After installing docker-ce into your pipeline VM…

  • ensure there is a 'docker' group created in /etc/group; if not, create one
  • add the userids that will need to use docker to the docker group, to avoid the need to use root to do anything with docker (I can never remember the commands so I just 'vi /etc/group' and add them, bad I know; a sketch using the proper groupadd/usermod commands is shown after this list). The group entry will look something like "docker:x:979:root,mark,anotheruser,anotheruser"
  • to allow any desktop/server you have just installed docker on to use an insecure registry, 'vi /etc/docker/daemon.json' (if it does not exist, create it) and add entries such as the below example; note both settings live in the one JSON object
    {
      "insecure-registries" : [ "docker-local:5000", "localhost:5000" ],
      "dns": ["192.168.1.179", "192.168.1.1", "8.8.8.8"]
    }
    

    If you are not using a local DNS server for your home lab that can resolve the servername, you should use the ip-address in the insecure-registries entry, such as "192.168.1.170:5000", rather than a servername such as docker-local. The :5000 is the port your docker container registry will listen on. If you are using a local DNS server, then also ensure you add the dns entry with your local dns server and router first in the list, because if it is not set here docker will only use 8.8.8.8 and will never resolve any local server names.
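
For reference, a minimal sketch of those steps on a systemd-based host is below; it assumes your userid is 'mark', so substitute your own, and the groupadd/usermod commands are the "proper" way of doing the group edit mentioned above.

# create the docker group if it does not already exist
sudo groupadd -f docker
# add your userid to the docker group (log out and back in afterwards
# for the new group membership to take effect)
sudo usermod -aG docker mark
# after editing /etc/docker/daemon.json restart docker so it picks up
# the insecure-registries and dns settings
sudo systemctl restart docker
# verify the settings were loaded
docker info | grep -A 3 "Insecure Registries"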

The reason this post covers installing docker and omits any mention of podman is simply because the configuration file mentioned above in /etc/docker is of course for docker only; podman will not use an insecure registry.

One additional note for docker. By default, deleting images using 'docker image rm xxxxx' on a machine running docker is not always going to return the space you expect. You should occasionally use the command "docker system prune -a", which will delete everything not in use on the client (anything not currently being used by a container) at the time you issue the command… and I repeat, everything not being used by a container, so make sure images you may want to use again are pushed to your local registry if they are not in use by a container when you run this command.

One additional tool I would recommend you install if you use docker is "docker-squash". If you build your own containers this is invaluable, as it can squash dozens of layers of an image down to just a few and easily halve the size of a container image; I would generally run it against images downloaded from dockerhub as well, as it can decrease the size of most of those too. It can be installed with "pip install docker-squash" (or pip3/pip2 rather than pip if pip does not default to the desired python version you have installed). For those that develop with podman rather than docker, you will be upset to learn I am not aware of any equivalent tool for podman, which may be why many images on dockerhub are so needlessly large.
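
As an illustration, a typical docker-squash run looks like the below; 'yourimage' is just a placeholder for one of your own images, and you would use whatever tag naming suits you.

# install the tool (use pip3 instead of pip if needed)
pip install docker-squash
# squash the layers of an image and give the result a new tag
docker-squash -t docker-local:5000/yourimage:squashed yourimage:latest
docker image list        # compare the size of the old and new images
docker push docker-local:5000/yourimage:squashed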

2. Setting up a local insecure container registry

First, this is easy to do. Second, this is a long section, as it also briefly covers some maintenance of your local registry that you need to know about and that is not obvious, or even mentioned, in many posts about using a local registry:2 instance.

For a home lab all that is needed is an insecure registry; this avoids the need to manage certificates for the registry. It gives every development server (that you manage) on your local network full access to the registry using docker, provided you set the insecure-registries section on each docker client machine as mentioned in the previous section.

(If using 'podman', tough luck: you need to look at the documentation on setting up a secure registry with valid letsencrypt certs. Podman may be useful for stand-alone development on an internet-facing machine using public repositories, but is a pain to use with an insecure registry. Maybe set up a local CA and use stunnel?, or using an apache proxy as discussed later in this section, if you have valid certs for your webserver, may work.)

While the term "insecure registry" is used, remember that each docker client machine using the registry must be explicitly configured with it in insecure-registries; any client not set up to do so will get an error like 'http response where https expected' whenever it tries to use the registry, so access is still limited to clients configured for it.

The default port for an insecure registry is 5000, so on the server that will be running your local container registry open port 5000 in the firewall. Then simply run the command below, which will pull down the registry container and run it; and that's it, you have a working local container registry. Read a few details below before running the command, however!

# A non-secure registry for local use
# The container will use /home/docker/data for storage rather than the default
# of an internal docker persistent volume.
docker run -d \
  -p 5000:5000 \
  --restart=always \
  --name registry \
  -v /home/docker/data:/var/lib/registry \
  -e REGISTRY_STORAGE_DELETE_ENABLED="true" \
  registry:2

In the registry start command above you will notice I included a parameter permitting images to be deleted; if you do not include that parameter you will never be able to delete images, and the space used will eventually become extremely large. Even with the parameter set, deleting images is a multi-step process: you can delete as many images as you want and find zero space is reclaimed, as they are just marked as deleted until you run the registry garbage collector to actually reclaim the space.

On space usage: if you exec into the registry container and use 'df -k' to see filesystem sizes, you will see it has all the space available on the host filesystem. That may be something you do not want. You can limit the maximum space used by containers by modifying the docker config in /etc/sysconfig/docker-storage to add the line "DOCKER_STORAGE_OPTIONS= --storage-opt dm.basesize=20G"; however, note this affects all containers you start. It also needs to be done before you first start docker, or you will have to delete and recreate all your existing containers.

On space again, well data really. Remember that containers do not persist storage across container restarts, which is why the example run command above uses a bind mount to a local filesystem directory. The storage options parameter mentioned above is completely ignored for that bind-mounted filesystem, as it simply gets the host's free space, so why do it you ask? The reason is that by using a bind mount to the host's filesystem the data remains when the container is stopped, and on a container restart using the same bind mount all the data is available to the container again. You would not want to push lots of images to your local registry and have them disappear every time you stopped the container. It does tie the container to the host holding the filesystem, so it is suitable for standalone docker rather than a docker swarm. It also allows the data to be backed up by your normal backup procedures.

Once you have decided what customisations you want to make, you can use the run command above to start the registry container and provide your local container registry.

To use the registry, simply tag the container images with your local registry ip-address or servername plus port 5000. For example, my servername is docker-local, so in order to store an image in my local registry you would tag it as in the example below for the 'alpine' image.

docker pull alpine:latest                       # retrieve an image (or use one you built yourself)
docker image list                               # you need the image id rather than the name
docker tag <imageidstring> docker-local:5000/alpine:latest    # tag to your local repository
docker push docker-local:5000/alpine:latest     # push to your local repository
docker image rm alpine:latest                   # untag the dockerhub entry

In a docker run command you would then use the image name docker-local:5000/alpine:latest to use the local copy instead of the dockerhub copy. Why would you want to do this? The answer is that you can guarantee each container you run uses a 'known' copy/version of the container image that is not going to be accidentally refreshed by a newer copy from dockerhub; so you can be sure each rebuild of a container in your CI pipeline is from a known base (and you should still refresh occasionally to get any security fixes… and retest everything in your pipeline again before deploying anything :-)). Another benefit of course is that re-pulling images as needed from your local repository happens across your local network, so you don't chew up internet bandwidth you may be paying for.
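
For example, to run a shell in a container pulled from the local registry copy rather than from dockerhub:

docker run --rm -ti docker-local:5000/alpine:latest /bin/sh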

Should you have wisely used the parameter to allow images to be deleted from the registry, in order to actually delete images you need to use the API. As that can get complicated, I would recommend using a tool such as "git clone https://github.com/byrnedo/docker-reg-tool.git", which can then manage the registry using the command "INSECURE_REGISTRY=true ./docker_reg_tool http://docker-local:5000 [actions]"; it is the easiest way of listing and deleting images.

While it is fairly simple to view the contents of the local registry using just a web browser, for example

see what's in the catalog
    http://docker-local:5000/v2/_catalog
    http://docker-local:5000/v2/_catalog?n=200       # default is a maximum of 100 entries displayed, change it with ?n=nn
assuming one of the entries returned was 'ircserver', see all available tags for it with
    http://docker-local:5000/v2/ircserver/tags/list

however, if you have a lot of entries and are processing the list in a script using something like curl, be aware the response may contain a link to a next page of results that your script would have to handle as well.

Deleting gets tricky, and using a tool such as the one I referred to above makes it easier than trying to do it by creating your own custom scripts to post http delete requests. To do it yourself you would need to inspect the image you wish to delete to obtain the digest (ie: docker inspect my.docker.registry.com:5000/ubuntu:latest would somewhere in the response return something like "RepoDigests": ["docker-local:5000/ubuntu@sha256:74a1b5f5c5d771cf3570fa0f050e0c827538b7fe1873bc88b85d56818df3f2bc"]) and then issue the delete using that information (ie: curl -vk -X DELETE https://docker-local:5000/v2/mytestdockerrepo/manifests/sha256:66675d81b9bd5eafc105832b78abb91cab975bbcf028ca4bce4afe73f66914ee)… basically it is much easier to use an existing tool like the one mentioned earlier.
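
If you do want to script it yourself, the registry v2 API flow is roughly as sketched below ('ircserver' is just the example repository name used earlier, and remember the registry here is plain http). The important detail is asking for the v2 manifest type so the digest returned is the one the DELETE expects.

#!/bin/bash
# delete one tag from the local registry; space is only reclaimed when
# the garbage collector is run inside the registry container later.
REGISTRY="http://docker-local:5000"
REPO="ircserver"
TAG="latest"
# the Docker-Content-Digest response header holds the digest to delete
DIGEST=$(curl -sI \
   -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
   "${REGISTRY}/v2/${REPO}/manifests/${TAG}" \
   | awk 'tolower($1)=="docker-content-digest:" {print $2}' | tr -d '\r')
echo "Deleting ${REPO}:${TAG} digest ${DIGEST}"
curl -s -X DELETE "${REGISTRY}/v2/${REPO}/manifests/${DIGEST}"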

As I mentioned earlier, issuing delete requests for images in your registry will not actually reclaim any space. In order to reclaim the space used by deleted images you need to exec into the registry container and run the garbage collector

docker exec -ti registry /bin/bash    # replace 'registry' with whatever you named your registry container
bin/registry garbage-collect [--dry-run] /etc/docker/registry/config.yml
exit

There may on occasion be a need for you to allow external users to "pull" containers from your local registry; as they will not be configured to treat your registry as insecure, they will obviously get the 'http response where https expected' error if they try to use it directly. This is easily fixed if you also run a website which has valid https certificates, by simply proxying from your website's https site to the local container registry server; the external clients then have an https connection and can pull from your registry.

Please note that I only cover "pull" by external clients here; personally I don't see any reason why external clients should be pushing to your local registry, so the example here denies the 'push' operations. The example below should be added to a new conf file in your website's /etc/httpd/conf.d/ directory and will allow external clients to pull from your local repository using the url you choose. I named my file /etc/httpd/conf.d/docker-registry.conf.

You will notice I also define a 'push' user; you really should not do that, but an example should be complete :-). If you are collaborating on products with users outside your local network you should be using a private project on one of the hosted git environments out there, as they will (should) provide better security than anything covered in this post.

With the configuration example below, users can download from your private registry with commands such as "docker pull yourdomain.com:5043/imagename". There will be no complaints about 'http response where https expected', as your webserver is providing the https transport layer. Assuming you opened port 5043 on your server firewall and in your router port forwarding list, of course. (Tip: you can use your router to forward requests from port 5000 to 5043, or any internal port, so treat 5043 as an example, not a recommendation.)

# To create a push user...
# htpasswd -Bbn mark password >> /var/www/html/data/auth/httpd.htpasswd
# echo "pusher: mark" >> /var/www/html/data/auth/httpd.groups"
LoadModule headers_module modules/mod_headers.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule unixd_module modules/mod_unixd.so

Listen 5043
<VirtualHost *:5043>
   ServerName your.websitename.org
   SSLEngine on
   SSLCertificateFile /etc/letsencrypt/live/your.websitename.org/fullchain.pem
   SSLCertificateKeyFile /etc/letsencrypt/live/your.websitename.org/privkey.pem
   SSLCertificateChainFile /etc/letsencrypt/live/your.websitename.org/fullchain.pem
   SSLCompression off
   SSLProtocol all -SSLv2 -SSLv3 -TLSv1
   SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
   SSLHonorCipherOrder on
   Header always set "Docker-Distribution-Api-Version" "registry/2.0"
   Header onsuccess set "Docker-Distribution-Api-Version" "registry/2.0"
   RequestHeader set X-Forwarded-Proto "https"
   ProxyRequests off
   ProxyPreserveHost on
   # no proxy for /error/ (Apache HTTPd errors messages)
   ProxyPass /error/ !
   ProxyPass /v2 http://docker-local:5000/v2
   ProxyPassReverse /v2 http://docker-local:5000/v2
   <Location /v2>
      Order deny,allow
      Allow from all
      # match realm to the 'basic-realm' used by default in the registry container
      AuthName "basic-realm"
      AuthType basic
      AuthUserFile "/var/www/html/data/auth/httpd.htpasswd"
      AuthGroupFile "/var/www/html/data/auth/httpd.groups"
      # Read access to any users
      <Limit GET HEAD>
         Require all granted
      </Limit>
      # Write access to docker-deployer only
      <Limit POST PUT DELETE PATCH>
         Require group pusher
      </Limit>
   </Location>
</VirtualHost>
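
With that in place (and the htpasswd/groups files created as per the comments at the top of the example), external clients would use it along these lines; the image name is just an example.

# anonymous pulls work through the https proxy, no insecure-registry
# configuration is needed on the external client
docker pull yourdomain.com:5043/ircserver:latest

# pushes are only allowed for members of the 'pusher' group, so a login
# against the htpasswd file is required first
docker login yourdomain.com:5043
docker push yourdomain.com:5043/ircserver:latest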

3. Installing and configuring the gitlab-ce container

This is also extremely simple, as it all comes bundled in a container. The main thing to note is that because everything is bundled into the one container it is cpu and memory hungry; allow 3Gb of memory for this container alone.

It is also important that your local DNS (or /etc/hosts file) contains a separate hostname to be used when accessing the gitlab-ce container, as you do not want to use the server's hostname. Why this is important is discussed a little further on, when configuring ssh to treat gitlab-local references differently to normal ssh sessions. Along the same lines as my use of docker-local in the examples, I will call the server gitlab-server and use gitlab-local when referring to the gitlab-ce instance, so I would have an /etc/hosts entry similar to the below.

192.168.1.nnn gitlab-server gitlab-local

The container image can also be put through 'docker-squash' to try to decrease the image size, so ideally 'docker pull' the image and squash it before use; you can tag it as your own local registry copy and push that to your local registry so you are always working from a 'known copy'. Having said that, gitlab does release a lot of security fixes, so you will want to periodically pull down the latest again.

Like the registry container, I would recommend (it is really a requirement) that a bind volume mount is used for the data so it survives a container restart; you don't want to lose all your work when the container is restarted.

Yet again I recommend downloading the image, running docker-squash on it, and saving it in your local container registry to ensure you are using a known working copy of the container image, as below. Plus the dockerhub container image is huge!

docker pull gitlab/gitlab-ce                                   # pull the image down
docker image list                                              # get the imageid of the gitlab-ce container
docker-squash -t docker-local:5000/gitlab-ce:latest imageid    # use the imageid for gitlab-ce from the list command
docker push docker-local:5000/gitlab-ce:latest                 # save so you have a fixed/stable base image
docker image rm gitlab/gitlab-ce                               # remove the original image

The docker-squash (as of the image on dockerhub on 2021/Feb/13) only shrinks it from 2.15Gb to 2.1Gb, so that step is not necessarily worth doing… but you should still store the image in your local registry and use that copy, as you do not want to download 2Gb+ every time the dockerhub image gets a 1-byte change in a config file; of course still download a new image when security patches are implemented, but do not keep checking for updates to the latest.

You can then run the container with the command below. If you did not move the image to your local repository, or did not bother to manually pull it down at all, just replace the last line with 'gitlab/gitlab-ce' instead of 'docker-local:5000/gitlab-ce:latest' to use the native dockerhub image; but do change the hostname to whatever you are using.

export GITLAB_HOME="/srv"        # REQUIRED ON LINUX SYSTEMS
# The :Z on volume mounts is for selinux systems to ensure docker can create the mounted volumes.
docker run --detach --rm \
   --hostname gitlab-local.mydomain.org \
   --publish 443:443 --publish 80:80 --publish 5522:22 \
   --name gitlab1 \
   --volume $GITLAB_HOME/gitlab/config:/etc/gitlab:Z \
   --volume $GITLAB_HOME/gitlab/logs:/var/log/gitlab:Z \
   --volume $GITLAB_HOME/gitlab/data:/var/opt/gitlab:Z \
   docker-local:5000/gitlab-ce:latest

Logon to the gitlab-ce container with your web browser (use http on port 80, not https, as you have not set up any certificates; in a home lab you do not really need to publish port 443 in the command above), set yourself up as the admin id, add ssh keys for yourself, and create your first project. When creating each project it shows the git commands needed to push any existing project you have to it, or to create a new one from scratch, so it is simple to use.
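
For the ssh keys, something like the below on your desktop works; the key filename matches the one used in the ssh config example further down, and the comment is just a placeholder. Paste the contents of the .pub file into your user profile's SSH Keys page in the gitlab web interface.

# create a dedicated key pair for the local gitlab instance
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_local_gitlab -C "mark@desktop"
# the public half is what gets pasted into the gitlab web interface
cat ~/.ssh/id_ed25519_local_gitlab.pub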

There is one additional step you should do on all the desktop Linux machines you use for development. Whether you installed this into a VM or onto a physical machine, there is the issue of ssh port 22 (which git also uses by default) already being used by sshd on the server, which is why we mapped port 5522 to port 22 in the example run command above. On top of that, you probably kept the ssh keys created for use with the gitlab-ce container in a separate file. The git command provides no option to change the port number, and you also do not want to have to enter the keyfile name every time you do a git push.

The solution is to use a custom ssh config entry. Assuming you use "gitlab-local.mydomain.org" as the name assigned to the ip-address of the host running your container, you want git to connect to port 5522 on that address, and you use the private key file id_ed25519_local_gitlab for the gitlab server, simply create a file ~/.ssh/config and add the entries below to specify the connection rules for that server (the final "Host *" entry just means all hosts that do not match prior rules use the defaults in ssh_config and behave as normal).

Host gitlab-local.mydomain.org
  IdentityFile ~/.ssh/id_ed25519_local_gitlab
  Port 5522
Host *

With that entry set, when you set the remote/origin for your new project in git to gitlab-local.mydomain.org:YourName/yourproject.git you can simply 'git push' as normal without needing to remember you are using non-defaults for that hostname… although you should make a note of it somewhere, or when you get your next development server you will be wondering why git commands using git@gitlab-local.mydomain.org are being rejected by the server instead of being redirected to the container :-).
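
For example, pointing an existing project at the local server (the project path is whatever gitlab showed you when you created the project):

# the ~/.ssh/config entry supplies the port and key file automatically
git remote add origin git@gitlab-local.mydomain.org:YourName/yourproject.git
git push -u origin master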

That entry in ~/.ssh/config of course only applies at the user level. You could also modify /etc/ssh/ssh_config to apply the rule to all users, but as that file is likely to be overwritten by the next system update it is not recommended.

You will also notice I used the default port 80 for the gitlab web interface. So on that note, remember to open port 5522 (for git push/pull) and port 80 (for the gitlab web interface) on the host VM/server so you can reach the container application :-). Unless you want to create tls/https certs for the gitlab-ce instance you do not need to publish port 443, but I included it in the example command above for completeness.

4. Installing and configuring the Jenkins container

This is also quite simple; however, I would recommend you customise the userid used.

By default the jenkins container will create and use a docker volume for its storage, allowing data to be retained across container restarts. If you are happy with that and are comfortable you know how to clone/backup/restore docker volumes the customisation step is entirely optional.

However, as you may have guessed by now, I like bind volume mounts to keep the data on the host filesystem. As the jenkins container data already survives restarts on a docker volume, the only real advantage for me in using a bind volume mount in this case is that the data can be backed up by my regular bacula backups and I don't need to worry about the complexity of backing up a docker volume.

The issue with a bind mount in this case is that the jenkins container has the jenkins user defined as uid 1000, which is typically the first user created on a linux install, so your VM host will probably already have a user with that uid; and if you define multiple nodes it could be a different userid owning that uid on each server.

Your options are either to chown the filesystem you have chosen as the bind volume mount to that existing user on the host, plus chown the agent filesystems on each node to whatever userid you wish to run a jenkins agent under; or, what I prefer to do, create a new 'jenkins' user on the host to own the filesystem, alter the container image to use the uid of that jenkins user within the container, and also create a jenkins user with the same uid on all servers that will provide node services.

The latter option is the better one, as it provides a common userid/uid across all jenkins nodes; and on nodes used to build docker containers, adding the user 'jenkins' to the docker group documents that it is jenkins using docker far better than adding whatever userid you happened to choose to run the agent under.

It should be noted that this step of altering the uid in the container image only affects the container and the host volume mount. The agent on a node can be started under any userid… it makes sense however to start it under a ‘jenkins’ uid on each node.

To update the container image, create a Dockerfile as below, replacing 6004 with the uid you assigned to the 'jenkins' user on the host machine; change it in both the Dockerfile and the following build command, as I like to keep track of major divergences from the upstream image in the tag where possible.

# rebuild the official jenkins lts image with the jenkins user on a custom uid
FROM jenkins/jenkins:lts
USER root
# remove the default jenkins user (uid 1000) and recreate it with uid 6004
RUN userdel jenkins
RUN useradd -u 6004 -d /var/jenkins_home -M --system jenkins
# make sure the jenkins home inside the image is owned by the new uid
RUN chown -R jenkins:jenkins /var/jenkins_home
# switch back to the (recreated) jenkins user so the container does not run as root
USER jenkins

Then in the same directory as the Dockerfile run the command "docker build -t localhost/jenkins:lts-uid6004 ."

If you decided to use a local container registry, you should then get the imageid using 'docker image list', tag the image with 'docker tag imageid docker-local:5000/jenkins:lts-uid6004', push that to your local container registry as a backup (and use it instead of the localhost copy), and untag the localhost one, as shown below.
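
Those steps would look something like the below (use the imageid shown by the list command):

docker image list                                 # find the imageid of localhost/jenkins:lts-uid6004
docker tag <imageid> docker-local:5000/jenkins:lts-uid6004
docker push docker-local:5000/jenkins:lts-uid6004
docker image rm localhost/jenkins:lts-uid6004     # untag the localhost copy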

When starting the container, note the DNS setting flags used; you need them, but update them for your local DNS servers. The jenkins container needs to be able to resolve the host names used for your docker registry, your gitlab-ce source repository host, and any servers you will be adding as nodes. Remember the container does not have access to the /etc/hosts file on its host server, and a docker container by default will only use 8.8.8.8 (google dns) (although I suppose you could map /etc/hosts into the container somehow), so your life will be much easier with a local DNS server. If you do not already have one, just pick one of your servers that is not doing much, make sure it has an up-to-date /etc/hosts file, and start dnsmasq on it to provide one, as in the sketch below.
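
If you do need to stand up a quick DNS server, a minimal dnsmasq setup on a Fedora/CentOS style host can be as little as the below (dnsmasq answers queries from that host's /etc/hosts file by default); package and firewall commands will differ on other distributions.

# install and start dnsmasq; by default it serves out the entries in
# this host's /etc/hosts file and forwards everything else upstream
sudo dnf install -y dnsmasq
sudo systemctl enable --now dnsmasq
# allow DNS queries from the rest of the home lab
sudo firewall-cmd --add-service=dns --permanent
sudo firewall-cmd --reload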

You should also note that as well as port 8080 being used for the web interface port 50000 is the default connection port for any agents on agent nodes. So ensure you open port 8080 and 50000 in the firewall of the VM running the jenkins container.

Now you are ready to run the container, with the command below.

if [ ! -d /var/jenkins_home ];
then
   mkdir /var/jenkins_home
   chown jenkins:jenkins /var/jenkins_home
fi
docker run -d \
  --dns 192.168.1.179 --dns 192.168.1.1 \
  -p 8080:8080 \
  -p 50000:50000 \
  -v /var/jenkins_home:/var/jenkins_home \
  --name jenkins1 \
  docker-local:5000/jenkins:lts-uid6004

Note that you could also include the option --env JENKINS_OPTS="--prefix=/jenkins" to make the URL to access jenkins http://xxx.xxx.xxx.xxx/jenkins/ rather than the default of http://xxx.xxx.xxx.xxx/, which is worth a mention here because if you plan to have your jenkins instance accessible behind a reverse proxy you will need a path to proxy on.

The first time you start the container, use the 'docker logs <container-name>' command (ie: 'docker logs jenkins1') to view the log messages; there will be an initial setup password logged that you will need in order to logon to jenkins for the first time.
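
The same password is also written to a file under the jenkins home directory, so with the bind mount used above you can read it from the host as well; either method works.

# view the startup log to find the generated admin password
docker logs jenkins1
# it is also stored under the jenkins home directory, which thanks to
# the bind mount can be read directly from the host
sudo cat /var/jenkins_home/secrets/initialAdminPassword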

Note that logging on the first time takes you via a plugins install screen, where you get to install lots of recommended plugins before it lets you through to the screen where you can create your first admin user.

Jenkins, and most of the documentation found for Jenkins (even youtube training examples), provides lots of information on building java and nodejs pipelines using maven/gradle/ant etc., so I assume those tools are already provided in the jenkins container image, which is the 'master' node. I, however, am more interested in pipelines for building containers and compiling and packaging C programs, for which it is hard to find documentation, so I have included a few examples at the end of this post.

You will need to create nodes to do any real work; for example, to pipeline docker container builds you will need a node with docker installed. This first node example uses the host VM itself as a node: because you are running jenkins in a container under docker, that server can immediately provide a docker build node :-).

In the 'manage jenkins' page there is a node management page; use that to define a new node, which is the host running your container under docker. In the "labels" field give it a label indicating it can provide docker, such as "docker-service". While you can target a pipeline at a specific node/server, it is better to target the pipeline at a label, as you may add additional nodes later that can also provide docker. Select that you want to start the agent with a script.

At this point, displaying the node gets you to a page that has a button to start the agent, and the java command that will be used is shown. In theory the page should be visited, and the button clicked, from a browser on the server that is to run the node agent; guess what, the button won't work on linux, as the browser will instead want to download a jnlp file it doesn't know what to do with. Personally I think this is a good thing, do not start it from the browser!

You will see on that page that the 'agent' text in the message is a link to an agent.jar file, so on your node server simply wget that url to pull the agent.jar file onto the node server.

At this point I would (having, as recommended, created a 'jenkins' user on every node) rename the agent.jar to jenkins_agent.jar, move it to /home/jenkins/bin, and create a script file containing the run command in the same directory. And always manually start it as the jenkins user… I say manually, rather than from a systemd or init.d script, because if the jenkins container is not running the agent will hammer your network (and log files) with connection retry attempts; start it only when you intend to use it and stop it when the jenkins container is stopped.
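
As a sketch only, the wrapper script I keep in /home/jenkins/bin looks something like the below; the node name, jnlp url and secret here are placeholders, the exact values to use are shown on the node's page in the jenkins web interface and will differ for every node you define.

#!/bin/bash
# /home/jenkins/bin/start_agent.sh - start the jenkins agent on this node.
# Copy the -jnlpUrl and -secret values from the node's page in jenkins;
# 'docker-node1' is a placeholder node name.
cd /home/jenkins
java -jar /home/jenkins/bin/jenkins_agent.jar \
   -jnlpUrl http://gitlab-server:8080/computer/docker-node1/jenkins-agent.jnlp \
   -secret replace-with-the-secret-from-the-node-page \
   -workDir /home/jenkins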

This is also why starting it from the page in the web browser with a button click is a bad idea: it would run under the userid of whoever was running the browser session. When starting it manually, never start the agent under the root user, or a malicious Jenkinsfile could contain a command like 'sh "/bin/rm /*"' which would work; limit the access the user starting the agent has.

Before the above step, it is worth pointing out that if you are setting up the docker node on the server running the jenkins container, the jenkins user there was created with a home directory of the bind volume mount, so you will need to create a /home/jenkins directory on this server; on all the other nodes you would simply create the user with a home directory of /home/jenkins.

As this node will provide docker to pipelines remember to add the user jenkins to the docker group on the host server so the jenkins user can actually use docker.

You can then 'su' to the jenkins user and run the command to start the agent; it will then show as available in the jenkins node list.

One additional note: that works immediately as you are effectively on localhost. For additional nodes you will need to open firewall port 50000 on the host running the jenkins container, to allow agents from other machines in your network to connect to your Jenkins container when they are started.

At this point you may be tempted to add all the software you could ever need to your VM running these CI containers and use only this one additional node to do everything. I would say don’t, you are probably just duplicating what you already have by doing so.

For example, you may have a linux desktop where you are already doing a lot of GNU C/C++ development work. Rather than installing all the compilers onto the CI host, or into a dedicated VM for a compile environment (and spending ages tracking down missing dependencies you installed on your development desktop and forgot about), just perform the extremely trivial task of making your desktop a jenkins node, as you already know it has all the tools needed.

And you will need other nodes for other functions as time goes by. For example, I only build RPM packages at present, but with CentOS no longer stable I have fired up a Debian server and am moving to building DEB packages instead; like many people, I do not plan on remaining on any RHEL stream until a viable non-vendor-owned alternative to CentOS appears.

It is also worth mentioning source repositories here. We have installed gitlab-ce in a container, and you will probably do most of your CI pipeline work from there, as that was the entire point of this exercise. However, there is nothing to stop you creating pipelines in your new jenkins container from other repositories, such as the public github and gitlab sites, should you also have existing github or gitlab repositories (which you should leave there if you do, as offsite backups are always a good thing).

Personally I make the source repositories available for public download as I don’t need to edit any of the files via the gitlab-ce interface and cannot be bothered with creating credentials. If you use private repositories you will need to set credentials for each source repository used when creating the pipeline.

It is also worth mentioning pipelines and nodes, specifically what happens if you need to use multiple nodes in a pipeline. It is not always possible to perform all activities for a pipeline on a single node; for example, I can do all compiles and rpm creation on one node, but from that node have to scp the resulting rpm to another node, where the next pipeline step runs to move the rpm into a public repo and rebuild the repo index. To do so you must be aware that every 'step' in the pipeline will download the git project in its entirety, so even on the last node, where it does nothing but move a file and run a createrepo command, the entire git project is downloaded anyway even though it will never be used. So unless you do require multiple nodes in a build, do not select nodes at a step level but use a global node selection for the entire pipeline, so the project is only downloaded once. Using multiple nodes can be inefficient for another reason too: as you must move (scp) the stages of work between the nodes as needed, you really need the pipeline to refer to nodes by node name (hostname) rather than by label so you know exactly where to move things, whereas labels are normally preferred in a pipeline over node names.

Also on pipelines: you would normally use a Jenkinsfile to define a pipeline. If you create a project expecting one, and you add a source repository that does not have a Jenkinsfile in the project, no build will occur; worth remembering if you wonder why your pipelines are not building.

As promised, a few Jenkinsfile examples

First a note on comments: you may use C-style comments such as "/* multiline comments */", or for single-line comments use "//" at the start of each line being commented. Mentioned because comments are always important :-).

Also, it is important to note that these are examples, not working files (well, they work perfectly well, but you would need all the backend files in my source repositories for them to work for you; there is a lot more to a pipeline than just a Jenkinsfile, you need all the code as well :-)).

The below Jenkinsfile will build a container image; the image will be tagged with the build number. Only when you are happy would you tag an image as 'latest', or give it a production version number and make it available.

Note that it requires a node defined with the label 'docker-service'; in the setup steps above it was briefly discussed how to set up the host server running docker (the one providing the environment for the jenkins container) as a docker-service node, but you can add as many as you want, as long as one of the nodes is available.

pipeline {
   // Must run the build on a node that has docker on it
   agent { label 'docker-service' } 

   stages {
      // build the docker image
      stage("build") {
         steps {
            sh 'docker build -t localhost/ircserver:$BUILD_ID .'
         }
      }

      // squash the layers in the docker image
      stage("compress") {
         steps {
            echo "--- Squashing image ---"
            sh 'bash ./squash_image.sh localhost/ircserver:$BUILD_ID'
         }
      }

      // push the new image to the local repository
      stage("publish") {
         steps {
            // I would want to tag it as latest, and that is done by id not name
            echo "--- Would deploy all, but I won't, only new master branches ---"
            sh '''
               id=`docker image list | grep 'localhost/ircserver' | grep $BUILD_ID | awk {'print $3'}`
               if [ "${id}." != "." ];
               then
                  if [ "$BRANCH_NAME." == "master." ];
                  then
                     docker image tag ${id} docker-local:5000/ircserver:latest
                     docker push docker-local:5000/ircserver:latest
                     echo "Pushed to registry docker-local:5000 as ircserver:latest"
                     docker image rm docker-local:5000/ircserver:latest
                  else
                     docker image tag ${id} docker-local:5000/ircserver:$BRANCH_NAME
                     docker push docker-local:5000/ircserver:$BRANCH_NAME
                     docker image rm docker-local:5000/ircserver:$BRANCH_NAME
                  fi
                  # No longer needed on worker node
                  docker image rm localhost/ircserver:$BUILD_ID
               else
                  echo "No container image found to re-tag"
               fi
               # end of sh script
            '''
         }
      }
   } // end of stages

   post {
      // run if there was a failure, may be image layers left lying around
      failure {
         echo "*** Manully remove container and image workspaces ***"
      }
   }
}

The below example can be used to compile a C program. Ideally you would run it through a few “test” steps as well before packaging, but as they would be specific to your program you will need to add those yourself.

This example uses a global agent of none and uses stage-level selection of agents to run the stages across multiple nodes.

pipeline {
   // must run on a server with gcc and package building commands on
   // then must run on public facing repo server to update the repo listings
   agent none
   environment {
      APP_VERSION="0.01"
      REPO_SERVER="10.0.2.15"
      REPO_DIR="/var/tmp/testrepo"
   }
   stages {
      stage("build") {
         agent { label 'gcc-service' }
         steps {
            sh 'echo "I AM IN $PWD"'
            sh 'make testprog MK_BRANCH_NAME=$BRANCH_NAME'
            echo 'What happened ?, if it worked well on the way'
         }
      }
      stage("package") {
         agent { label 'gcc-service' }
         steps {
            echo "Need to place rpm packaging commands in here"
            sh '''
               env
               echo "Still in directory $PWD, filelist is"
               machinetype=`uname -m`
               touch projectname-$APP_VERSION.${machinetype}.rpm
               ls
            '''
         }
      }
      stage("pushtorepo") {
         agent { label 'gcc-service' }
         steps {
            echo "Need to place scp commands here to copy to repo server"
            sh '''
               bb=`whoami`
               machinetype=`uname -m`
               echo "would run: scp projectname-$APP_VERSION.${machinetype}.rpm ${bb}@$REPO_SERVER:$REPO_DIR/${machinetype}"
            '''
         }
      }

      stage("creatrepo") {
         agent { label 'yumrepo-service' }
         steps {
            echo "And why so messy, parts must run on different servers"
            echo "Would run: cd $REPO_DIR;creatrepo ."
         }
      }
   } // end stages
}
