Quick install of AWX (ansible-tower upstream) into Docker

AWX is the upstream open source project for ansible-tower. If you are interested in ansible-tower it is the obvious starting point, as ansible-tower is built from it. It is also free to use, not just free as in open source but in that you do not need a RedHat account to use it.

AWX now prefers to be installed into a Kubernetes environment using awx-operator; that makes it damn hard to get working in a home lab situation unless you want to spend a lot of time working out what the installation configuration files are doing and changing them to make them usable (ingress/egress/dns etc).

It is far simpler to get a full working environment using Docker, even if the Docker method has been deprecated. This post explains how to get it installed and working using Docker. There are customisations needed to make it useful for a home lab which are covered here, and as you step through them you will understand why I chose Docker for my home lab; it would be a lot more work to make it usable under Kubernetes (even under MiniKube).

These instructions are valid as of 16th October 2021.

This post also assumes, or at least hopes, you have been using ansible from the command line quite happily already so you understand the last step on SSH keys :-).

But first, why would you want to install AWX? The only real reason is to get an idea of how ansible-tower works; there is certainly no benefit for a small organisation or for your own home lab over the command line. It is actually a real step backward for a home lab, as you need a spare machine with at least 6GB RAM running either Docker or Kubernetes (even to get it to run a playbook with a simple shell ‘ls’ command). It also makes it a lot harder to do things, as by default it expects all resources (EE containers, playbooks etc.) to be provided across the internet and never locally (not even from the local network)… although with a lot of effort you can change that, and some of the steps are covered in this post.

Yes, it does have user access controls, organisations, groups etc. to limit who can do what using AWX or ansible-tower. It does not in any way stop anybody with half a clue from simply bypassing all that and using the command line.

However if you want to learn how to use ansible-tower without going anywhere near a RedHat subscription/licence, AWX is the place to start.
And if that is what you are interested in then read on.

1. Follow the install instructions from the AWX github site

First get it up and running; that’s simply a case of (almost) following the instructions for installing under Docker on the github site. The difference is do NOT use the “-b version” flag to select the latest version; for 19.4.0 at least that pulls down a git clone with no branch head and simply doesn’t work. So omit the “-b” flag and the instructions are simply…

git clone https://github.com/ansible/awx.git
cd awx
vi tools/docker-compose/inventory   # set pg_password,broadcast_websocket_secret,secret_key etc
dnf -y install ansible              # MUST be installed on the build host, used during the build
make docker-compose-build   # creates docker image quay.io/awx/awx_devel
make docker-compose COMPOSE_UP_OPTS=-d     # starts awx, postgresql and redis containers detached

Wait until everything is up and running and then do the next steps
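
One way to check it is all up before continuing is below; note the container names are from my install and may differ slightly between versions.

docker ps --format 'table {{.Names}}\t{{.Status}}'   # the awx, postgres and redis containers should all show as Up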

docker exec tools_awx_1 make clean-ui ui-devel            # clean and build the UI
docker exec -ti tools_awx_1 awx-manage createsuperuser    # create the initial admin user, default of awx is recommended
                                                          # note: I ran the above twice to create a personal superuser id also
docker exec tools_awx_1 awx-manage create_preload_data    # optional: install the demo data (actually, it may already have been installed)

At the end of the installation you have a running AWX environment, but it has some major limitations we will be fixing below.

At this point you can point a web browser at port 8043 on the Docker host and check that the superuser logon you created works, but there is more work to do before going any further.
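
If you prefer a quick check from the shell, the AWX API has an unauthenticated ping endpoint which should return a small JSON status document; replace dockerhost with your Docker host name (the -k is needed as my install used a self-signed certificate).

curl -k https://dockerhost:8043/api/v2/ping/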

2. Customisations needed

Do NOT use the “make” command to start it again from this point, or it will destroy any customisations you make to the yaml file by re-creating it.
The install creates persistent docker volumes for the data so after the install we just need a custom script to stop and start the environment rather than using the Makefile.

Simply create a script as below

cat << 'EOF' > control_script.sh
#!/bin/bash
# note the quoted 'EOF' above; without it the shell would expand $1 while creating the script
case "$1" in
	"start") cd awx    # the directory created by the git clone in step 1
		docker-compose -f tools/docker-compose/_sources/docker-compose.yml up -d --remove-orphans
		;;
	"stop") cd awx
		docker-compose -f tools/docker-compose/_sources/docker-compose.yml down
		;;
	*) echo "Use start or stop"
		;;
esac
exit 0
EOF
chmod 755 control_script.sh

Note we are still using the generated yaml (yml) file, but by using the script rather than the “make” command it will not be overwritten.

As installed you cannot use “manual” playbooks from the filesystem, only playbooks from a remote source repository (which has its own problems, discussed in the next section). You may also want to use your own in-house modules.

To use the manual playbooks you need a bind mount to the docker host filesystem, so “vi tools/docker-compose/_sources/docker-compose.yml“ and in the volumes section add an additional entry to the existing long list as below

     - "/opt/awx_projects:/var/lib/awx/projects"

Then “sudo mkdir /opt/awx_projects;sudo chmod 777 /opt/awx_projects”. Ok, you would probably chown to whatever userid is assigned to UID 1000 rather than set 777, but whatever works.
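
If you do go the chown route, something like the below is the tidier alternative; on my install UID 1000 was the awx user inside the container, but check what UID your container actually uses first.

sudo chown 1000:1000 /opt/awx_projects
sudo chmod 755 /opt/awx_projects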

After restarting the containers you can place manual playbooks in the /opt/awx_projects directory on the Docker host and AWX will be able to find and use them from the container’s /var/lib/awx/projects directory. But don’t restart the containers yet :-).

Also, under the projects directory you would normally have a directory per project to contain the playbooks for that project. If under each of those project directories you create a new directory named library, you can place any customised or user written modules you want to run in those playbooks there (ie: if you have projects/debian and projects/centos you need projects/debian/library and projects/centos/library; the library directory goes at the individual project level, not at the top level projects directory level). This is worth noting as it is the easiest way of implementing your own custom modules. A made up example layout is below.
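
Here two projects each carry one custom module; all names are made up, and AWX sees the whole tree under /var/lib/awx/projects via the bind mount added above.

/opt/awx_projects/debian/site.yml
/opt/awx_projects/debian/library/my_debian_module.py
/opt/awx_projects/centos/site.yml
/opt/awx_projects/centos/library/my_centos_module.py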

The second issue is host name resolution. Everything works fine if you only want playbooks from internet source repositories and only want to manage machines resolvable by public DNS servers; but you may want to use local docker and source management repositories, and more importantly you need to resolve the host names of machines on your internal network if you want AWX to manage them.

On the local docker image repository side, at this point AWX will not use insecure private registries, which is a bit of a pain. However, to resolve your own internal hostnames you just need to reconfigure Docker to use a DNS server that can resolve those hostnames. That is simple in Docker: “vi /etc/docker/daemon.json” and insert something like the below

{
  "insecure-registries" : [ "docker-local:5000" ],
  "dns" : [ "192.168.1.181" , "192.168.1.1" , "8.8.8.8" ]
}

The DNS list above contains one of my DNS servers (I use dnsmasq) that can resolve all my server hostnames, then my router, then google DNS to resolve all the external sites. Customise the list for your environment.

Now at this point you will want to restart docker (to pick up the daemon.json changes) and the containers (for the new volume bind mapping), so assuming you created the control_script.sh above.

./control_script.sh stop
systemctl stop docker
systemctl start docker
./control_script.sh start

When the containers are running again, “docker exec -ti tools_awx_1 /bin/sh” and try pinging one or more of your local servers to ensure name resolution is working.
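
For example (the hostname here is made up, use one of your own servers; if ping is not in the image, nslookup will do the same test):

docker exec -ti tools_awx_1 /bin/sh
ping -c 1 myserver.internal.lan    # should now resolve via the DNS servers added to daemon.json
exit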

3. SSH keys, credentials

If you have been using ansible normally from the command line before you will be aware that part of the implementation is to create SSH keys, the private key on the ansible ‘master’ server and the public key placed on every server to be managed. The good news is that you can just create a “credential” in AWX for the ansible user and paste that same private key into it.

When setting up command line ansible you had to ssh to every server to be managed from the ‘master’ and reply Y to the ssh prompt; AWX handles that prompt itself the first time it runs a job on a server it has not connected to before, which is about the only time saver over the command line. Having said that, you still need the public key on every server to be managed, so you need to set up a playbook to do that anyway (personally I use puppet for such things so the keys already existed on all my servers).
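
If you do not use puppet and still need to push the public key out, a minimal playbook run once from your existing command line setup will do it. This is just a sketch; the ‘ansible’ user name and the key path are assumptions, adjust for your environment.

- hosts: all
  become: true
  tasks:
    - name: install the ansible user public SSH key
      authorized_key:
        user: ansible            # assumed remote user, change to yours
        state: present
        key: "{{ lookup('file', '/home/ansible/.ssh/id_rsa.pub') }}"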

So, for hosts to manage, add the new host to an inventory (maybe create some custom inventories first). Then in AWX go to inventories->hosts->run command [the run command option is only available if you go into an inventory and list the hosts, not from the hosts page], select the ‘shell’ module and a simple command such as ‘ls’ to make sure it works (use the default awx-EE and the credentials you added the SSH key to). It should just work… although see below.

4. Performance compared to simple command line

The latest versions of AWX use “execution environments” to run commands. This involves spinning up a new container to run each command.

Great for isolation, bad for performance. A simple shell module “ls -la” command from the command line is done in seconds; from AWX it takes a few minutes to do the same simple command, as it needs to start an execution environment if one is not running (and even download it if there is a later version of the container image; tip: after the first pull of the container image change the pull setting to never).

5. Inventories, ansible hosts files

If you have been using ansible from the command line you probably have quite a few, or even one large ansible hosts file with lots of nice carefully worked out groupings.

This may be a time to split them out into separate inventories if you have not already done so; for example a hosts file for debian, one for centos, one for rocky, one for fedora etc., depending on how many entries you want to show up in a GUI display of an inventory.

Or you could just create one inventory and store the entire hosts file into that one inventory. By store of course I mean import.

Copy your current ansible hosts file from /etc/ansible (or as many different hosts files as you have to load into separate inventories) to the tools_awx_1 container. Create an inventory in AWX for each hosts file, and use awx-manage to import the file. The inventory must be pre-created in AWX before trying to import.

For example I created an inventory called internal_machines, did a quick “docker exec -ti tools_awx_1 /bin/sh”, and in the container did a cd to /var/tmp and “vi xxx”; I did a cat of my existing ansible hosts file with all its hosts and groupings, pasted it into the “vi xxx” file on the container session and saved it. Then simply…

awx-manage inventory_import --inventory-name internal_machines --source xxx
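
If you would rather not cut-and-paste through vi, “docker cp” achieves the same thing (using the same file name and inventory name as my example above):

docker cp /etc/ansible/hosts tools_awx_1:/var/tmp/xxx
docker exec -ti tools_awx_1 awx-manage inventory_import --inventory-name internal_machines --source /var/tmp/xxx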

My ansible hosts file was in INI format (lots of [] sections rather than json or yaml); the import command will add all the hosts and groups to the inventory (and the hosts to the host entries of course).
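
If you have not seen the INI format before, a minimal example is below (hostnames are made up); each [] group becomes an inventory group in AWX and the hosts are added to every group they appear in.

[webservers]
web1.internal.lan
web2.internal.lan

[dbservers]
db1.internal.lan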

It is important to note that you should have run a test job (such as the ‘ls -la’ mentioned above) against a manually added server first, in order to start at least one execution environment container; otherwise the import job will time out waiting for the container to start and fail to import anything.

6. Issues

The default execution environment awx-ee:latest seems to want to download for every job, even though I have the EE settings set to only pull if missing. A major issue with this is that often a job will fail, unable to pull the image from quay.io with ‘failed to read blob’; repeatedly retrying will eventually work. The real major issue is not being able to use a local insecure registry to store a local copy (and even create custom EE images) for local use, which would immediately alleviate this. And EEs don’t seem to share images even when they are identical; for example currently jobs run on the control-plane EE using awx-ee:latest with no problems, but the AWX-EE EE using awx-ee:latest is trying to download it again.

Basically, unless it becomes really easy to decouple from the internet and use local container image registries, local git sources, local playbooks etc., it’s pretty useless unless you are happy with a 2GB download every time you want to run a simple “ls” command. Expect the same issues with Ansible-Tower, which wants to use RedHat repositories; you need to decouple, or at least make your local sources of everything the first place searched.

7. Irritations

It is easy to mess up groupings when trying to manually add them, and if a job runs and ends with “no matching hosts found” that job is marked as a successful job, which I personally think should be a failure.
Under both “schedules” and “management jobs” only the supplied maintenance jobs are available and there doesn’t seem to be an easy way to add user schedules; it therefore seems pointless to go to all this work when, for automation, the command line interface run via cron does a better job.

AWX and its downstream ansible-tower provide good audit trails of which user defined to the tower application changed what and ran what within the tower, but there is of course no auditing within the tower of who changed what git playbooks (although git will record that), and of course manual playbooks on the filesystem are by their nature unaudited. Sooo, auditing is not really a good reason to use it. Also it has a large overhead, needing a minimum of 2 CPUs and 4GB of memory (AWX does, ansible-tower needs more) just to run.

As for how to import custom modules, as far as AWX goes I’m still looking; they are easy to use from the command line. As mentioned earlier in the post you can create a ‘library’ directory for each project. You could also inject into the docker run command the ansible environment variable that specifies the module library search path and create another volume bind to that directory on your host; but really you would want your modules in the same git repository as your playbooks… although as this post has shown how to set up project playbooks on the local filesystem I suppose that is not really relevant to this post.
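
For anybody who wants to experiment with the environment variable route, this is roughly what you would add to the awx service in tools/docker-compose/_sources/docker-compose.yml. A sketch only, untested by me; /opt/awx_library is a made-up host path, and ANSIBLE_LIBRARY is the standard ansible setting for extra module search paths.

    environment:
      ANSIBLE_LIBRARY: /var/lib/awx/library
    volumes:
      - "/opt/awx_library:/var/lib/awx/library"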

Of course AWX/Tower allows creation of individual users/groups/projects/organisations… just like a well laid out filesystem structure and use of the unix users and groups files. However, reporting on such a layout seems a bit tricky in AWX, where on a filesystem you could just use “tree” to display the projects structure and contents.

It is far easier to run ansible from the command line using filesystem based playbooks (my filesystem playbooks are git managed on the host, so no advantage in using AWX or ansible-tower there), which has absolutely no overhead requirements other than whatever memory a terminal or ssh session takes. Plus there is no requirement for Docker or Kubernetes (no execution environment containers); it is just small, fast and simple, and if you want regular jobs they can be scheduled by cron.
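
For comparison, the cron equivalent of an AWX schedule is a one line crontab entry like the below (the playbook path and log file are made up, adjust to your layout):

0 2 * * * ansible-playbook -i /etc/ansible/hosts /etc/ansible/playbooks/nightly.yml >> /var/log/ansible_nightly.log 2>&1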

However, if I can one day get insecure registries working for EE images, and figure out how to get playbooks and custom modules deployed from my local gitlab server (rather than having to use the filesystem volume mount), it may be useful; and I might even update this post with how. By useful I mean as a learning experience; my home lab doesn’t have a spare 2GB of memory I can dedicate to actually using it when I find puppetserver so much more powerful.
