Setting up and using a local (insecure) Docker registry service container

The first thing to note about this post is that it sets up an insecure local registry using the standard registry container. This type of registry is ideal for local ‘internal network’ development use. It also works with standalone Docker; no Docker swarm is needed.

An important thing to note is that the registry server itself is not insecure: it expects TLS traffic by default. However, it will permit insecure traffic if the client requests it. To use insecure access to the local registry it is the clients of the registry server that are reconfigured to request insecure communication; the registry server will then permit it.

Configuring an insecure Docker registry allows anyone to ‘push’ images to your registry without authentication, so it must only be used for internal use; never internet facing.

You will notice the official documentation for installing a Docker registry container as an insecure installation states that not even basic authentication can be used with an insecure configuration. That may actually be incorrect, as the configuration document at https://docs.docker.com/registry/configuration/#htpasswd states basic authentication can be configured without TLS… although the user/password information will be passed in clear text as part of the HTTP header, so it seems it is only recommended not to use it.

Installing a local Docker registry

A local registry server can be installed on any server already running Docker or docker-ce. How to set up a local registry server is covered at https://docs.docker.com/registry/deploying/, however that document is a little fuzzy on configuring insecure traffic.

The actual document on configuring for insecure use is https://docs.docker.com/registry/insecure/, but it omits the rather important detail that the “insecure-registries” setting must be set on all the clients, not on the Docker server running the registry container. There is a lot of confusion about that, easily seen from all the forum questions where everyone assumes it is set on the server providing the registry container; it is set on all the clients. Also note the document does state that secure traffic is always tried first in all cases; the “insecure-registries” entry just allows fallback to insecure traffic, so changing your registry to be secure at a later time is trivial.

It is also important to be aware that by default the registry container stores its data in a volume inside the container; while the data will survive stopping and starting the container, it will be lost if you delete the container. If you intend to delete or reconfigure the container a lot you probably don’t want that default.

I implemented my local registry container to use a directory on the Docker host filesystem. The run command I used is below. The importance of the REGISTRY_STORAGE_DELETE_ENABLED environment variable is discussed later in this post under managing the registry; you would normally leave it disabled (the default).

docker run -d \
  -p 5000:5000 \
  --restart=always \
  --name registry \
  -v /home/docker/data:/var/lib/registry \
  -e REGISTRY_STORAGE_DELETE_ENABLED="true" \
  registry:2

Remember to open firewall port 5000 on your registry docker host, and if any client docker hosts have rules blocking outbound traffic ensure the port is opened on those also.
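On hosts using firewalld (an assumption; substitute your own firewall tooling) opening the port might look like this:

```shell
# Assumes firewalld with the default "public" zone; adjust to suit your host.
firewall-cmd --zone=public --add-port=5000/tcp --permanent
firewall-cmd --reload
```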

Configuring the Docker client servers for insecure traffic

The “insecure-registries” setting needs to be configured on the servers running Docker that are to be clients of this local registry. On those servers add to (or create if it does not exist) the file /etc/docker/daemon.json and restart the docker service on those clients when you have done so.

{
  "insecure-registries" : [ "hostname-or-ipaddr:5000" ]
}

Of course if you also wish to use the registry on the Docker server running the registry container, set it there also.
If you have a local DNS server you should use the hostname rather than an ip-address.
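The restart mentioned above can be done as below on a systemd host, and ‘docker info’ confirms the setting took effect:

```shell
# Restart the daemon so it re-reads /etc/docker/daemon.json
systemctl restart docker
# The configured insecure registries are listed near the end of the output
docker info | grep -A 2 'Insecure Registries'
```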

Pushing and Pulling using your local Docker registry

In the examples here I have the registry container running on a host with a DNS entry of docker-local.myexample.org

To push images to your local registry they must be tagged to refer to your local registry. For example, if you have an image named busybox:latest you would tag and push it as follows

docker tag busybox:latest docker-local.myexample.org:5000/busybox:latest
docker push docker-local.myexample.org:5000/busybox:latest

If you get an error along the lines of “https expected but got http”, check your client insecure-registries entries again. On the client the command ‘docker info’ will list the insecure registries configured near the end of the response.

It is also important to note that the tag is a pointer to what is already there; in the above example you could ‘docker image rm busybox:latest’ (which only removes the old pointer) and change your docker run command to run docker-local.myexample.org:5000/busybox:latest instead of busybox:latest, which would work perfectly well.

If you have a hostname:port/ in the image name, that hostname:port is the registry used; if omitted, the default registry at docker.io is used, which you obviously do not have permission to push to.

Once you have images pushed to your local registry you can pull them with the same syntax

docker pull docker-local.myexample.org:5000/busybox:latest

Managing your local registry

Querying your local registry

Useful commands such as ‘docker search’ only apply to the docker.io registry. You obviously need a way of managing your local registry and keeping track of what is in it.

The ‘v2’ interface of the Docker registry container provides a way of looking up what is in your local registry. This can be done with any web browser.

Remembering that we have no certificates and are using an insecure registry the following URLs are useful for determining what is in your registry. The examples use the same example hostname and image.

To see what is in the local registry

    http://docker-local.myexample.org:5000/v2/_catalog
    http://docker-local.myexample.org:5000/v2/_catalog?n=200

Note the second example above: the default response contains only the first 100 entries; that can be changed with the n value. If using the default and there are more than 100 entries, a link to the next 100 entries is provided in the response.

The above URL only displays the image names. You will also need to see what tagged versions are stored in the registry. Should you have an image named ‘busybox’ and want to see all tagged versions, the URL would be

http://docker-local.myexample.org:5000/v2/busybox/tags/list
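The same queries can be run from the command line with curl; a sketch assuming the example hostname from this post and that jq is installed for readable output:

```shell
REGISTRY="http://docker-local.myexample.org:5000"
# List repository names (the first 100 by default)
curl -s "${REGISTRY}/v2/_catalog" | jq -r '.repositories[]'
# List the tags stored for the busybox image
curl -s "${REGISTRY}/v2/busybox/tags/list" | jq -r '.tags[]'
```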

Of course there are scripts on github that make all that a lot easier and can be run from the command line where most Unix developers work. One is discussed below.

Deleting images from your local registry

The Docker registry ‘v2’ API provides a DELETE facility. As it is possible to corrupt images if you delete incorrect layers it is better to use some of the utilities users have made available on github for that purpose.
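For reference, a manual delete with curl is sketched below (using the example hostname from this post). The v2 API deletes by manifest digest, not tag; the digest is returned in the Docker-Content-Digest response header when the manifest is requested with the v2 manifest media type in the Accept header. As noted, the scripted utilities are the safer option.

```shell
REGISTRY="http://docker-local.myexample.org:5000"
REPO="busybox"; TAG="latest"
# Fetch the manifest digest for the tag (a HEAD request is sufficient)
DIGEST=$(curl -sI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "${REGISTRY}/v2/${REPO}/manifests/${TAG}" \
  | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')
# Delete the manifest by digest (requires REGISTRY_STORAGE_DELETE_ENABLED=true)
curl -X DELETE "${REGISTRY}/v2/${REPO}/manifests/${DIGEST}"
```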

I would suggest, and the examples below use, this utility…

cd ~
mkdir git-3rdparty
cd git-3rdparty
git clone https://github.com/byrnedo/docker-reg-tool.git
dnf -y install jq   # the script depends on jq for JSON parsing

The examples below obviously use different images than the busybox example above, to provide a few more entries to play with; but using the script is fairly simple, as seen below. Note that we use “INSECURE_REGISTRY=true” as we have set up an insecure registry; if using TLS there are parameters to provide certs and credentials, which are explained on the github page.

[root@gitlab ~]# cd ~/git-3rdparty/docker-reg-tool
[root@gitlab docker-reg-tool]# INSECURE_REGISTRY=true ./docker_reg_tool http://docker-local:5000 list
ircserver
mvs38j
[root@gitlab docker-reg-tool]# INSECURE_REGISTRY=true ./docker_reg_tool http://docker-local:5000 list ircserver
f32
f30
[root@gitlab docker-reg-tool]# INSECURE_REGISTRY=true ./docker_reg_tool http://docker-local:5000 delete ircserver f30
DIGEST: sha256:355a3c2bd111b42ea7c1320085c6472ae42bc2de7b13cc1adf54a815ee74fa45
Successfully deleted
[root@gitlab docker-reg-tool]# INSECURE_REGISTRY=true ./docker_reg_tool http://docker-local:5000 list ircserver
f32
[root@gitlab docker-reg-tool]#

However for the delete example above you will most likely get a response as below

[root@gitlab docker-reg-tool]# INSECURE_REGISTRY=true ./docker_reg_tool http://docker-local:5000 delete ircserver f30
DIGEST: sha256:355a3c2bd111b42ea7c1320085c6472ae42bc2de7b13cc1adf54a815ee74fa45
Failed to delete: 405

Refer back to my ‘docker run’ command above and the parameter I said I would explain later, specifically the environment variable ‘REGISTRY_STORAGE_DELETE_ENABLED=”true”‘. If that variable is not set it is not possible to delete entries from the registry… so for a local registry you should probably set it, unless you intend to keep an infinite number of images in your local registry.

Reclaiming space in the registry filesystem

Using the registry ‘v2’ API to delete an image/tag from your local registry does not delete anything except the references to the object. This is normally ideal as it preserves layers; for example if you have seven images based on centos:8 they can share the same base layer to minimise space, deleting one of them removes just the pointer. If all are deleted the layers remain as you may push another centos:8 based image in which case a pointer can be re-added rather than the push having to send all layers of the new image.

However there will be occasions when you do want to remove all unused objects from the registry. For example your local development may have been using f30 base images but all development has moved to f32 so you want to reclaim all space used by the old f30 layers.

In order to do so you need to run the registry garbage cleaner to remove the obsolete objects.

The registry garbage cleaner is provided with the docker registry image, and must be run from within the running container.

It should only be run when the registry is not in use: in read-only mode, or with user access locked out in some other way. While there is nothing to stop you running it while the registry is in active use, any ‘push’ command in progress while the garbage cleanup is running will result in a corrupt image being stored.

With all that being said, switching to read-only mode means stopping the registry container, deleting it, redefining and starting the registry in read-only mode, doing the garbage collect, then stopping, deleting, and redefining/starting the registry in write mode again… and if you are going to do all that work you may as well throw in an extra set of stops/deletes/reconfigures to turn the ability to delete entries on and off as well. Personally I think that is too much complication for a local development registry, so I leave registry storage delete enabled and do not switch to read-only mode (if you need to ensure read-only, it is much simpler to disable the firewall rule allowing access to port 5000 and add it back when done).

To actually perform the garbage reclamation, on the server hosting the Docker container for the registry simply

docker exec -ti registry /bin/sh

bin/registry garbage-collect --dry-run /etc/docker/registry/config.yml
exit

Obviously remove the ‘--dry-run’ flag when you are ready to really perform the garbage cleanup.
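If you prefer not to open an interactive shell in the container, the same thing can be done as a one-liner from the host:

```shell
# Dry run first; drop --dry-run to actually remove unreferenced blobs
docker exec registry bin/registry garbage-collect --dry-run /etc/docker/registry/config.yml
```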

Some notes on restricting external access, apache proxy

As noted earlier, using an insecure registry supposedly prevents any authentication method being used. While it is easy to switch to TLS, that does not magically enable authentication; a lot of extra work is required.

Setting up basic authentication is reasonably easy, and it seems this can be done in clear traffic without TLS if you really want to. However, that will then require all your users to authenticate (via ‘docker login’) not just for ‘push’ but also for ‘pull’ requests. That limits its usability, as ideally users should be able to pull anonymously, or why bother making it available to them in the first place.

The simplest way of setting up authentication where users that ‘push’ must authenticate but anyone can ‘pull’ without authentication would seem to be using Apache as an authenticating proxy, by playing with the recipe provided at https://docs.docker.com/registry/recipes/apache/; changing the restriction on GET to allow everyone should do it. And of course create a conf.d file for the virtual host in your existing web server configuration rather than use the httpd container the example uses. This still uses basic htpasswd authentication, although on the web server itself; the registry remains insecure with this method, but as the registry would normally run on a separate machine from the web server, with all public-facing traffic having to go via the web server to reach it, that is not much of an issue. Also notice that the example does not entirely make sense (htpasswd is run in a httpd docker container while the groups are created outside the container), but it does at least indicate that all the auth is done by Apache in the Apache container and not by the registry container.

One note on the Apache authentication proxy method linked above: the document lists as a drawback that the TLS certs must be moved to the web server as the TLS endpoint instead of the registry, and the proxy is to the registry at http://servername:5000/v2. Yes, it does mean you must leave your registry container configured as discussed above, with no certificates on the registry itself, but you no longer need to configure the “insecure-registries” entry on your clients if they pull via the proxy, as they will now get https responses (provided by the web server).

Also, if you already have a public-facing Apache web server with valid certificates, an Apache proxy setup may be the way to go as you do not need to obtain additional certificates. The issues and benefits of the Apache proxy approach are

  • issue: if you have not used ‘docker login’ to log in to your proxy, a ‘push’ request results in ‘no basic auth credentials’; however a ‘pull’ request returns the web server’s 400 error page (with ‘require valid-user’ for GET, even if the option to not proxy Apache httpd error pages is commented out). However, if you do not restrict GET access that is not an issue
  • benefit: after using ‘docker login’ users can push/pull as expected
  • benefit: configuring the GET method rule to “all granted” rather than “valid user” allows any user to ‘pull’ from your registry, which would be the only point of exposing it to the internet. ‘push’ and delete requests are still denied via the proxy if the user has not issued a ‘docker login’ to your local registry
  • issue: the mapping must be to the path /v2 as that is what the docker command requires; an open invite to hackers? Exactly what can be done with registry GET requests, and whether any are destructive, is unknown
  • benefit: you are not required to obtain any new ssl certificates, if your website already has working certificates you can configure the same certificates your website already uses in the virtual host entry for the registry

Using the Apache proxy with “all granted” for GET and leaving the other http methods unchanged results in all ‘push’ requests being denied unless a user in the ‘pusher’ group you defined has used ‘docker login’, while all ‘pull’ requests are permitted; which is probably what you want if you expose it to the internet for end users to pull images from.

[root@gitlab docker]# docker tag busybox mywebproxy.myexample.org:5043/busybox
[root@gitlab docker]# docker push mywebproxy.myexample.org:5043/busybox
The push refers to repository [mywebproxy.myexample.org:5043/busybox]
514c3a3e64d4: Preparing 
unauthorized: authentication required

[root@gitlab docker]# docker pull mywebproxy.myexample.org:5043/mvs38j:f30
f30: Pulling from mvs38j
33fd06a469a7: Pull complete 
Digest: sha256:46eb3fb42e4c6ffbd98215ea5d008afc6f19be80d5c1b331f7ba23e07c9d8e46
Status: Downloaded newer image for mywebproxy.myexample.org:5043/mvs38j:f30
mywebproxy.myexample.org:5043/mvs38j:f30

While configuring users in htpasswd and group files on an Apache server providing a proxy service can separate users that can only use GET operations from those that can perform all operations (by requiring them to ‘docker login’ via the proxy), if you do intend to allow external users to pull images from your repository my recommendation would be to allow all users to ‘pull’ and no users to ‘push’ (simply have no users in the pusher group) via a public-facing proxy configuration. Any ‘push’ processing should only be done from the trusted internal network anyway.
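For completeness, a user in the pusher group authenticates to the proxy once with ‘docker login’ before pushing; a sketch using the example proxy hostname from this post:

```shell
# Credentials are those created in the htpasswd file used by the proxy
docker login mywebproxy.myexample.org:5043
docker push mywebproxy.myexample.org:5043/busybox
```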

This is my replacement for the script provided on the docker site, to create a working Apache proxy configuration on an existing Apache web server.

DOCKER_REGISTRY_HOST="docker-local" # hostname running the registry container, may be localhost if running it there
DOCKER_REGISTRY_PORT="5000" # port name used by the insecure container
APACHE_DOCKER_AUTH_DIR="/var/www/html/registry-auth" # directory to use for proxy vhost htpasswd and group data files
USERNAME_PUSH_AUTH="mark" # user to demo push access
USERNAME_PUSH_PASSWORD="pusherpwd" # password for above
SSL_CERT_PATH="/etc/letsencrypt/live/mywebserver.myexample.org" # where are the certs used by the existing website

# Do we have a valid cert directory? Change nothing if not
if [ ! -d ${SSL_CERT_PATH} ];
then
   echo "${SSL_CERT_PATH} is not a directory"
   exit 1
fi

# Ensure the directories exist, create the needed files
if [ ! -d ${APACHE_DOCKER_AUTH_DIR} ];
then
   mkdir -p ${APACHE_DOCKER_AUTH_DIR}
   chown apache:apache ${APACHE_DOCKER_AUTH_DIR}
fi
htpasswd -Bbn ${USERNAME_PUSH_AUTH} ${USERNAME_PUSH_PASSWORD} >> ${APACHE_DOCKER_AUTH_DIR}/httpd.htpasswd
echo "pusher: ${USERNAME_PUSH_AUTH}" >> ${APACHE_DOCKER_AUTH_DIR}/httpd.groups
chown apache:apache ${APACHE_DOCKER_AUTH_DIR}/httpd.htpasswd
chown apache:apache ${APACHE_DOCKER_AUTH_DIR}/httpd.groups

# Create the proxy configuration file
cat << EOF > /etc/httpd/conf.d/docker-registry.conf
LoadModule headers_module modules/mod_headers.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule unixd_module modules/mod_unixd.so

Listen 5043
<VirtualHost *:5043>
ServerName mywebproxy.myexample.org
SSLEngine on
SSLCertificateFile ${SSL_CERT_PATH}/fullchain.pem
SSLCertificateKeyFile ${SSL_CERT_PATH}/privkey.pem
SSLCertificateChainFile ${SSL_CERT_PATH}/fullchain.pem
SSLCompression off
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLHonorCipherOrder on
Header always set "Docker-Distribution-Api-Version" "registry/2.0"
Header onsuccess set "Docker-Distribution-Api-Version" "registry/2.0"
RequestHeader set X-Forwarded-Proto "https"
ProxyRequests off
ProxyPreserveHost on
# no proxy for /error/ (Apache HTTPd errors messages)
ProxyPass /error/ !
ProxyPass /v2 http://${DOCKER_REGISTRY_HOST}:${DOCKER_REGISTRY_PORT}/v2
ProxyPassReverse /v2 http://${DOCKER_REGISTRY_HOST}:${DOCKER_REGISTRY_PORT}/v2
<Location /v2>
Order deny,allow
Allow from all
# MUST match realm to the 'basic-realm' used by default in the registry container
# If you change the realm value used by the registry container change this also
AuthName "basic-realm"
AuthType basic
AuthUserFile "${APACHE_DOCKER_AUTH_DIR}/httpd.htpasswd"
AuthGroupFile "${APACHE_DOCKER_AUTH_DIR}/httpd.groups"
# Read access to any users
<Limit GET HEAD>
Require all granted
</Limit>
# Write access to docker-deployer only
<Limit POST PUT DELETE PATCH>
Require group pusher
</Limit>
</Location>
</VirtualHost>
EOF

chown apache:apache /etc/httpd/conf.d/docker-registry.conf

# Implement !
# The below assumes that your apache configuration uses the default of loading all .conf files
# in the conf.d directory; if you selectively load files from that directory ensure you add
# the new conf file created by this script.
systemctl restart httpd

# What you MUST do
# your webserver host must accept traffic on port 5043
# your webserver must be able to pass traffic to port 5000 on the remote (or local) registry host
# Then it should all just work with docker images tagged mywebproxy.myexample.org:5043/imagename[:tag]

Have fun.

About mark

At work, been working on Tandems for around 30yrs (programming + sysadmin), plus AIX and Solaris sysadmin also thrown in during the last 20yrs; also about 5yrs on MVS (mainly operations and automation but also smp/e work). At home I have been using linux for decades. Programming background is commercially in TAL/COBOL/SCOBOL/C(Tandem); 370 assembler(MVS); C, perl and shell scripting in *nix; and Microsoft Macro Assembler(windows).