Installing OpenStack Ussuri in a multi-compute-node configuration in your home lab

This post covers installing from the RDO project documentation found at https://www.rdoproject.org. It uses the ‘packstack’ tool covered at https://www.rdoproject.org/install/packstack, with the main changes in this post being the definition of additional compute nodes plus installing heat support, as I use heat patterns a lot.

It also covers implementing connectivity from your OpenStack private networks to your existing network, in the correct order; the document on the RDO site covering this does not mention the need to install OpenVswitch and edits the network scripts in what is, for me, the wrong order. I prefer that the bridge be configured and working before trying to install OpenStack.

This post does not cover a high-availability installation. It is for a simple home lab where having a single network/control node is adequate and you have a few servers lying around with enough resources to act as additional compute nodes (whether physical, or with the capacity to run KVM guests to be compute nodes).

While the post covers adding additional compute nodes, if you wish to use the ‘allinone’ environment simply omit the additional ip-addresses from the list of compute nodes when editing the answers file.

The last release of OpenStack from the RDO project for CentOS7 is the “Train” release. To obtain the latest “Ussuri” release you must be running CentOS8. This post covers using CentOS8 as we all want the latest release of course; I used the CentOS8 8.0.1905 installation media.

If you follow all the steps in this post you will end up with a working OpenStack environment with full external network connectivity.

Creating the installation environment, changes needed on the network/controller server

Install two (2) or more new CentOS8 servers. I used KVM, with the first server acting as the network/control node (15Gb memory, 70Gb disk) plus a compute node, and the second server as a dedicated compute node (10Gb memory, 50Gb disk). These servers must have static ip-addresses in your network range. I used 192.168.1.172 for the network/control node and 192.168.1.162 for the dedicated compute node. Memory allocations for dedicated compute nodes depend on what you intend to run, of course. It is important to note these new servers should be assigned fully qualified host names.

Note: if you want more than one dedicated compute node, create the additional CentOS8 servers with static ip-addresses at this time also. However, ensure you do not go mad and over-allocate compute nodes simply because you have spare capacity now; you may want to use that spare capacity for something else later, and it is a lot harder to remove a compute node than to add one. Define only those you need initially and add more later if needed, in preference to trying to unconfigure unused compute nodes later on.

It should also be noted that CentOS8 does not provide the network-scripts package by default, as it is deprecated. However, it must be used rather than NetworkManager to configure your static-ip setup, as the scripts will need to be edited to set up bridging (on the assumption you want network access for your OpenStack environment).
A note on disk sizing. 50Gb of virtual disk size should be more than enough for both servers if you do not intend to create permanent volumes or create snapshots. However, if you do wish to do either of those, it is important to note that by default SWIFT storage is on a loopback device on your filesystem. The maximum size of this can be set when editing the answers file discussed later in this post, but you need to reserve enough space on your network/control node to cope with all the persistent storage you are likely to use.

I should also point out at this time that I place a compute node on the network/control node specifically to run a single small ‘gateway’ server instance as a bridge between the OpenStack private network and my normal external network (if the network/control server is down there is no point having the gateway anywhere else, and placing it there eliminates network issues in reaching it). After the install I disable the hypervisor for the network/control node to force all new instances to be created only on the dedicated compute nodes. You may wish to not place any compute node functions on your network/control node, which is probably the recommended method.

Once the new servers have been created you must update the /etc/hosts files on all the new servers (or your dns servers if you use those) to ensure they are able to resolve each other by fully qualified server name. If they are not able to resolve each other the installation will fail half way through, leaving a large mess to clean up, to the point that it is easier to start again from scratch, so ensure they can resolve each other. Also at this time, on all servers, perform the steps below.

dnf install network-scripts -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
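
As an illustration only (the hostnames below are made up for this example; use your own fully qualified names and static addresses), the /etc/hosts entries on every server would look something like the below.

192.168.1.172   oscontrol.mydept.example.com    oscontrol
192.168.1.162   oscompute1.mydept.example.com   oscompute1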

I prefer using OVS as the backend for the network node rather than the default of OVN; however, I was unable to get this release of OpenStack networking working using OVS, so this post covers installing it to use OVN, which is fine for small home labs but does not support the VPNaaS or FWaaS services and uses Geneve as the encapsulation method for tenant networks.

OpenVswitch should be installed and configured before trying to install OpenStack to ensure the bridge is working correctly.

With all the notes above in mind, perform the following steps only on the network/controller node.

dnf update -y
dnf config-manager --enable PowerTools
dnf install -y centos-release-openstack-ussuri
dnf update -y
dnf install -y openvswitch
systemctl enable openvswitch

Then you need to edit some files in /etc/sysconfig/network-scripts. The initial filename will differ based on your installation, but for this example we will use mine, which is ifcfg-ens3. Copy (not move, copy) the file to ifcfg-br-ex ( ‘cp -p ifcfg-ens3 ifcfg-br-ex’ ); then edit the ifcfg-br-ex file to make the following changes.

  • The TYPE becomes TYPE=”OVSBridge”
  • The DEVICE becomes DEVICE=”br-ex”
  • The NAME becomes NAME=”br-ex”
  • Change BOOTPROTO from none to BOOTPROTO=”static”
  • Add a line DEVICETYPE=ovs
  • Add a line USERCTL=yes
  • Add a line DOMAIN=”xxx.xxx.xxx” using your own domain; for example if your servername is myserver.mydept.example.com use mydept.example.com here
  • Delete the UUID line; that belongs to ens3, not br-ex
  • If a HWADDR line exists delete that also, as it also refers to ens3 (a fresh install of CentOS8 does not use HWADDR)
  • All remaining parameters (IPADDR, DNS, GATEWAY etc.) remain unchanged
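
As an illustration only (the address, prefix, gateway and DNS values below are from my lab and are assumptions for your network; keep whatever values were already in your copied file), the resulting ifcfg-br-ex would look something like the below.

TYPE="OVSBridge"
DEVICE="br-ex"
NAME="br-ex"
DEVICETYPE=ovs
ONBOOT="yes"
BOOTPROTO="static"
USERCTL=yes
IPADDR="192.168.1.172"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.1"
DOMAIN="mydept.example.com"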

Now you need to edit the original ifcfg-xxx file, in my case ifcfg-ens3. This is an exercise in deletion, with only a few edits other than deleting lines, so it is easier to show an example. The below is what the ifcfg-ens3 file looks like after editing. Note that the HWADDR can be obtained from the ‘ether’ field of an ‘ifconfig ens3’ display and the UUID value will have been populated in the original file during the install (either UUID or HWADDR can be used but I prefer to code both).

DEVICE="ens3"
BOOTPROTO="none"
TYPE="OVSPort"
OVS_BRIDGE="br-ex"
ONBOOT="yes"
DEVICETYPE=ovs
HWADDR=52:54:00:38:EF:48
UUID="6e63b414-3c7c-47f2-b57c-5e29ff3038cd"

The result of these changes is that the server ip-address is going to be moved from the network interface itself to the bridge device br-ex and managed by OpenVswitch after the server is rebooted.

One final step, update /etc/hostname to contain the fully qualified name of your server.
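
For example (the hostname shown is purely illustrative), one way to set it is:

hostnamectl set-hostname oscontrol.mydept.example.com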

Then reboot the network/controller node. When it restarts ‘ifconfig -a’ (or ‘ip a’) must show that the ip-address has been moved to the new br-ex device.

If all has gone well the server is configured correctly for the install to begin on this server once the compute nodes have been prepared.

Creating the installation environment, changes needed on the compute nodes servers

After all the work above you will be pleased to see there is very little effort required here. Simply perform the steps below on every compute node to ensure that when the deployment needs packages installed the repositories needed are configured. Also ensure you followed the steps for all servers to switch from using NetworkManager to network and disable firewalld as was mentioned earlier.

dnf update -y
dnf config-manager --enable PowerTools
dnf install -y centos-release-openstack-ussuri
dnf update -y

Also update /etc/hostname to ensure it is set to the correct FQDN for the server, and remember to ensure the /etc/hosts file (or dns servers) have been updated to be able to resolve every new server you have created for this environment.

Backup all your Virtual machine disks

At this stage you have an environment ready to install the RDO packaging of OpenStack onto.

You would not want to have to repeat all the steps again so shutdown the VMs and backup the virtual disk images. This will allow you to restart from this point as needed.

Once the virtual disk images have been backed up restart the VMs and continue.

Preparing the installation configuration settings, on the control node

Packstack by default will build a stand-alone single all-in-one environment with the optional features it thinks you may need. We wish to override this to support our additional compute nodes and add any other optional features you may wish to play with.

To achieve this, rather than simply running packstack with the ‘--allinone’ option, we will use the ‘--gen-answer-file=filename’ packstack option to generate an answers file that we can edit to suit the desired configuration and feature installs.

Note the br-ex mapping to ens3, which was my interface; change ens3 to your interface name. Also note that, as mentioned above, we are using OVN networking.

dnf install -y openstack-packstack
packstack --allinone --provision-demo=n \
   --os-neutron-ovn-bridge-mappings=extnet:br-ex \
   --os-neutron-ovn-bridge-interfaces=br-ex:ens3 \
   --gen-answer-file=answers.txt \
   --default-password=password

In the example above the answers file is written to answers.txt; we need to edit this file to customise it for the environment we wish to build.

You must search for the entry CONFIG_COMPUTE_HOSTS and update it with a comma separated list of the ip-addresses of all the servers you wish to become compute nodes. In my case, as the default was 192.168.1.172 (the network/control node packstack was run on), I just added the second ip-address as well.
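
Using my two addresses as the example, the edited entry becomes:

CONFIG_COMPUTE_HOSTS=192.168.1.172,192.168.1.162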

Other entries to note are CONFIG_CINDER_VOLUMES_SIZE, which defaults to 20G, and CONFIG_SWIFT_STORAGE_SIZE, which defaults to 2G. This amount of space will need to be available on your virtual disk filesystem. The first uses space under /var/lib/cinder and must be large enough to contain all the block storage devices (disks) for all instances you will launch, so if you will be running instances with large disks you will probably want to increase it. The second is a loopback filesystem for swift object storage and I have had no problems with leaving that at 2G. Note however that I do not use snapshots and seldom create additional volumes for instances; if you intend to do either, definitely increase the value of the first. As I am using a virtual disk size of 70G on the network/control node and 50G on the compute nodes I set both to 20G for my use.
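
For reference, as mentioned I set both to 20G for my environment:

CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_SWIFT_STORAGE_SIZE=20G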

As a general rule, options in the file already set to “y” should be left that way, but the other options you can change from “n” to “y” to suit your needs. For example I use heat patterns, so I set CONFIG_HEAT_INSTALL=y (and set CONFIG_HEAT_DB_PW, CONFIG_HEAT_KS_PW, CONFIG_HEAT_DOMAIN_PASSWORD, CONFIG_MAGNUM_DB_PW and CONFIG_MAGNUM_KS_PW); likewise I set CONFIG_MAGNUM_INSTALL=y for container infrastructure support.

Setting entries that are turned on by default to “n” cannot be guaranteed to have been tested, but will generally work.

This will run your KVM hosts ‘hot’ (lm-sensors shows my KVM host cpu temperatures go from 35 to 85, with 80 being the warning level on my cores), so for my use I set CONFIG_CEILOMETER_INSTALL and CONFIG_AODH_INSTALL to “n” as I don’t need performance metrics, and that alone dropped the temperature of the cores by 10 degrees.
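
Pulling the feature choices above together, the optional-feature entries I changed in my answers file were the below (your choices may of course differ, and the password entries mentioned earlier also need values):

CONFIG_HEAT_INSTALL=y
CONFIG_MAGNUM_INSTALL=y
CONFIG_CEILOMETER_INSTALL=n
CONFIG_AODH_INSTALL=n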

Interestingly, when I gave up on OVS networking and switched to OVN networking, temperatures dropped another 10 degrees, so OVN networking is preferred. But expect a lot of cooling fan noise anyway.

When you have customised the answers file to suit your needs you are ready to perform the install.

Performing the OpenStack install

You have done all the hard work now. To install OpenStack simply run packstack using the answers file, ensuring all your new servers can resolve ip-addresses to hostnames via /etc/hosts or DNS and that all the servers are available. You may want to ssh from the network/control node to each of the compute node servers to confirm connectivity before running the packstack command, and recheck name resolution.

Remember that the commands I have used are for an OpenVswitch/OVN environment, so ensure you also performed the steps to create the br-ex bridge covered earlier.

To perform the install simply run packstack using your customised answers file as below. You will be prompted to enter the root password for each of the compute nodes you have configured. You may need to use the --timeout option if you have a slow network; not just a slow internal network but internet as well, as many packages will be downloaded. It will probably take well over an hour to install.

packstack --answer-file=answers.txt [--timeout=600000]

When it states the installation has been successful, define an external flat network mapped to your local network, then create a floating ip allocation range that can be used by OpenStack; ensure the floating ip range is not used by any of your existing devices.

Note that to issue any commands you need to source the keystonerc_admin file that will have been created in the root user's home directory, to load the credentials needed to issue configuration commands.

These commands can be entered as soon as the ‘packstack’ command has completed successfully, as it starts all the services required as part of the installation. Change the network addresses to the addresses used by your external network and ensure the allocation pool range does not conflict with any addresses handed out by your dhcp server or home router.

source keystonerc_admin
neutron net-create external_network \
  --provider:network_type flat \
  --provider:physical_network extnet \
  --router:external
neutron subnet-create --name public_subnet \
  --enable_dhcp=False \
  --allocation-pool=start=192.168.1.240,end=192.168.1.250 \
  --gateway=192.168.1.1 external_network 192.168.1.0/24

Once this step has completed, ‘tenant’ private networks can be created and configured to use the router for external network access, and floating ip-addresses can be assigned to servers within the tenant private network to allow servers on the external network to connect to servers on the private network. What I would normally do is start only one instance with a floating ip-address per private network and simply add a route on my desktop to access the private network via that gateway (ie: if the gateway was assigned floating ip 192.168.1.243 and the private network within the OpenStack tenant environment was 10.0.1.0/24 I would simply “route add -net 10.0.1.0/24 gw 192.168.1.243” and be able to ssh directly to any other instances in the 10.0.1.0/24 network via that route).

If you have reviewed the documentation on the RDO website you will have seen that a project is created, a user is assigned to the project, and the router and tenant private network are created from the command line interface. I personally prefer doing that through the horizon dashboard after installation to ensure everything looks like it is working correctly; to do so logon as admin and create a project, then create a personal user and assign it to that project in an admin role.

To access the horizon interface simply point your web browser at “http://the address of your controller node/dashboard”.

After creating the project and user logoff the dashboard and logon again as the new user.

  • Go to Network/Networks and create a new private network for your project, for example marks_10_0_1_0, with subnet marks_10_0_1_0_subnet1 using 10.0.1.0/24 (let the gateway default); in the subnet details set the allocation pool to 10.0.1.5,10.0.1.250 (do not use the first few addresses as they may be reserved, ie: 10.0.1.1 is normally assigned to the gateway so do not permit it in the allocation pool)
  • Then create a router for your project using Network/Routers, for example marks_project_router, and assign its external gateway to the ‘external_network’ we created from the command line earlier; then from the router list select the new router, go to the interfaces tab, and from there attach your project's private network to the router also. Instances in your project will now have external connectivity and can be assigned a floating ip-address from the allocation range defined from the command line earlier when the public subnet was created
  • This would also be a good time to use Network/Security Groups to create a security group to use for testing, such as a group named allow_all. A new group defaults to allow all egress, but we want to also add rules for ingress IPv4 ALL TCP, ingress ALL ICMP (to allow ping) and egress ALL ICMP so we can test connectivity to everything and use tools like ping for troubleshooting. Obviously when you are happy an instance is working you would want to create custom security groups only permitting the access required, but rules can be added/deleted on the fly and multiple groups can be attached, so a server may for example use one group for http and another for mariadb rather than needing a combined group containing both
  • Before logging out, and still as your own userid, go to Project/Compute/Key Pairs and create an ssh key pair for your userid; you will need it to ssh into instances you launch (a command line sketch of these dashboard steps follows this list)
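
If you prefer the command line to the dashboard, a rough equivalent of the steps above is sketched below; the names and address ranges are the example ones from the bullet list, and you would source the rc file for your own user (creating one is covered in the next paragraph) rather than keystonerc_admin so the resources land in your project.

source keystonerc_username
openstack network create marks_10_0_1_0
openstack subnet create marks_10_0_1_0_subnet1 --network marks_10_0_1_0 \
  --subnet-range 10.0.1.0/24 --allocation-pool start=10.0.1.5,end=10.0.1.250
openstack router create marks_project_router
openstack router set marks_project_router --external-gateway external_network
openstack router add subnet marks_project_router marks_10_0_1_0_subnet1
openstack security group create allow_all
openstack security group rule create --ingress --protocol tcp allow_all
openstack security group rule create --ingress --protocol icmp allow_all
openstack security group rule create --egress --protocol icmp allow_all
openstack keypair create marks_key > marks_key.pem
chmod 600 marks_key.pem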

At this time you would also want to create an ‘rc’ file for your new user: ‘cp -p keystonerc_admin keystonerc_username’ and edit the new file to contain your new username and password and set the project to your personal project; this is the rc file you will source when working from the command line for your project instead of using the admin project.
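
As a sketch only (the exact variable list comes from the generated keystonerc_admin file, so edit a copy of that rather than typing this in; the username, password and project name here are illustrative), the edited file would look something like the below.

unset OS_SERVICE_TOKEN
export OS_USERNAME=mark
export OS_PASSWORD='mypassword'
export OS_AUTH_URL=http://192.168.1.172:5000/v3
export OS_PROJECT_NAME=marks_project
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3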

This is a good time to look around. From your project signon look at Admin/Compute/Flavors (if you remembered to make your personal userid an admin role); you will see that the default flavors are too large for most home lab use, and you will use this location to define custom flavors useful to your environment as you load images to use for your servers. You will also notice that under the Images selection there are no images available, which is correct, as we have not yet added any.

Also check the Hypervisors tab to make sure all the compute nodes you defined in the answers file have been correctly setup. Testing the compute nodes is covered later in this post.

Obtaining cloud images to use

To launch an instance you need a cloud image and a flavour that supports the image. Fortunately many distributions provide cloud images that can be used in OpenStack, for example the Fedora33 one can be found at https://alt.fedoraproject.org/cloud/. There are also CentOS cloud images available at https://cloud.centos.org/centos/8/.

It is important to note that not all operating systems provide cloud images, and some operating systems simply will not run in an OpenStack environment; an example of one such is OpenIndiana, which runs fine in a KVM but will not run properly under OpenStack.

Once you have a cloud image downloaded onto your network/control node you need to load it into OpenStack, to do so you need to define a few minimum values needed for the image. Using the F33 image as an example the cloud image disk size can be obtained from qemu-img as below.

[root@vmhost3 Downloads]# qemu-img info Fedora-Cloud-Base-33-1.2.x86_64.qcow2
image: Fedora-Cloud-Base-33-1.2.x86_64.qcow2
file format: qcow2
virtual size: 4 GiB (4294967296 bytes)
disk size: 270 MiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16
[root@vmhost3 Downloads]#

That shows the minimum virtual disk size that can be allocated is 4Gb. It is important to do this step as disk sizes change; for example the CentOS7 cloud image needs a minimum of a 10G virtual disk. It is up to you to decide what the minimum memory requirements should be. To load the F33 image you would use commands such as the below (making it public allows all tenants to use it; if omitted only the project owner can use it; you would create a keystonerc_username for each of your own project/user environments if you wanted the image private to your own environment).

source keystonerc_admin     # note you can use your own projects rc file here
glance image-create \
 --name "Fedora33" \
 --visibility public \
 --disk-format qcow2 \
 --min-ram 512 \
 --min-disk 4 \
 --container-format bare \
 --protected False \
 --progress \
 --file Fedora-Cloud-Base-33-1.2.x86_64.qcow2

Once the image is loaded you would use the horizon dashboard to create a new flavour that supports the new image, rather than use the default flavours which will be too large. The minimum disk size is important and you must code it to avoid issues; while you could load the image with no min-disk size, if you then tried to launch an instance from it using a flavor with a 2Gb disk size it would obviously crash.

Also note that the cloud-init scripts will resize the image upward automatically if a larger disk size is used in a flavor so you can load the image with a min-disk of 4 and use a flavor with a disk size of 10G quite happily and end up with a 10G disk in your instance.

You should load all the OpenStack cloud images you are likely to need at this time; I would generally have a few versions of Fedora and CentOS. Create flavors for the images also.
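
For example, a small flavor suitable for the Fedora33 image loaded above could be created from the command line as below (the flavor name and sizes are just illustrative; the dashboard Admin/Compute/Flavors screen achieves the same thing).

source keystonerc_admin
openstack flavor create --ram 1024 --disk 10 --vcpus 1 m1.f33.small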

Testing your installation

If you had a look around the dashboard as suggested earlier you will have found the hypervisors display; from the compute nodes tab of that screen you are able to individually disable compute nodes.

What you should do as a final verification step is disable all but one compute node at a time and launch an instance with a security group rule that allows ICMP, so you can ping it to ensure the instance starts correctly on each individually active compute node and that you have console connectivity to the instance via the dashboard instances screen (a command line way of disabling compute nodes is shown below). Some earlier releases required manual correction of configuration files on compute nodes to obtain console access; that has been fixed in this release, but it still needs to be tested.
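
If you prefer the command line to the dashboard for disabling and re-enabling compute nodes, something like the below should work (the hostname is illustrative; use the host names shown by ‘openstack compute service list’).

source keystonerc_admin
openstack compute service list --service nova-compute
openstack compute service set --disable oscompute1.mydept.example.com nova-compute
openstack compute service set --enable oscompute1.mydept.example.com nova-compute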

From the instances screen you can select each instance and the details will show you which compute node an instance has been configured on.

You should assign a floating ip-address to one of the instances and ensure you can ssh into the instance on the assigned floating ip-address using the key-pair you created. Note that the userid to use for each instance differs: for Fedora cloud images the user will be fedora; likewise use the userid centos for CentOS cloud images. That will verify inward connectivity. Use syntax such as “ssh -i identity_file.pem fedora@floating-ip-address”.
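
The floating ip can also be created and assigned from the command line rather than the dashboard; a sketch (the instance name and the resulting address are illustrative) is below.

source keystonerc_username
openstack floating ip create external_network
openstack server add floating ip my_test_instance 192.168.1.243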

From that ssh session you should also ping the instances you started on the other compute nodes to test private network connectivity; you cannot ssh into them unless you copy your key to the instance you just logged onto. Note that if you set the root password in the user configuration section when you launched your instances, you could also logon directly from a console session to do the ping tests.

You should also ping one of the servers in your external network to ensure outward connectivity, and an internet based server also to ensure routing to the big wide world works, as you will probably want to be installing software from repositories onto your instances.

Once you are happy all compute nodes are working correctly you can re-enable all the compute nodes, delete your test instances, and start using your environment.

Additional Tips and notes

  • The Fedora33 image load example used in this post had a minimum memory allocation of 512; dnf will be killed by the early out-of-memory killer with this allocation (even though it then runs OK with over 100K free), so if you want to use dnf allocate at least 756 in your flavour
  • In the user custom script section always have “#!/bin/bash” as the first line, or commands will not be executed and cloud-init will produce an invalid multipart/mime error; to enable easy troubleshooting I normally have the following in that section
    #!/bin/bash
    echo "password" | passwd root --stdin
    
  • you do not need many floating ip-addresses; for each private network I assign only one, to a single server, and use that server as a gateway/router into the private network from my external network
  • I recommend installing heat; using heat patterns to deploy a stack of servers is the easiest way to do so, and some of my earlier posts have examples of using heat patterns to deploy multiserver test environments
  • remember to make your personal userid an admin role; this avoids having to repeatedly login as admin to enable/disable hypervisors and manage flavors
  • Also note that if you have a slow network between compute nodes the first instance deployment for an image may fail, as the image must be copied to the remote compute node before the instance launches and may time out. Waiting 5-10mins and trying again will work, as the image will have completed transferring and will not need to be copied again; although the image will be cleaned up by timers if it remains unused on the compute node for too long
