OpenStack Ocata install from the RDO repositories

Why a fresh install instead of an upgrade?

I gave up on trying to upgrade from Newton to Ocata using the RDO repositories.

The upgrade instructions on the RDO documentation web pages have now been updated for the Ocata release, but it is anybody's guess what order they need to be performed in. While there is a special section on upgrading nova to use the placement service, it does not work in the order given on the web page: some of the commands need OpenStack running, yet the upgrade steps come after OpenStack has been shut down. I tried to muddle through them and ended up with an unusable system again.

Interestingly, the placement database on a fresh install is nova_cell0, yet the database upgrade scripts still want a nova_api_cell0. That is only of interest to anyone who read my posts on my attempts to upgrade, where the available documentation and the upgrade scripts had completely different ideas about what databases were needed.

Anyway, while the upgrade from Mitaka to Newton was easy, Newton to Ocata (after weeks of trying) has been put in the waste-of-time basket.

So instead, a complete fresh install of Ocata

Now that Ocata is available as a simple packstack install I decided to install my environment from scratch (after using glance image-download to save all my images of course).
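
Saving the images is a one-liner per image; a rough sketch, assuming the packstack-created keystonerc_admin credentials file and an example output file name:

    source ~/keystonerc_admin
    glance image-list
    glance image-download --file /backup/my-image.qcow2 <image-id-from-the-list>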

I created two new CentOS 7 VMs using the minimum install option, and the only issue encountered in preparing the environment was

  • openvswitch is not in the standard CentOS 7 repositories; I had to add the RDO repository and install openvswitch before I could create the br-ex bridge needed for using openvswitch (a rough sketch of those steps is below); but done
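
A minimal sketch of that preparation, assuming the stock RDO release package for Ocata; the physical interface attached to br-ex is an example and will differ per environment:

    # add the RDO Ocata repository and install openvswitch
    yum install -y centos-release-openstack-ocata
    yum install -y openvswitch
    systemctl enable openvswitch
    systemctl start openvswitch
    # create the br-ex bridge the external flat network will use
    ovs-vsctl add-br br-ex
    # attach the external NIC to the bridge (interface name is an example)
    ovs-vsctl add-port br-ex eth1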

I used packstack to create an answers file to edit, and only changed the following (the corresponding answer-file keys are sketched below the list):

  • set the admin password to password :-)
  • set the use heat flag to “y” as I need heat
  • added my second VM as a second compute host
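
A sketch of the packstack run and the answer-file keys those changes normally map to; the password and IP addresses are placeholders:

    packstack --gen-answer-file=/root/answers.txt
    # edit /root/answers.txt and change, for example:
    #   CONFIG_KEYSTONE_ADMIN_PW=password
    #   CONFIG_HEAT_INSTALL=y
    #   CONFIG_COMPUTE_HOSTS=192.168.1.10,192.168.1.11
    packstack --answer-file=/root/answers.txt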

The packstack install using the answers file failed on rabbitmq starting; I did an --allinone packstack install, same error. A quick search on Google showed this is expected… you must add the server names into the /etc/hosts files on both servers if they are not in DNS, as the rabbitmq install wants to look up the IP addresses (I didn't find this on the RDO site, just posts from others who had hit the same issue).

After adding the server names to the /etc/hosts files, running packstack created the environment seemingly correctly: I could log on to the dashboard and had two compute nodes under the hypervisors tab.
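
For example, /etc/hosts on both servers ended up with entries along these lines (the hostnames and addresses are placeholders):

    192.168.1.10   ocata-controller.localdomain   ocata-controller
    192.168.1.11   ocata-compute2.localdomain     ocata-compute2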

The remaining issues I had to resolve after it apparently installed correctly were

  • the cirros image installed by default fails to install and stays in the queued state; it had to be deleted from the CLI as the dashboard could not do it (Google searches show this hiccup in the install is common). I only noticed it because I forgot to change the “y” to “n” to stop it downloading, as I didn't want it anyway :-) (the delete command is sketched after this list)
  • metadata service is unreachable from instances, another known issue I have hit before with a packstack install. On all compute hosts edit /etc/neutron/dhcp_agent.ini and set “force_metadata = true”. Failure to do so will prevent ssh keys and configuration scripts (well, all metadata) from being applied to instances… meaning you have no chance of logging on to them (sketched after this list)
  • console access only works for instances on the controller node; console access to instances on additional compute nodes cannot connect. To resolve this, on all additional compute nodes edit /etc/nova/nova.conf and
    • in the [vnc] section add the missing line “vncproxy_url = http://nnn.nnn.nnn.nnn:6080”, where nnn.nnn.nnn.nnn is the IP address of your controller node
    • check that novncproxy_base_url is set to http://nnn.nnn.nnn.nnn:6080/vnc_auto.html (again the controller node's address)
    • check that vncserver_listen is set to the IP address of your compute node and not to 0.0.0.0
    • and for vncserver_proxyclient_address use the IP address of the compute node, not the FQDN

    and you then have dashboard console access to instances on the additional compute nodes (the resulting [vnc] section is sketched after this list)

  • I was unable to get external networking configured using a flat network; the neutron l3-agent.log showed it was trying to create an entry that already existed. Resolution: delete all the demo networks that were provisioned and then add the external flat network (so the assumption is that only one flat network is permitted per bridge?), and we are in business (commands sketched after this list)
  • And damn security rules. With IPv4 ICMP allowed inbound and outbound, IPv4 TCP allowed all outbound, and IPv4 TCP port 22 allowed inbound (all normal, you would think), an instance cannot ping anything at all by DNS name.
    The only resolution I found for this is that when creating a custom security group you should not delete the default egress IPv6 any and IPv4 any rules, as deleting them and adding back an IPv4 egress rule for all port ranges does stop DNS lookups (and probably breaks a lot more things as well). So leave the defaults and just add ingress rules as needed (sketched after this list).
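
For reference, rough sketches of the fixes above follow. For the stuck cirros image, the CLI delete is a one-liner (the image name is whatever packstack registered; check with the list command first):

    source ~/keystonerc_admin
    openstack image list
    openstack image delete cirros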
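
The dhcp agent change, in configuration form; the restart assumes the standard RDO service name:

    # /etc/neutron/dhcp_agent.ini (on the nodes running the dhcp agent)
    [DEFAULT]
    force_metadata = true

    # then restart the agent
    systemctl restart neutron-dhcp-agent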
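
The nova.conf [vnc] section on an additional compute node then looks something like this, with 10.0.0.10 standing in for the controller's IP address and 10.0.0.11 for the compute node's:

    # /etc/nova/nova.conf on the additional compute node
    [vnc]
    vncproxy_url = http://10.0.0.10:6080
    novncproxy_base_url = http://10.0.0.10:6080/vnc_auto.html
    vncserver_listen = 10.0.0.11
    vncserver_proxyclient_address = 10.0.0.11

    # then restart the compute service on that node
    systemctl restart openstack-nova-compute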
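
A sketch of the external flat network commands, assuming the physical network name extnet matches the bridge mapping packstack set up for br-ex; the network names and addresses are examples:

    source ~/keystonerc_admin
    # after deleting the provisioned demo networks and routers:
    openstack network create --external --share \
      --provider-network-type flat --provider-physical-network extnet \
      external_network
    openstack subnet create --network external_network --no-dhcp \
      --allocation-pool start=192.168.1.100,end=192.168.1.150 \
      --gateway 192.168.1.1 --subnet-range 192.168.1.0/24 \
      external_subnet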
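
And a sketch of a security group done that way, leaving the default egress rules alone and only adding ingress rules (the group name is an example):

    openstack security group create myservers
    # leave the default egress rules in place; add ingress rules only
    openstack security group rule create --ingress --protocol icmp myservers
    openstack security group rule create --ingress --protocol tcp --dst-port 22 myservers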

And finally, after a few days of trawling through logs and living on Google, I have an Ocata system that… well, honestly, as a user I cannot see much difference in usage from Newton, which I guess is a good thing.
And being on the latest release means that, with luck, I may be able to upgrade to the next one when it becomes available :-).
