OpenStack experiences so far

I have been getting interested in OpenStack.

I don’t have the hardware to run a full OpenStack environment, so I have been using the devstack development environment, running inside an Ubuntu virtual machine.

Sooo… memory-wise: I have a Fedora 18 desktop with 6GB of RAM, running a 4GB virsh-managed Ubuntu system with devstack installed, and within the devstack/OpenStack environment I am running a 2GB Fedora 18 webserver, with mysql and hercules also running on that webserver. Surprisingly, the webserver under OpenStack still responds very quickly, and there is no swapping going on either, so I guess those sizes are OK.
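For reference, the basic devstack bring-up inside the Ubuntu VM is roughly the following sketch (the passwords are placeholders, and the repo location and variable names should be checked against the current devstack README):

```shell
# Clone devstack and run its all-in-one installer inside the Ubuntu VM.
git clone https://github.com/openstack-dev/devstack.git
cd devstack

# Minimal localrc with the service passwords stack.sh would
# otherwise prompt for (values here are placeholders).
cat > localrc <<'EOF'
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=secret-token
EOF

# Build and start the all-in-one OpenStack environment.
./stack.sh
```

Once stack.sh finishes, the dashboard and the nova/glance CLIs are available on that VM.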

What I have done for the OpenStack virtual machine so far is:

  1. manually built a bare F18 disk image on a 5GB disk using the F18 install media, converted it to qcow2, and stored it in OpenStack as a public OVF image
  2. created an instance from it, using a flavor with a 20GB disk
  3. manually repartitioned to use the full 20GB (see notes below on why)
  4. tested my configuration script, which pulls down packages and backups to build an exact copy of my live webserver; it works perfectly now (mysql starts with up-to-date databases, httpd starts, hercules starts with up-to-date disks, etc.)
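Steps 1 and 2 above amount to something like this sketch (image and instance names are placeholders of mine; the flags match the glance/nova CLIs as I understand them, so double-check against your installed versions):

```shell
# Step 1: convert the raw disk image built from the F18 install media
# to qcow2, then upload it to glance as a public OVF image.
qemu-img convert -f raw -O qcow2 f18-base.img f18-base.qcow2
glance image-create --name f18-base \
    --disk-format qcow2 --container-format ovf \
    --is-public True < f18-base.qcow2

# Step 2: boot an instance from it with a flavor that has a 20GB disk
# (the default m1.small flavor has a 20GB root disk).
nova boot --image f18-base --flavor m1.small my-webserver
```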

Notes: The manual partitioning in step 3 is needed because I am using an OVF full-disk image; Fedora’s cloud-init doesn’t repartition the disk, so it has to be done manually.
Theoretically, if my disk image were an AMI image, cloud-init would correctly autoresize it. Under devstack, at least, it is possible to add ARI and AKI images, but AMI is no longer supported. The help for “glance image-create” says AMI is valid, but the --kernel_id and --ramdisk_id parameters are no longer supported, so not only could the AMI image not be linked to something that can actually boot it, but the AMI option itself gives an error saying it is not supported.
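The manual resize in step 3 boils down to something like the following inside the booted guest (the device name is an assumption — check with `fdisk -l` first — and the fdisk step is interactive):

```shell
# The 5GB partition table was carried over onto the 20GB virtual disk,
# so the root partition must be grown by hand.
fdisk /dev/vda       # delete partition 1, then recreate it starting at
                     # the SAME first sector but ending at end-of-disk
reboot               # so the kernel re-reads the new partition table
resize2fs /dev/vda1  # grow the ext4 filesystem to fill the partition
```

This is exactly the step that an AMI-style image plus cloud-init would have made unnecessary.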
But I have a perfectly valid AMI ext4 filesystem (plus ramfs and kernel) that I am still playing with, so maybe I will get there. The uec2 utilities aren’t very helpful here either; I don’t want to publish a tarball.

The documentation available isn’t too good either. There is a lot of documentation; although to be precise I should say there is a lot of conflicting documentation. Even on the OpenStack site itself I can find examples of how to do what I want that just no longer work; the documentation is not keeping pace with changes in the environment, which is frustrating and results in entire days being wasted following those examples.

But despite the issues noted above, Rackspace currently does not allow users to upload and use their own images anyway, so there is no real hurry for an AMI other than that I want to skip the manual partitioning step.

The good news

But the upshot is that, now I have scripted everything, I can blow away an OpenStack instance and reinstall it from scratch to a working state that exactly mirrors my live webserver with about 5 minutes of work (human work; the package download/install/customise times are hands-off now, so they don’t count).
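The rebuild cycle is roughly this sketch (the instance name, flavor, and the user-data script name are placeholders, not my actual setup):

```shell
# Blow away the old instance and boot a fresh one from the base image,
# passing the rebuild script as user-data so it runs on first boot.
nova delete my-webserver
nova boot --image f18-base --flavor m1.small \
    --user-data ./rebuild-webserver.sh \
    --poll my-webserver
# rebuild-webserver.sh pulls down the packages and backups, then starts
# mysql, httpd and hercules with up-to-date data — all hands-off.
```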

The bad news

My only issue now is that nowhere on the Rackspace pages can I find a list of which Linux OSes Rackspace provides public images for. My testing has been done on F18, and I know F18 is so different from F17 that I can only use F18… I know because of the hundreds of changes I had to make to update my home website kickstart scripts from F17 to F18; they are really different, especially in the apache configuration and selinux rule spaces. Without knowing what OSes they provide, why sign up?

But I did start the signup process… however, their terms and conditions are in a really tiny scrollable window on the second page of the signup process; I cancelled out of reading them, as it is too much effort for now.


  • At home I have a live internet-facing webserver that works just perfectly. It is installed/recovered by a kickstart install that begins with formatting the drives, with recovery fully tested, so it cannot really be destroyed: on a physical hardware failure it can easily be redeployed to different physical hardware simply by adding a new MAC address to the dhcp config on my kickstart server (yes, I also have a backup kickstart server on separate physical hardware, normally powered off but kept up-to-date; feeling almost bullet-proof). Plus there is also a live mirrored/standby copy of the webserver in a VM on my development system that automatically takes over when the primary fails (tested by removing power to the primary :-), works perfectly). So I really don’t have to go cloud yet, apart from the play potential.
  • Future: move it to the cloud, not because I need to but because I want to play with that stuff

    • The good
      • I like OpenStack, a bonus
      • I can play with OpenStack at home using devstack
      • starting to get the hang of OpenStack
    • The bad
      • to learn how OpenStack works I really need my own server farm; obviously a hoster won’t give guests admin access to their cloud
      • Rackspace hides their terms and conditions really well, so while I will probably end up playing with a server in that space, it will be a short-term solution on one of those prepaid/max-limit cards; and short-lived, as there is no configuration experience to be gained in a guest environment… I say that because at this time Rackspace does not allow users to upload their own images, so Rackspace is not an OpenStack learning environment
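For what it’s worth, the kickstart redeployment trick mentioned above boils down to one host stanza in the kickstart server’s dhcpd.conf (the MAC address, names and addresses below are all made up for illustration):

```
# Hypothetical dhcpd.conf host stanza: add the new hardware's MAC here
# and the PXE/kickstart install rebuilds the webserver onto it.
host webserver {
  hardware ethernet 52:54:00:aa:bb:cc;
  fixed-address 192.168.1.10;
  next-server 192.168.1.5;          # the kickstart/PXE server
  filename "pxelinux.0";
}
```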

I have no real reason to move my webserver into the cloud, other than as a ‘porting’ test, and maybe a cloud instance would give more uptime (but home: multiple physical and VM instances = $0; cloud: Rackspace, the cheapest of all cloud providers I have found, at about US$45 per month for a single instance).
And I would still have to backup the cloud image to my home servers anyway.
So I am still pondering the usefulness of using a cloud provider over continuing to play with devstack at home; it might be better to buy the extra hardware and build an OpenStack environment… except the house doesn’t have enough power points.

About Mark

At work, I have been working on Tandems for around 30 years (programming + sysadmin), with AIX and Solaris sysadmin also thrown in during the last 20 years, plus about 5 years on MVS (mainly operations and automation, but also smp/e work). At home I have been using Linux for decades. My programming background is commercially in TAL/COBOL/SCOBOL/C (Tandem); 370 assembler (MVS); C, perl and shell scripting in *nix; and Microsoft Macro Assembler (Windows).
This entry was posted in Virtual Machines.