This is for those of you who have installed the RDO release using the default loopback device for cinder volume storage and swift storage. It is a brief set of notes, mainly for my own future reference, on where the space is actually being used, to stop myself deleting files I should not.
It is specific to the Newton release. While I am aware Ocata is now available, I have only just finished upgrading from Mitaka to Newton (which was a few very painful months; the next jump can wait until I finish writing posts on the last one).
The key thing is that when you are running low on space you should not delete any large files, even if they appear not to be in use; they are probably important, as I discovered :-)
Another very important point is that this post is primarily for “all-in-one” RDO installed systems, as I will have to read a lot more documentation to find out what component(s)/servers should be managing these files. My setup is an “all-in-one” VM with a second compute node VM added, all (now, as far as I can tell) working 100% correctly; cinder and swift storage is not created on the second compute node.
Volume storage for cinder and swift is created by a minimal (or all-in-one) install as large disk files on the machine(s) the install is performed on. Instance disk storage is placed in the compute node(s)' filesystem.
First a performance note for cinder volumes
When deleting a volume you will notice your system seems to come to a halt for a while. That is easily avoided, although avoiding it is not recommended if you are using VM(s) to run your test environment. The cause is an entry (the default) in cinder.conf, ‘volume_clear = zero’, which basically means that when you delete a volume it is overwritten with zeros, presumably as a compliance/security setting, and that obviously takes a long time with large volumes. The considerations for changing it are listed below, followed by a sample cinder.conf snippet:
- setting it to “volume_clear = none” will greatly speed up deletion of a cinder volume, useful in a lab-only environment that is not itself running inside a VM
- leaving it set to write zeros is still recommended for a lab environment running within a VM, simply because if you occasionally compress your VM disk image, having lots of zeros together is a good idea if you really want to reclaim the space used by that disk image
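For reference, the relevant part of cinder.conf looks roughly like the sketch below; only the one option is shown and the surrounding settings will differ on your install, so treat it as an illustration rather than a copy-paste. On RDO the cinder volume service (openstack-cinder-volume) needs a restart after changing it.

# /etc/cinder/cinder.conf (sketch, only the relevant option shown)
[DEFAULT]
# default behaviour: overwrite deleted volumes with zeros (slow, but compresses well)
volume_clear = zero
# lab-only alternative: skip the wipe entirely for fast deletes
#volume_clear = none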
Swift storage
A simple RDO install creates the file /srv/loopback-device/swiftloopback as a local virtual disk; this is mounted as a normal loopback device via /etc/fstab on mountpoint /srv/node/swiftloopback. As the size of that file limits the amount of space you will have for swift storage, you need to define it with a reasonable size when doing the RDO install.
[root@region1server1 ~(keystone_mark)]# ls -la /srv/loopback-device/swiftloopback
-rw-r--r--. 1 root root 10737418240 Mar 20 00:20 /srv/loopback-device/swiftloopback
[root@region1server1 ~(keystone_mark)]# file /srv/loopback-device/swiftloopback
/srv/loopback-device/swiftloopback: Linux rev 1.0 ext4 filesystem data, UUID=e674ab98-5137-4895-a99d-ae92302fa035 (needs journal recovery) (extents) (64bit) (large files) (huge files)
As I do not use swift storage that is an empty filesystem for me… and as I have not (as far as I am aware) ever used it, I am surprised it needs journal recovery. A umount/e2fsck reported it clean, yet a remount still shows journal recovery needed ? It mounts OK anyway, so it is on the not-to-worry-about list for now as I do not need it.
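If you do use swift and want to see how full that loopback filesystem is, something like the following should do it (a sketch; the paths are the ones packstack created on my system, adjust for yours):

# size of the backing file, and usage of the filesystem mounted from it
ls -lh /srv/loopback-device/swiftloopback
df -h /srv/node/swiftloopback
mount | grep swiftloopback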
Cinder volume storage
Do not, as I did when trying to reclaim space, delete the file /var/lib/cinder/cinder-volumes. Unless of course you like rebuilding filesystems, because that file is a “disk”.
A simple RDO install creates the file /var/lib/cinder/cinder-volumes as a local virtual disk; a volume group is created and that file is added as a PV to the volume group. As the size of that file limits the amount of space you will have for volume storage, you need to define it with a reasonable size when doing the RDO install.
[root@region1server1 cinder]# file /var/lib/cinder/cinder-volumes
/var/lib/cinder/cinder-volumes: LVM2 PV (Linux Logical Volume Manager), UUID: d76UV6-eKFo-LS5v-K0dU-Lvky-QTEy-kkKK0X, size: 22118662144
When a cinder volume is required it is created as an LV within that volume group. An example of that cinder-volumes file with two instances defined (requiring two volumes) is shown below.
[root@region1server1 cinder]# ls -la /var/lib/cinder/cinder-volumes
-rw-r-----. 1 root root 22118662144 Mar 19 23:54 /var/lib/cinder/cinder-volumes
[root@region1server1 cinder]# file /var/lib/cinder/cinder-volumes
/var/lib/cinder/cinder-volumes: LVM2 PV (Linux Logical Volume Manager), UUID: d76UV6-eKFo-LS5v-K0dU-Lvky-QTEy-kkKK0X, size: 22118662144
[root@region1server1 cinder]# vgdisplay
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  14
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.60 GiB
  PE Size               4.00 MiB
  Total PE              5273
  Alloc PE / Size       2304 / 9.00 GiB
  Free  PE / Size       2969 / 11.60 GiB
  VG UUID               0zP0P3-V59L-CJBF-cbq2-2X5R-mNwd-wA4oO0
[root@region1server1 cinder]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/loop1
  VG Name               cinder-volumes
  PV Size               20.60 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5273
  Free PE               2969
  Allocated PE          2304
  PV UUID               d76UV6-eKFo-LS5v-K0dU-Lvky-QTEy-kkKK0X
[root@region1server1 cinder]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-3957e356-880a-4c27-b18e-9ca2d24cfaad
  LV Name                volume-3957e356-880a-4c27-b18e-9ca2d24cfaad
  VG Name                cinder-volumes
  LV UUID                WH8p3K-Rujb-dl3p-QscE-BcxX-0dxA-qV0lz5
  LV Write Access        read/write
  LV Creation host, time region1server1.mdickinson.dyndns.org, 2017-02-07 17:11:38 -0500
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-bfd6fe7f-d5a4-4720-a5fd-07acdd0e8ef0
  LV Name                volume-bfd6fe7f-d5a4-4720-a5fd-07acdd0e8ef0
  VG Name                cinder-volumes
  LV UUID                dSuD5D-NuYR-4dMh-GJai-AZDr-idND-fDPPGa
  LV Write Access        read/write
  LV Creation host, time region1server1.mdickinson.dyndns.org, 2017-03-14 00:13:19 -0400
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:1
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-aecee04f-a1b2-4daa-bdf4-26da7c346495
  LV Name                volume-aecee04f-a1b2-4daa-bdf4-26da7c346495
  VG Name                cinder-volumes
  LV UUID                pBMTT2-IC1d-FPI4-wxSG-hxcd-6mIT-ZHgHmj
  LV Write Access        read/write
  LV Creation host, time region1server1.mdickinson.dyndns.org, 2017-03-14 00:51:49 -0400
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:3
[root@region1server1 ~(keystone_mark)]# cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 3957e356-880a-4c27-b18e-9ca2d24cfaad | in-use | | 3 | iscsi | true | 07db7e7a-beef-46c6-8ca4-01331bb01a80 |
| aecee04f-a1b2-4daa-bdf4-26da7c346495 | in-use | | 3 | iscsi | true | d99bf2a4-9d01-49a9-bb43-24389b73d711 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
Note one annoyance: the “cinder list” command only displays volumes assigned to the current tenant credentials, so running it using keystonerc_admin will return no volumes (unless you are using admin for all your projects of course). That is correct behaviour.
And another annoyance is that it seems the only way (that I have found so far anyway) to check on free space in the cinder storage volume group is pvdisplay; if you have insufficient space to create a volume you will get a “block device mapping error” rather than a “you have run out of space” error.
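In practice a quick check against the volume group is enough; the commands below are just the terser forms of the vgdisplay/pvdisplay output shown earlier in this post:

# free space left for new cinder volumes is the VFree column
vgs cinder-volumes
# or per physical volume
pvs
# or the full listing used above
vgdisplay cinder-volumes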
And a word of caution: the default in Mitaka was to boot from the image (not to create a volume) when launching an instance; that has changed in Newton, which defaults to creating a boot volume when an instance is launched. Unless you specifically want a volume created you should remember to flick off the “create new volume” option when launching instances, or you will soon run out of space.
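From the command line the same thing is achieved by simply booting from an image; a sketch using names from my environment (the network id is a placeholder, substitute your own):

# boots from the glance image on the compute node, no cinder volume is created
nova boot --flavor marks.tiny --image Fedora24-CloudBase \
     --key-name marks-keypair --nic net-id=<tenant-network-id> \
     test-no-boot-volume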
Storage of images
Images are stored as individual files on the filesystem of the control/region node; they are not expected to be on the compute nodes. So when you use “glance image-upload” or the dashboard import of new image files, they are stored as individual files on the main (control/region) server.
[root@region1server1 ~(keystone_mark)]# ls -la /var/lib/glance/images
total 214516
drwxr-x---. 2 glance glance      4096 Mar  9 22:17 .
drwxr-xr-x. 3 glance nobody      4096 Feb  4 19:52 ..
-rw-r-----. 1 glance glance 206359552 Feb  5 16:19 152753e9-0fe2-4cc9-8ab5-a9a61173f4b9
-rw-r-----. 1 glance glance  13287936 Feb  4 19:53 bcd2c760-295d-498b-9262-6a83eb3b8bfe
[root@region1server1 ~(keystone_mark)]# glance image-list
+--------------------------------------+--------------------+
| ID | Name |
+--------------------------------------+--------------------+
| bcd2c760-295d-498b-9262-6a83eb3b8bfe | cirros |
| 152753e9-0fe2-4cc9-8ab5-a9a61173f4b9 | Fedora24-CloudBase |
+--------------------------------------+--------------------+
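For completeness, this is roughly how an image file ends up in that directory from the command line (a sketch; the image name and source file name are examples):

# upload a qcow2 cloud image into glance, stored under /var/lib/glance/images
glance image-create --name Fedora24-CloudBase \
       --disk-format qcow2 --container-format bare \
       --visibility public --file Fedora-Cloud-Base-24-1.2.x86_64.qcow2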
When an instance is launched requiring one of the images, and the instance is launched without a volume being created, a copy of the image will be magically copied to the compute node the instance is to be launched on. If a volume is created, the instance boots across the network using the volume created in the cinder service on the main control node, as discussed below.
Storage used by defined instances
Whether an instance is running or not it will be using storage.
if launched to use a volume as the boot source (Newton default)
If an instance is launched to use a boot volume as the boot source then the volume will be created in the cinder storage residing on the machine providing the cinder service. This is unlikely to be the compute node, meaning you require a fast network in order to access the volume remotely from the compute node. And this seems to be the default in Newton.
On the compute node an instance directory is created, but no local storage on the compute node is required as the disk image(s) are stored on the remote cinder storage server.
[root@compute2 ~]# ls -laR /var/lib/nova/instances
/var/lib/nova/instances:
total 24
drwxr-xr-x. 5 nova nova 4096 Mar 14 00:52 .
drwxr-xr-x. 9 nova nova 4096 Mar 14 00:13 ..
drwxr-xr-x. 2 nova nova 4096 Mar 14 23:30 _base
-rw-r--r--. 1 nova nova   52 Mar 20 23:50 compute_nodes
drwxr-xr-x. 2 nova nova 4096 Mar 14 00:52 d99bf2a4-9d01-49a9-bb43-24389b73d711
drwxr-xr-x. 2 nova nova 4096 Mar 14 23:30 locks

/var/lib/nova/instances/_base:
total 8
drwxr-xr-x. 2 nova nova 4096 Mar 14 23:30 .
drwxr-xr-x. 5 nova nova 4096 Mar 14 00:52 ..

/var/lib/nova/instances/d99bf2a4-9d01-49a9-bb43-24389b73d711:
total 36
drwxr-xr-x. 2 nova nova  4096 Mar 14 00:52 .
drwxr-xr-x. 5 nova nova  4096 Mar 14 00:52 ..
-rw-r--r--. 1 root root 26313 Mar 15 01:15 console.log

/var/lib/nova/instances/locks:
total 8
drwxr-xr-x. 2 nova nova 4096 Mar 14 23:30 .
drwxr-xr-x. 5 nova nova 4096 Mar 14 00:52 ..
-rw-r--r--. 1 nova nova    0 Feb  4 20:57 nova-storage-registry-lock
[root@compute2 ~]#

[root@region1server1 ~(keystone_mark)]# nova show d99bf2a4-9d01-49a9-bb43-24389b73d711
+--------------------------------------+----------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | compute2.mdickinson.dyndns.org |
| OS-EXT-SRV-ATTR:hostname | test-compute2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute2.mdickinson.dyndns.org |
| OS-EXT-SRV-ATTR:instance_name | instance-00000012 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-gwvib2vv |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 4 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | stopped |
| OS-SRV-USG:launched_at | 2017-03-14T04:52:10.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2017-03-14T04:51:40Z |
| description | test-compute2 |
| flavor | marks.tiny (2a10119c-b273-48d5-b727-57bda760a3d2) |
| hostId | 5afd74a02f0332b995921841d985ba3ea39e9a077acbdb3d8a8e9a55 |
| host_status | UP |
| id | d99bf2a4-9d01-49a9-bb43-24389b73d711 |
| image | Attempt to boot from volume - no image supplied |
| key_name | marks-keypair |
| locked | False |
| metadata | {} |
| name | test-compute2 |
| os-extended-volumes:volumes_attached | [{"id": "aecee04f-a1b2-4daa-bdf4-26da7c346495", "delete_on_termination": false}] |
| security_groups | default |
| status | SHUTOFF |
| tags | [] |
| tenant-mark-10-0-3-0 network | 10.0.3.19 |
| tenant_id | 325a12dcc6a7424aa1f96d63635c2913 |
| updated | 2017-03-15T05:15:53Z |
| user_id | f833549171d94242b3af5d341b9270de |
+--------------------------------------+----------------------------------------------------------------------------------+
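One way to confirm the disk really is remote in this case is to look for the iSCSI session the compute node opens back to the machine running cinder; a sketch, assuming the default lvm/iscsi backend that packstack sets up:

# on the compute node: the attached cinder volume shows up as an iSCSI session and block device
iscsiadm -m session
lsblk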
Snapshots for boot volumes are also written to cinder storage as LVM logical volumes and show up with lvdisplay, so additional cinder storage space is needed for those as well. Interestingly, the “zero” setting mentioned at the start of this post does not appear to apply to snapshots stored in cinder; they get deleted quickly.
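Creating and listing such a snapshot from the command line looks roughly like this (a sketch using one of the volume ids shown above; --force is needed because the volume is attached, and the snapshot name is an example):

cinder snapshot-create --force True \
       --name test-compute2-snap aecee04f-a1b2-4daa-bdf4-26da7c346495
cinder snapshot-list
# the snapshot appears as another LV in the cinder-volumes volume group
lvs cinder-volumes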
They are displayed in the dashboard “Volume Snapshots” tab for the project, as well as being available as selectable images from the dashboard image list (which I do not think they should be, as they need a virt-sysprep at least to be usable I would have thought).
While they are visible in the project's volume “Snapshots” tab, if the snapshot is deleted via the dashboard the image entry added for the snapshot remains; it must be separately deleted from the dashboard images page (or from the command line, as shown below).
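A sketch of cleaning up the leftover image entry from the command line (the id is a placeholder for whatever glance image-list shows for the stray snapshot image):

# find the stray snapshot image and delete it
glance image-list
glance image-delete <image-id-of-the-snapshot>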
if launched to use an image as the boot source
In a home lab environment, with a not necessarily fast network and limited storage on the control (cinder) machine but lots of storage on compute nodes, this is preferable (to me anyway). Boot from image must be manually selected, with the create-a-volume switch manually flicked off; the Newton default is to create a boot volume.
The required image is copied to the remote compute node the instance will be running on and placed in the /var/lib/nova/instances/_base directory on the compute node, where it is used as a backing disk. The actual instance does not modify this image, as it is just a backing disk; changes unique to the instance are simply recorded in the instance's own disk file, as is normal when using qemu disks this way.
The major benefits are that instances boot from local compute server storage, and if you are launching multiple instances using the same image then only that one backing file is needed, as each instance booting from that image only needs disk space for its own changes. And of course no cinder storage is used.
[root@compute2 ~]# ls -laR /var/lib/nova/instances
/var/lib/nova/instances:
total 24
drwxr-xr-x. 5 nova nova 4096 Mar 21 00:48 .
drwxr-xr-x. 9 nova nova 4096 Mar 14 00:13 ..
drwxr-xr-x. 2 nova nova 4096 Mar 21 00:37 4f0d7176-6573-415a-99e4-2846971679e4
drwxr-xr-x. 2 nova nova 4096 Mar 21 00:36 _base
-rw-r--r--. 1 nova nova   53 Mar 21 00:30 compute_nodes
drwxr-xr-x. 2 nova nova 4096 Mar 21 00:36 locks

/var/lib/nova/instances/4f0d7176-6573-415a-99e4-2846971679e4:
total 15024
drwxr-xr-x. 2 nova nova     4096 Mar 21 00:37 .
drwxr-xr-x. 5 nova nova     4096 Mar 21 00:48 ..
-rw-r--r--. 1 qemu qemu    10202 Mar 21 00:40 console.log
-rw-r--r--. 1 qemu qemu 15400960 Mar 21 00:46 disk
-rw-r--r--. 1 nova nova       79 Mar 21 00:36 disk.info

/var/lib/nova/instances/_base:
total 539220
drwxr-xr-x. 2 nova nova       4096 Mar 21 00:36 .
drwxr-xr-x. 5 nova nova       4096 Mar 21 00:48 ..
-rw-r--r--. 1 qemu qemu 3221225472 Mar 21 00:36 afbece9196679001187cff5e6e96ad5425b329e6

/var/lib/nova/instances/locks:
total 8
drwxr-xr-x. 2 nova nova 4096 Mar 21 00:36 .
drwxr-xr-x. 5 nova nova 4096 Mar 21 00:48 ..
-rw-r--r--. 1 nova nova    0 Mar 21 00:36 nova-afbece9196679001187cff5e6e96ad5425b329e6
-rw-r--r--. 1 nova nova    0 Feb  4 20:57 nova-storage-registry-lock
[root@compute2 ~]#

[root@region1server1 ~(keystone_mark)]# nova show 4f0d7176-6573-415a-99e4-2846971679e4
+--------------------------------------+-----------------------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | compute2.mdickinson.dyndns.org |
| OS-EXT-SRV-ATTR:hostname | test-compute2-novolume |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute2.mdickinson.dyndns.org |
| OS-EXT-SRV-ATTR:instance_name | instance-00000014 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-f90oaldt |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2017-03-21T04:39:23.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2017-03-21T04:36:28Z |
| description | test-compute2-novolume |
| flavor | marks.tiny (2a10119c-b273-48d5-b727-57bda760a3d2) |
| hostId | 5afd74a02f0332b995921841d985ba3ea39e9a077acbdb3d8a8e9a55 |
| host_status | UP |
| id | 4f0d7176-6573-415a-99e4-2846971679e4 |
| image | Fedora24-CloudBase (152753e9-0fe2-4cc9-8ab5-a9a61173f4b9) |
| key_name | marks-keypair |
| locked | False |
| metadata | {} |
| name | test-compute2-novolume |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tags | [] |
| tenant-mark-10-0-3-0 network | 10.0.3.22 |
| tenant_id | 325a12dcc6a7424aa1f96d63635c2913 |
| updated | 2017-03-21T04:39:23Z |
| user_id | f833549171d94242b3af5d341b9270de |
+--------------------------------------+-----------------------------------------------------------+
[root@region1server1 ~(keystone_mark)]#
As noted above, the image is copied across and made into a local boot image in the _base directory, where it can be shared as a backing file by all instances that need it. The second command below shows the instance disk… showing it is using the backing file.
[root@compute2 ~]# file /var/lib/nova/instances/_base/afbece9196679001187cff5e6e96ad5425b329e6
/var/lib/nova/instances/_base/afbece9196679001187cff5e6e96ad5425b329e6: x86 boot sector; partition 1: ID=0x83, active, starthead 4, startsector 2048, 6289408 sectors, code offset 0xc0
[root@compute2 ~]# file /var/lib/nova/instances/4f0d7176-6573-415a-99e4-2846971679e4/disk
/var/lib/nova/instances/4f0d7176-6573-415a-99e4-2846971679e4/disk: QEMU QCOW Image (v3), has backing file (path /var/lib/nova/instances/_base/afbece9196679001187cff5e6e96ad542), 3221225472 bytes
[root@compute2 ~]#
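qemu-img gives a slightly clearer view of the same relationship than the file command (which truncates the backing file path); a sketch using the same paths:

# shows the backing file, the virtual size, and how much space the overlay actually uses
qemu-img info /var/lib/nova/instances/4f0d7176-6573-415a-99e4-2846971679e4/disk
qemu-img info /var/lib/nova/instances/_base/afbece9196679001187cff5e6e96ad5425b329e6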
When all instances on the compute node stop using the image in the _base directory it hangs around for a while, presumably in case it is needed to launch other instances using the same image. I find it gets cleaned up within 30 minutes of inactivity.
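From memory the cleanup is driven by nova's image cache settings on the compute node; I have not verified these against my own nova.conf, so treat the option names and values below as assumptions to check rather than gospel:

# /etc/nova/nova.conf on the compute node (sketch, unverified)
[DEFAULT]
# how often the image cache manager task runs (seconds)
image_cache_manager_interval = 2400
# remove _base images no longer used by any instance
remove_unused_base_images = True
# how long an unused _base image is kept before removal (seconds)
remove_unused_original_minimum_age_seconds = 1800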
Snapshots are not stored on the compute node the instance was on. A snapshots directory is created on the compute node but left empty; a snapshot in this scenario is saved as a new “image” file on the main control node. They are not displayed in the project's volume “Snapshots” tab.
They are only displayed in the dashboard as selectable bootable images in the image list (which I do not think they should be, as they need a virt-sysprep at least to be usable I would have thought… and users can lose track of snapshots). But at least in this case no cinder storage is used.
Summary
In a home lab environment do not waste cinder space; launch instances using boot images, not volumes, unless you either intend to keep them around for a long time or have a lot of storage allocated to cinder.
And when you are getting low on space do not delete large files from the filesystem directories :-)
When you do make an oops
Not a troubleshooting section, but if you do accidentally delete the cinder-volumes file the only way to remove the volume entries still known to openstack is directly in the database (replacing the ids with your volume ids of course):
MariaDB [(none)]> use cinder;
MariaDB [cinder]> delete from volume_glance_metadata where volume_id="aecee04f-a1b2-4daa-bdf4-26da7c346495";
Query OK, 8 rows affected (0.05 sec)

MariaDB [cinder]> delete from volume_attachment where volume_id="aecee04f-a1b2-4daa-bdf4-26da7c346495";
Query OK, 1 row affected (0.03 sec)

MariaDB [cinder]> delete from volume_admin_metadata where volume_id="aecee04f-a1b2-4daa-bdf4-26da7c346495";
Query OK, 2 rows affected (0.02 sec)

MariaDB [cinder]> delete from volumes where id="aecee04f-a1b2-4daa-bdf4-26da7c346495";
Query OK, 1 row affected (0.22 sec)

MariaDB [cinder]> commit;
Query OK, 0 rows affected (0.01 sec)

MariaDB [cinder]> \q
Bye
And then recreate the cinder-volumes file, recreate the PV entry to use it (and kill off any stray LV entries); basically, recreate the VG environment by hand. It is not hard, this is not a linux tutorial, and that is not what this post is about :-). I would recommend not deleting it in the first place however.
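Very roughly, the rebuild looks like the sketch below. The file size and loop device number are examples, stale LVM metadata from the missing PV may need extra cleanup, and however packstack attaches the loopback at boot will also need checking, so this is a reminder of the approach rather than a recipe:

# recreate the backing file (20G sparse as an example) and attach it as a loop device
dd if=/dev/zero of=/var/lib/cinder/cinder-volumes bs=1M count=0 seek=20480
losetup /dev/loop1 /var/lib/cinder/cinder-volumes
# clean up any stale metadata from the old missing PV first
# (vgreduce --removemissing and lvremove may be needed here)
pvcreate /dev/loop1
vgcreate cinder-volumes /dev/loop1
systemctl restart openstack-cinder-volume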