Mirroring up two spare internal disks under Fedora29

Having had to rebuild another machine, once it was up and running I had a server with the OS installed on the boot disk and two spare internal disks that used to be in a raid array. While the machine supported hardware raid I chose to set up software raid as it gives me more control, and personally I would rather ‘tweak’ from the command line than at the hardware bios level.

Anyway, rather than reinvent the wheel, there is a very good article at https://www.tecmint.com/create-raid1-in-linux/ that explains how to mirror up two spare disks.

I put a LUKS filesystem on the mirrored disks and mounted it as /home.
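For my own notes, the broad shape of what I did is below; the device names are examples from my machine (yours will differ) and this is only a sketch, the tecmint article has the full detail.

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # build the mirror
cryptsetup luksFormat /dev/md0                                           # put LUKS on the array
cryptsetup open /dev/md0 home_crypt                                      # open it as /dev/mapper/home_crypt
mkfs.xfs /dev/mapper/home_crypt                                          # filesystem of your choice
mount /dev/mapper/home_crypt /home

Plus the usual mdadm.conf, /etc/crypttab and /etc/fstab entries to make it all persistent across reboots.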

Posted in Unix | Comments Off on Mirroring up two spare internal disks under Fedora29

Interesting issue with creating CentOS7 KVM machines on the F29 OS

The interesting thing is that it seems specific to the CentOS7 install media: the installer cannot detect virtio disks. My OS is Fedora 29 fully patched; the CentOS7 install media I was using is CentOS-7-x86_64-Everything-1503-01.iso.

When creating a new KVM instance, whether using virt-manager or the virt-install method, the CentOS7 installer reports it cannot find the virtio disk. I tried both with a pre-created disk and letting virt-manager create the disk; both methods failed. After confirming that both methods failed I used pre-created qcow2 disks and virt-install for all further attempts.
I confirmed it was specific to the CentOS7 install media by changing the install media of a failed no-disk-found install (using virt-manager) to an old Fedora-Server-DVD-x86_64-23.iso I had lying around; that installer found the disk OK. Changing it back to the CentOS7 DVD iso, the installer could yet again not see the same disk (at no time was the disk altered, only the cdrom boot media).

One disclaimer: using virt-install with a pre-created disk did find and use the disk once, but that was once out of over twenty attempts, and as I was trying to create two new KVM servers I eventually had to give up on using either virt-manager or virt-install to create the second instance.

The working solution I found was as follows (the command half is consolidated after the list)

  • create the new KVM instance using either virt-manager or virt-install, it will not find the disk of course
  • use ‘virsh dumpxml’ to dump out the configuration of the new instance, ie: ‘virsh dumpxml newserver1 > newserver1.xml’
  • delete the failed install instance from virt-manager (do not delete the storage) or with ‘virsh undefine’ (‘virsh destroy’ only stops a running instance)
  • use the dumped xml file to re-create the instance (ie: ‘virsh define newserver1.xml’)
  • use virt-manager to mount the CentOS7 install DVD image again and add the cdrom to the boot order, start the instance, the CentOS7 installer now finds the virtio disk and the install can be performed
  • after the install, remove the cdrom from the boot order, it is not needed any more
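For reference, the command half of that list boils down to the below; the instance name is an example.

virsh dumpxml newserver1 > newserver1.xml   # save the generated configuration
virsh undefine newserver1                   # remove the failed instance definition (storage is untouched)
virsh define newserver1.xml                 # re-create the instance from the saved xml

Then re-attach the CentOS7 iso and the cdrom boot order via virt-manager and start the instance to run the install.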

I have not come across this issue before as all my KVM instances would have been built under F27/F28 or earlier, where the CentOS-7-x86_64-Everything-1503-01.iso media had no problems finding the disk during installs.

I was fairly certain the xml define method would work as I had just rebuilt the server (after a HD crash) and had used the ‘virsh define’ method to recreate the KVM instances that were there originally (yes, take backup xml files with each disk image backup). That worked fine for the existing CentOS7 instances… until I decided to create another two new instances and hit the issue that I could not create them because new instances could not find the disk(s) assigned.

Frustrating, but I will not examine it further, as I seldom need CentOS images and have worked around the issue.

Posted in Unix | Comments Off on Interesting issue with creating CentOS7 KVM machines on the F29 OS

OwnCloud server on Fedora 29, hmm

I used to run OwnCloud on Fedora, and have finally been able to get it working again.

The environment is a Fedora29 server running apache.

The main issue is… you cannot get the owncloud server running using the packages in the official Fedora repositories. I think from memory those stopped working around F27, when some renaming went on and the package became not-owncloud and then owncloud again between releases; simply put, you will never get it running using the official Fedora repositories.

I tried deleting all packages associated with OwnCloud and re-installing from scratch from the F29 repositories, but still ended up with a non-working installation; the server was returning PROPFIND errors. Basically the Fedora29 repositories do not provide a working OwnCloud server.

It should however be noted that the owncloud-client packages from the Fedora29 repositories do provide a working desktop client; it is only the server files that need to be installed from outside the Fedora repositories.

Solution, use the official OwnCloud server repository

These are the steps needed to get a working server.
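The repository setup itself just follows the instructions published on the ownCloud download page for your release; I am not reproducing the exact URL from memory, the index page at https://download.owncloud.org/download/repositories/stable/owncloud/index.html has the current one, so treat the .repo location below as a placeholder.

# take the exact .repo URL for Fedora 29 from the index page above
wget -O /etc/yum.repos.d/owncloud.repo "<repo-url-for-Fedora_29-from-the-index-page>"
dnf -y install owncloud-files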

The package owncloud-files from the owncloud repository installs the OwnCloud files into /var/www/html/owncloud, and will use SQLite for the database.

The installation location is where you would expect it to be for a vanilla Fedora website setup where DocumentRoot is set to /var/www/html. I of course do not have my DocumentRoot configured to point there, so to get it working for my site I simply added the following as a new file, /etc/httpd/conf.d/owncloud.conf, so the /owncloud URL could find the installed files.

Alias /owncloud /var/www/html/owncloud
<Directory /var/www/html/owncloud>
<IfModule mod_authz_core.c>
# Accessible to all machines in your network
# Replace with "Require local" if you want access to be restricted
# to this machine.
Require all granted
</IfModule>
</Directory>

I left the database as SQLite: from the documentation I couldn’t figure out how to get mariadb set up as the database, so had to use the default SQLite setup. I had previously used mariadb so I know it is possible, and I will have to revisit that later (the documentation does contain a database conversion section, I’ll see how it works later); in the meantime I have just dropped the existing mariadb owncloud database I had.
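When I do revisit it, the documentation suggests the conversion is done with the occ utility along the lines of the below; this is a hedged sketch I have not run myself, and the database name, user and host are examples.

cd /var/www/html/owncloud
sudo -u apache php occ db:convert-type --port 3306 mysql owncloud localhost owncloud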

It all looked OK; pointing a web browser at the http://xxx.xxx.xxx.xxx/owncloud location I was able to add the new admin user, then add a new personal group and personal userid.

The good news for Fedora users is that the owncloud-client packages in the Fedora29 repositories do work, and I was able to connect the desktop client to the new OwnCloud server installation. Which is when this got frustrating.

Issues or feature changes found

  • configuring a directory to synchronise, /home/mark/owncloud, does not synchronise the directory as expected
    • in the current version it does not synchronise the entire directory, only Documents and Photos (theoretically; I have not seen it working). I did manually add another subdirectory to the synchronisation list via the desktop interface, which appeared on the server, but no synchronisation occurred for the files in that directory
    • the previous version’s behaviour was that anything added under the configured client directory /home/mark/owncloud automatically got synchronised to the server
    • as noted above, files and directories added to the configured directory on the desktop were not getting synchronised to the server, not even from the new subdirectory I had manually configured for replication
    • this synchronisation issue was resolved on one desktop by rebooting the desktop to get a clean restart of the owncloud client, even though it was a completely new connection/account that had just been connected.
      By resolved I should point out the populated directory on the desktop took its synchronisation direction from the server, and the desktop ended up with an empty directory; but when copying files back into that directory on the desktop side (always backup before playing) the files added into my manually configured additional directory were finally replicated to the server
    • a second desktop also had to be rebooted before it started synchronising correctly with the configured directories; does this mean that if the server is taken down for any reason all desktops have to be rebooted ? An important point here is that the manually added directory was already populated on this second desktop and it kept its contents, only checking for changes from the server; so I guess on any new install/upgrade, populate the server side directories before attaching a client
  • And one other minor point: using the desktop interface on both desktops, clicking down into subdirectories shows the same contents and total filesizes, but the top level directory list shows different sizes (ie: on desktop one a subdirectory shows 29MB used but the top level shows only 22MB used; desktop two shows correct sizes). It doesn’t seem to affect usability
  • It has a list of files to exclude from synchronisation. This includes pretty much anything starting with a ‘.’, which is a good default but unfortunately excludes things like .gitignore; the wildcard patterns could be updated (to allow .gitignore for example), or all exclusions removed, but that would have to be done on every desktop client. For synchronising normal documents this is not an issue

Does it work as expected

Deleting a test file from Documents on one desktop did result in it being deleted from the second desktop, creating a new file does result in it being replicated to the other desktop, changing the contents of the file does result in the newer file being replicated to the second desktop; so yes it works as expected.

It should be noted that with a small testfile the replication bouncing through the server takes less than 30 seconds, so the owncloud desktop clients must be polling the server fairly often and a dedicated web server should probably be used; frequent updates to large files may cause an issue if many desktops are configured. It is also not recommended to use SQLite for large environments, so you will need to read the manuals on how to use mariadb a bit more carefully than I have so far.

For myself I have started using it again for files I want to ensure I have more than one safe copy of but do not want to replicate via the public internet using dropbox.

Update 18 June 2019
An F30 server package for owncloud-files is now available at https://download.owncloud.org/download/repositories/stable/owncloud/index.html; however it seems to be identical to the F29 version and is incompatible with the php version shipped by default with F30. I am still using the Fedora client packages; there are ISV packages for F29 clients at https://software.opensuse.org/download/package?project=isv:ownCloud:desktop&package=owncloud-client but no F30 client packages there yet. The error returned when the client connects to the server is

This version of ownCloud is not compatible with PHP 7.3
You are currently running PHP 7.3.6.

So if you need ownCloud do not upgrade beyond F29, for a while anyway. Also note that WordPress recommends php version 7.3 or higher, and as you can see I use WordPress, so I will not even be investigating downgrading php to get ownCloud working on F30.
Update 20 Nov 2019
There is now a working F30 version available at https://download.owncloud.org/download/repositories/production/owncloud/ should you still be on F30 (version 10.3.1-2). However the F30 OS is out of support next month and there is no F31 package. You cannot upgrade from a previous version, so you must re-install for F30, which I have done anyway, and it works.
There is a package available for CentOS8, and due to the rapid releases of Fedora (where 3rd party repos just cannot keep up) I will be converting my webserver from Fedora to CentOS, so there are unlikely to be further updates here.

Posted in Unix | Comments Off on OwnCloud server on Fedora 29, hmm

F29 and file indexers, how many is too many ?

I use my own home grown monitoring tools on my servers and they picked up something interesting. One of the checks my tools run is to look for processes that are growing in memory size, or are using more memory than I would expect.

They identified a couple of programs, baloo_file and tracker-store. A ps on processes containing tracker gave tracker-store, tracker-miner-fs and tracker-miner apps, so I thought I would see what they were.

Guess what, they are both file indexer programs. Why Gnome under F29 comes with two separate file indexers I do not know, but it seems like a waste of CPU resources and, as I detected them when their running memory footprints raised warnings in my monitoring toolkit, a waste of memory as well.

There is documentation available for tracker at https://wiki.gnome.org/Projects/Tracker and it indicates that it is tightly tied to the gnome desktop, so I have decided to leave that alone for now as it also had the smaller memory footprint.
The baloo documentation is at https://community.kde.org/Baloo and indicates it is used by the KDE plasma desktop.

I have both desktops installed and available via the logon session switcher menu, as it took me a while to find a desktop that worked correctly with synergy so I had to try quite a few. That may explain why both are installed, but annoyingly it means that regardless of what desktop I choose both of them start running at user login.

I also do not know how useful they are, as they are apparently used by applications that hook into them and have no direct command line interface to search (that I can see anyway), so a simple grep and my PDF text search utility are still more useful to me than these indexers. They may be useful to users that use file managers and are afraid of the command line however.

As baloo_file had the larger (by far) memory footprint that is the one I decided to get rid of, as I am not using the plasma desktop but the KDE fallback one. The result of that exercise is that you cannot get rid of it, as it is tied to the installed plasma desktop. It is theoretically possible to disable it (although I have not rebooted to see if it stays disabled) using the following steps… on a per user basis, as baloo is provided to index files under the user’s home directory and each user has their own configuration file. Fortunately on my machines I am the only user.

(1) vi $HOME/.config/baloofilerc and add to the end of the file the line
Indexing-Enabled=false

(2) stop and disable the baloo process using the balooctl commands as below

[mark@vmhost3 ~]$ balooctl status
Baloo File Indexer is running
Indexer state: Idle
Indexed 273 / 273 files
Current size of index is 32.33 MiB
[mark@vmhost3 ~]$ balooctl stop
[mark@vmhost3 ~]$ balooctl disable
Disabling the File Indexer
[mark@vmhost3 ~]$ balooctl status
Baloo is currently disabled. To enable, please run balooctl enable
[mark@vmhost3 ~]$

It is not possible to remove the baloo packages themselves as they are tied to the plasma desktop

[root@vmhost3 jobdata]# rpm -qa | grep -i baloo
kf5-baloo-5.52.0-2.fc29.x86_64
kf5-baloo-file-5.52.0-2.fc29.x86_64
kf5-baloo-libs-5.52.0-2.fc29.x86_64
baloo-widgets-18.08.1-1.fc29.x86_64
baloo-libs-4.14.3-21.fc29.x86_64
[root@vmhost3 jobdata]# rpm -e kf5-baloo-5.52.0-2.fc29.x86_64 \
   kf5-baloo-file-5.52.0-2.fc29.x86_64 \
   kf5-baloo-libs-5.52.0-2.fc29.x86_64 \
   baloo-widgets-18.08.1-1.fc29.x86_64 \
   baloo-libs-4.14.3-21.fc29.x86_64
error: Failed dependencies:
	kf5-baloo is needed by (installed) plasma-workspace-5.13.5-1.fc29.x86_64
	libKF5Baloo.so.5()(64bit) is needed by (installed) gwenview-1:18.04.3-1.fc29.x86_64
	libKF5Baloo.so.5()(64bit) is needed by (installed) gwenview-libs-1:18.04.3-1.fc29.x86_64
	libKF5Baloo.so.5()(64bit) is needed by (installed) dolphin-libs-18.08.1-1.fc29.x86_64
	libKF5Baloo.so.5()(64bit) is needed by (installed) dolphin-18.08.1-1.fc29.x86_64
	libKF5Baloo.so.5()(64bit) is needed by (installed) plasma-workspace-5.13.5-1.fc29.x86_64
	libKF5Baloo.so.5()(64bit) is needed by (installed) plasma-desktop-5.13.5-1.fc29.x86_64
	libKF5BalooWidgets.so.5()(64bit) is needed by (installed) dolphin-libs-18.08.1-1.fc29.x86_64
	libKF5BalooWidgets.so.5()(64bit) is needed by (installed) dolphin-18.08.1-1.fc29.x86_64
	libbaloopim.so.4()(64bit) is needed by (installed) knode-libs-4.14.10-38.fc29.x86_64

The tracker one I have left running for now as its memory footprint, while enough to exceed my alerting threshold, was tiny compared to baloo. At some point I may get around to seeing if it is useful; for now tracker-store has just had its threshold adjusted up and been added to my memory growth monitoring to see if it is actually growing or is fairly static.

Update 18 June 2019, how to disable the tracker processes
To stop Tracker running on F30 (a list of services can be found with ‘ls /usr/lib/systemd/user/tracker*’)
Logged on as your own userid

cd ~/.config/autostart
cp /etc/xdg/autostart/tracker*desktop .
for FILE in $(ls tracker*); do echo "Hidden=true" >> $FILE; done
systemctl --user disable tracker-store.service
systemctl --user disable tracker-miner-fs.service
systemctl --user stop tracker-miner-fs.service
systemctl --user disable tracker-extract.service
systemctl --user disable tracker-miner-rss.service
systemctl --user disable tracker-writeback.service

reboot to confirm they are all stopped; ‘tracker reset -r’ to delete all existing indexes; check all are disabled

[mark@phoenix posts]$ tracker status
Currently indexed: 0 files, 0 folders
Remaining space on database partition: 44.1 GB (56.42%)
All data miners are idle, indexing complete

[mark@phoenix posts]$ tracker daemon
Store:
18 Jun 2019, 09:27:05:    0%  Store                   - Idle 

Miners:
18 Jun 2019, 09:27:05:  ?     RSS/ATOM Feeds          - Not running or is a disabled plugin
18 Jun 2019, 09:27:05:  ?     File System             - Not running or is a disabled plugin
18 Jun 2019, 09:27:05:  ?     Extractor               - Not running or is a disabled plugin
Posted in Unix | Comments Off on F29 and file indexers, how many is too many ?

Openstack Queens, my latest install from RDO

This post is based on the “queens” release of OpenStack available from the RDO distribution site.

It is based upon the documentation available at https://www.rdoproject.org/install/packstack/, but modified to install the services I wish to play with which are not installed by the default allinone packstack install, plus I need to add a second compute node.

It is also important to note this post is from 2018/07/25 and the environment is constantly changing, for example

  • at 2018/05/15 trying to install container support made the environment un-usable, but now it installs
  • at 2018/05/15 there was a lot of manual effort needed to get console support for instances on the second compute node, now console support on the second compute node works ‘out of the box’

The differences between the two dates were discovered because I had to re-install, oops :-(. Immediately prior to that re-install I also found there was no reliable documentation on how to use mariadb commands to delete an obsolete compute node; the latest instructions seem to be those that worked well for ocata but not for queens… although the actual error was no cell entries found on a compute node after a new compute node did appear to successfully autoconnect/discover. Anyway, I re-installed.

Packstack by default builds an all-in-one environment; if that is what you want just follow the documentation link mentioned above. It will not install container or heat support, so if you want those you may want to read on to see how to change the answers file anyway.

This post is because originally I wanted to investigate how instances launched on different compute hosts could communicate with each other in their private network range using OpenVSwitch… which obviously needs at least two hosts. And as time goes on I find I need the second compute node, but that is way at the end of the post.

Originally I used OpenVSwitch across a second ethernet device on each server using a non 192.168.1.0/24 network… which worked OK but added another level of complexity and was not really required; so for simplicity this walkthrough uses only one network card in each server, with both cards on my internal 192.168.1.0/24 network. I mention this as a lot of documentation recommends using a private network for OpenVSwitch to communicate between launched cloud instances, and if you have a large lab by all means do so, but I am simplifying my home network and this post reflects that… it also makes the post a lot smaller :-).

You will also note during this walkthrough that there is a lot of rebooting, editing of the configuration file, and rerunning of the install; this is required. Using packstack to generate an answers file and attempting to customise that answers file to set up everything you want in one install step will consistently fail and require restarting from the ‘create two new VMs’ step.

This post covers a successful install performed on a standard intel 4x dual core (8 threads) desktop with 32Gb of memory and a 1Tb spinning disk, which is more than enough. The two KVM instances are allocated a total of 23Gb between them, which is enough for a well responding working installation. A desktop with these specs can easily be obtained from a custom PC builder for under $1200NZ if ordered without an OS (ie: no expensive windoze) and you install linux.

Step 1 – Build two CentOS7 VMs

You need to create two (or more) CentOS7 virtual machines, both with the minimum package install to keep the footprint small.

  • region1server1 : static ip-address, 15Gb ram (minimum), virtual disk a minimum of 50Gb (a lot larger if you will be taking snapshots). This will be used as the control and network node server
    as it is the control node it will also host all the disk images to be deployed and the snapshots, plus if you are using swift storage packstack will create a 20Gb (default) filesystem on this disk, so make it larger not smaller :-).
  • region1server2 : static ip-address, 8Gb ram (more if you plan to run a lot of instances), virtual disk a minimum of 30Gb or more. This will be the compute node.
    As it is a virtual machine you can easily shut it down and reconfigure it with more memory later if you need… but the disk space allocated will need to be enough to hold the virtual disk images of the machines you launch on that compute node so you may want to bump disk space up.
  • On both you should allocate swap space; I use swapfiles so they can easily be adjusted (a minimal swapfile sketch follows this list). However, from experience, once swapping starts to occur everything will grind to a halt, so use swap usage as an indicator only… if it is used, allocate more real memory to the VMs as soon as possible.
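A minimal swapfile sketch for reference; the size and path are examples, adjust to taste.

dd if=/dev/zero of=/swapfile bs=1M count=2048    # create a 2Gb swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap defaults 0 0' >> /etc/fstab    # make it permanent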

Note that I am using a 192.168.1.0/24 network with 192.168.1.172 as region1server1 (all services) and 192.168.1.162 as region1server2 (a dedicated compute node). The external gateway for both must be a valid gateway as you need the two servers to access the internet to download packages.

At this point just check the configuration files /etc/sysconfig/network-scripts/ifcfg-eth0 (or ifcfg-en0 depending on what sort of network adapter you configured). You should expect to see something like the below.

On server1

TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="eth1"
HWADDR=52:54:00:34:3A:E7
DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.1.172
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
GATEWAY=192.168.1.1
DNS1=192.168.1.172
DNS2=192.168.1.1

On Server2

TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
NAME="eth0"
HWADDR=52:54:00:7F:56:85
DEVICE="eth0"
ONBOOT="yes"
DNS1="192.168.1.172"
DNS2="192.168.1.1"
IPADDR="192.168.1.162"
PREFIX="24"
GATEWAY="192.168.1.1"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_PRIVACY="no"

You need to switch from using NetworkManager to network (as that is documented as needing to be done for the allinone install), and while the servers are still at a minimal software install this is probably a good time to ensure all the installed packages are up to date. Note that switching is recommended by the install documentation, yet the packstack install does seem to specifically check for and configure NetworkManager to be enabled, so a bit of a conflict there; do it for the install anyway.

Also note that firewalld must be disabled. OpenStack uses some rather complex iptables rules and we do not want firewalld getting in the way. net-tools and fuser I always install for trouble-shooting.

As the root user

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl enable network
yum -y update
yum -y install net-tools psmisc    # for ifconfig and fuser
yum -y install screen              # needed if running packstack on a non-console session

This would be a good time to reboot both servers, both to pick up the latest packages just installed and to make sure the interface on each server comes up with the correct address now you are using “network” instead of NetworkManager; especially as a rather major change to the network configuration will be made a little later and you need to be confident both virtual servers are networked correctly before getting to that point.

After rebooting, redo the steps to disable network manager, because the update re-enabled NetworkManager which we do not want.

systemctl disable NetworkManager
systemctl enable network

Step 2 – Add RDO repositories and configure OpenVSwitch

OpenVSwitch is not available from the CentOS7 repositories and needs to be installed from the RDO repositories, which is why the network configuration changes needed for OpenVSwitch are done at this step and not in the prior step when the servers were built. We also want to pick up any packages from the RDO repository that may replace the CentOS7 ones.

As root on both virtual servers, and yes you do need the yum update again as the openstack repository may need to replace CentOS packages.

yum install -y centos-release-openstack-queens
yum -y update
yum -y install openvswitch

Now, only on the region1server1 server, cd to /etc/sysconfig/network-scripts. You will replace the ifcfg-eth0 shown above with an updated ifcfg-eth0 and a new ifcfg-br-ex (for the openvswitch bridge). Edit/create the files on the first virtual server similar to the below… using your machine’s HWADDR of course.

The ifcfg-eth0 will now look like

DEVICE="eth0"
BOOTPROTO="none"
TYPE="OVSPort"
OVS_BRIDGE="br-ex"
ONBOOT="yes"
DEVICETYPE=ovs
HWADDR=52:54:00:34:3A:E7

The ifcfg-br-ex will look like

TYPE="OVSBridge"
DEVICE="br-ex"
BOOTPROTO="static"
DEVICETYPE=ovs
ONBOOT="yes"
IPADDR="192.168.1.172"
PREFIX0="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.172"
DNS2="192.168.1.1"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
USERCTL=yes

After the changes reboot the region1server1 first virtual server; the br-ex interface will now have the server’s ip-address assigned and be bound to eth0.

[root@region1server1 ~]# ovs-vsctl show
dc3364e6-b4bc-4702-9df0-48cb60e8abcd
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.9.0"

Also reboot the second virtual server as we need it to be restarted with the openvswitch package available for use.

The br-ex interface only needs to be created on the first server as that is the server that will be providing the networking functions to all the compute nodes.

Step 3 – Add hosts entries

This is a separate step to highlight that it must be done unless your two new virtual servers are in a DNS zone. When packstack installs rabbitmq it must be able to resolve an ip-address to a hostname or the install will fail. I have been caught out by that many times.

In the /etc/hosts file on both virtual servers add an entry for the host and the second host; server1 and server2 must be able to resolve their own names and the other server’s name. Use fully qualified domain names, not short names.

That’s so obvious to edit I won’t waste space pasting an example here.

Step 4 – Install packstack, build allinone, then use answers file, repeatedly

The actual packstack commands should be run from a screen bash session (yum -y install screen; screen bash): if you need to run packstack more than once, its activities reset the network connection, so if you are running it from a plain ssh session the command will be killed. Using screen you can just ssh back into the server and use “screen -r” to re-attach to the running packstack command session.

Also, I have found that the changes to the configuration file do need to be done one at a time, with reboots between them. While it seems reasonable that you could just change all the values in the configuration file in one edit and then run packstack, from experience that will not work. The steps below do work, for me anyway.

On the first server only we need to install packstack and build the allinone environment using all the defaults.

As root on the first server only

yum install -y openstack-packstack
packstack --allinone

This should complete successfully, when done reboot.

Log back in; in the root home directory there will be an answers file created, named similar to packstack-answers-20180514-162925.txt. Copy that aside to something like answers1.txt, which will be the configuration file we use from this point.

Edit the file, set CONFIG_HEAT_INSTALL to y

As root

packstack --answer-file=answers1.txt

This should complete successfully, when done reboot.

Log back in and edit your answers file again

Change the CONFIG_COMPUTE_HOSTS entry. We will be using our second server as an additional compute host, so I change CONFIG_COMPUTE_HOSTS=192.168.1.172 to CONFIG_COMPUTE_HOSTS=192.168.1.172,192.168.1.162; and yes, I do want the control host to be included as a compute host at this time.

As root

packstack --answer-file=answers1.txt

This should complete successfully, when done reboot both VMs.

Log back in and edit your answers file again

Change the admin password entry to something you can remember, ie: CONFIG_KEYSTONE_ADMIN_PW=password
Change the mariadb password likewise, ie: CONFIG_MARIADB_PW=password

As root

packstack --answer-file=answers1.txt

This should complete successfully, when done reboot both VMs.

Edit /etc/neutron/plugins/*/ml2_conf.ini, change mechanism_drivers=openvswitch to mechanism_drivers=openvswitch,l2population then reboot both servers again.
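For clarity, the relevant stanza in ml2_conf.ini ends up looking like the below (the file lives under the /etc/neutron/plugins directory; the section name is as in the shipped file).

[ml2]
mechanism_drivers=openvswitch,l2population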

And it is done, you have a working system with two compute nodes with heat installed… but no container infrastructure.

As of 2018/07/25, change CONFIG_MAGNUM_INSTALL=n to =y in the answers file and rerun the packstack command to install container support. (Important: at 2018/05/15 installing container support at any point using packstack consistently caused the install to fail and both VMs to be basically un-usable… this now appears to be fixed, but try at your own risk).

Step 5 – Customise, using the dashboard to make it easier

Log onto the dashboard as the admin user using the password you set in the configuration file, or if you did not set one it will be in the file keystonerc_admin in the root directory of the server you ran the packstack install from.

  • only one external network seems to be supported. Use the dashboard to display (and note down) the details of the external network created by the default “demo” environment (specifically the external device name, normally “extnet”)… which is a 172.something network which we don’t want anyway
  • you cannot delete that external network while it is referenced anywhere; delete the demo user, demo project, the network router, then the external network.
  • Then create a new external network (external network and shared must be ticked, actually tick all boxes), in my environment that is a flat network using the “extnet” with no dhcp allowing an ip range of 192.168.1.240,192.168.1.250 (for floating ips) which is outside the range used by my physical network currently.
  • Add a new user for yourself, as part of adding the new user you are also able to create a new project for the user. The user must be an admin user if it is to be permitted to associate floating ips to instances

Log onto the dashboard as the new user you have created.

  • create a new non-shared network for that tenant user (ie: 10.0.1.0/24 with an ip range of 10.0.1.10,10.0.1.250), ensuring you do not use the full range of .1 to .254 or network creation will fail as it needs a few addresses free for the gateway etc. (CLI equivalents for these dashboard steps are sketched after this list)
  • create a router for that tenant user attached to the “shared” external network you created above (if it is not visible as an option you forgot to make it shared)
  • examine the router details and under the interfaces section select add a new interface, attach your new personal network you have just created
  • Under the network tab there will be an option for security group rules, create a new rule to allow ssh and all ICMP inbound. When you launch instances if you need to troubleshoot them (or want to ssh into them or ping them) you can add this rule as an additional rule when you launch an instance. Note: rules can be added/removed from running instances if you do not want to do this at instance launch time
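If you prefer the command line to the dashboard, the equivalent tenant network steps with the openstack client look roughly like the below; this is a sketch, the network, subnet and router names are examples, and you need to source the keystonerc file for your new user first.

openstack network create mynet
openstack subnet create --network mynet --subnet-range 10.0.1.0/24 \
    --allocation-pool start=10.0.1.10,end=10.0.1.250 mynet_subnet
openstack router create myrouter
openstack router set --external-gateway external_network myrouter   # the shared external network created above
openstack router add subnet myrouter mynet_subnet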

At this point everything should just work for you… with one extra step to be aware of that was mentioned above. You may launch an instance, assign a floating ip, and not be able to ping it from your external network. That’s normal, the default network security group rules do not allow inbound traffic. You will need a custom security group like that mentioned above to permit that.

Additional customisations I always do

  • Launch an instance as my tenant user to use as a gateway (I use a F25 cloud image as it only needs a 3Gb disk and 512Mb memory). It will launch by default on the main region1server1 (15Gb) machine, which openstack thinks has the most free memory. Then go to the admin/hypervisor option and disable the region1server1 compute node, leaving only the second compute node available for new instances. The reason for that is that region1server1 will now have used almost all of its 15Gb of memory (top shows me 440k free on that machine now) but openstack does not take into account anything other than instances launched by itself, so it still thinks there is 14Gb+ free on that server to launch instances into, which will obviously cause problems if left enabled. I want my gateway on that server as that server provides the networking to the other compute nodes, so it is the logical place to put it; if the region1server1 server is down no instances could be accessed anyway
  • assign a floating ip address to that gateway instance. For linux users the reason is obvious: on all linux desktops that may want to work with instances, just add a route to that gateway. For example, if the floating ip assigned was “192.168.1.246” then the command “route add -net 10.0.1.0/24 gw 192.168.1.246” would allow those linux desktops to ssh (or anything else a port was opened for) to instances launched on my 10.0.1.0/24 network… simply put, all instances on the 10.0.1.0/24 network can be accessed via that one floating ip address, which is why the floating ip address range assigned in the examples above was so small; for my personal home use I only need one, maybe two
  • if you use nagios: as the root user on both CentOS7 VMs run “yum -y install epel-release” and then “yum -y install nrpe nagios-plugins-all”, then “systemctl enable nrpe”; then create a script that runs after all the openstack services have started and created their iptables rules to add a rule allowing your nagios server to run nrpe checks on the two VMs (the commands are collected after this list)
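Collected, those nagios/nrpe setup commands are (5666 is the default nrpe port):

yum -y install epel-release
yum -y install nrpe nagios-plugins-all
systemctl enable nrpe
# run this one after all the openstack services have started and created their iptables rules
iptables -I INPUT -p tcp --dport 5666 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT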

Known remaining issues after install

  • when trying to setup networks via the web interface the availability zone “nova” seems to be hard coded as the only option even if additional availability zones have been created; this is a change as in Ocata (the last release I had stable) manually created availability zones could be used
  • when creating users please note that only users created with the “admin” role can associate floating ips with instances, this is a change from earlier releases. There is discussion on whether this is a bug or lack of documentation at https://bugs.dogfood.paddev.net/horizon/+bug/1394034 should you be curious
Posted in OpenStack | Comments Off on Openstack Queens, my latest install from RDO

The Jan/Feb updates to Fedora 27 really broke VMs

My specific environment is that the VM machines I am having issues with are all CentOS7 or Fedora27 VM guests running on Fedora27 host machines and all are managed by virsh.

I now have a novel and very irritating problem on the servers I use for VM hosts, probably specific to RedHat/CentOS/Fedora, which have no available setting to limit the amount of memory assigned to IO cache, so the OS will use all available memory for cache if it can.

The problem which has appeared since I last upgraded the kernel is that normal background disk IO being placed into memory cache now seems to take priority over running active processes wanting to use memory.

The visible symptoms are

  • KVM virtual machines are being swapped out to disk swapfile space, to the point they stop responding completely, even to ping
  • Processes running on the host machine are being killed by the operating system due to lack of memory
  • 50% of the real memory on the machine is allocated to disk IO cache

Excessive cache usage for IO is nothing new; I investigated it long ago and found there is no way of controlling the percentage of memory used for cache on RedHat based systems, to the point I had to include a daily “/usr/sbin/sysctl vm.drop_caches=3” command in cron to flush some of that cache.

However, until a few months ago I could run five VMs on my main system indefinitely… now I can only run three VMs, and after two to three days one or more of them is moved far enough into swap space that it just freezes; not even a “virsh reboot” command can recover it… plus of course there is the additional complication that the OS on the VM host starts killing off some running processes, which is also new.

I have altered my webserver instance to use 1Gb of memory instead of the 2Gb it was originally using, to see if the host can keep it responding for longer.

On a host with 8Gb of memory, the entire host was rebooted yesterday and I have already had a VM freeze. It is now running three VMs assigned a total of 3.5Gb of that memory (actually using 3.5Gb of real memory; two of them combined are also now using 100Mb of swap, and the third I have just had to restart again a few minutes ago so hasn’t swapped yet), with 4Gb of memory used by “buff/cache”. So running processes are moving into swap space in preference to IO being flushed from cache… in an ideal world the OS would flush cache to avoid swapping out running processes; RedHat based systems do not do that however, or at least do not do it on demand when memory is needed. I assume they do a reclaim periodically, but that is no use at the time it is needed.

My test machine, a host with 32Gb of memory and 23Gb assigned to two VMs, seems better behaved in that “buff/cache” is using 3Gb (I have seen it over 10Gb often) but it is needing to use 4Gb of swap (yes, the qemu-system-x86 processes are using 3Gb of swap combined between them as well as 25Gb of real memory; these have been running 11 days), which should not happen.

Anyway, this post is because this issue of VMs completely freezing is new behaviour that has occurred only during the last three months; and it annoys me.

I now have the “/usr/sbin/sysctl vm.drop_caches=3” running every two hours from cron in the hope it will keep the VMs running longer, even if it does cause a major system slowdown while cache is being flushed.
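For reference the crontab entry is just something like the below, schedule to taste.

# flush page cache, dentries and inodes every two hours
0 */2 * * * /usr/sbin/sysctl vm.drop_caches=3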

Posted in Unix | Comments Off on The Jan/Feb updates to Fedora 27 really broke VMs

New Fedora quirks, SAR and the logger program

SAR

Noticed SAR had not been collecting statistics for a while; it looks like it actually stopped around the time I upgraded to F26 rather than being an F27 issue.

Systemctl showed the sysstat.service was running, just not producing any data. There were old sarNN files in /var/log/sa but not current sarNN files, and no saNN files.

A bit of googling and a wild guess… deleting all the old files in /var/log/sa and a “systemctl restart sysstat” seems to have fixed that; at least there is a saNN file being written for today.
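For reference the wild guess amounted to the below; obviously this throws away the old statistics.

rm /var/log/sa/*
systemctl restart sysstat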

The logger program in F27 only works for the root user

Running logger as anybody other than root fails to log any messages (note selinux is permissive and there are no audit denies anyway). Running as root logs OK.

This is new behaviour. Some of my system health check scripts use logger to report OK, and the OK messages are checked for by nrpe plugins (via nagios/nrpe). The issue was picked up when the nrpe checks were unable to find any matching messages in /var/log/messages, as logger was just not writing them… and of course, like anything written by a sensible user, the check scripts do not run as root.

My initial plan for a quick fix was to just turn on the SUID bit on logger and look into the issue later, as it is something that should be available to any user that wants it… but the SUID bit could not be set !?!?!. As seen below, no change (in hindsight “chmod o+s” is a no-op anyway; the SUID bit is set with “u+s”, though a setuid-root logger would be questionable).

[root@vosprey2 ~]# which logger
/usr/bin/logger
[root@vosprey2 ~]# cd /usr/bin
[root@vosprey2 bin]# ls -la logger
-rwxr-xr-x. 1 root root 49616 Sep 22 20:37 logger
[root@vosprey2 bin]# chmod o+s logger
[root@vosprey2 bin]# ls -la logger
-rwxr-xr-x. 1 root root 49616 Sep 22 20:37 logger
[root@vosprey2 bin]# 

So my current workaround is to add /usr/bin/logger to the sudoers file for my userid, as that is the userid my cron jobs run under, and to alter the scripts that use logger to use sudo /usr/bin/logger.

Defaults:mark !requiretty
mark vosprey2=NOPASSWD: /usr/bin/logger
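With that in place the health check scripts just prefix the call with sudo, along the lines of the below; the tag and message text are examples.

sudo /usr/bin/logger -t hostcheck "hostcheck OK"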

Not ideal, but it will do until I figure out why non-root users can no longer use logger.

Posted in Unix | Comments Off on New Fedora quirks, SAR and the logger program

Interception of Skype calls ?

Interesting article on The Register about Microsoft being fined for not being able to intercept skype calls back in 2012. Microsoft’s appeal failed and the fine stands.

The reason Microsoft was unable to intercept the skype calls for the law enforcement agencies was that skype in those days used a peer-to-peer model where Microsoft itself was not involved in the communication once it started.

It also explains why Microsoft has since redesigned the Skype product so all communication data must now flow through Microsoft owned servers; it can only be to allow it to intercept/record skype conversations when required by law enforcement to avoid further fines, or whenever it feels like it, as companies tend to change terms and conditions as needed to suit themselves these days.

There can be no reason other than the ability to intercept the calls that would require the redesign to pass all traffic through microsoft servers, as it obviously adds extra network hops and results in degraded performance, even assuming the network components in those additional hops are not already having their throughput impacted by other telco monitoring.

While I’m sure the call contents are supposedly encrypted, it is now ideally positioned for the day when the US govt makes backdoors mandatory.

But it is a free service for most people, and nobody is forcing people to use skype. If you use a service (even a paid one) from a US company, interception and recording plus ads should be accepted as a fact of life. Just don’t use any of them for anything confidential.

Posted in Uncategorized | Comments Off on Interception of Skype calls ?

Upgrade to Fedora27 issues so far

These are the issues I had personally; as every user has a different setup your mileage may vary :-). They are covered in detail below.

  • Desktop upgrade does not work, no big deal (workaround works)
  • Network manager misbehaves (unsolved)
  • Bacula database tables must be created from scratch, the table upgrade script does not result in a usable database (recreating all tables works)
  • mariadb-libs package did not upgrade correctly. After a “dnf reinstall mariadb-libs” the package appears to be installed correctly, but the library files are still causing errors (unsolved)
  • nrpe plugins, as always some stop working (unsolved, have to write replacements)
  • further things yet to discover :-)

Desktop users have to do it from the command line as well

One of my VM hosts has a full Gnome install, and it popped up the Fedora27-now-available upgrade window. So I used it; after the reboot it said it was upgrading, but it restarted still on F26… and popped up the upgrade window again, which I used again; it said it was upgrading, and after the reboot it was still on F26.

So I stopped wasting time and used the command line method with the dnf system-upgrade commands, which successfully upgraded to F27.

NetworkManager will always start

I had network issues on one host, so I disabled NetworkManager; but with NetworkManager disabled the blasted thing starts anyway.

[root@vmhost3 ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: active (running) since Sun 2017-11-19 10:13:36 NZDT; 2min 31s ago
     Docs: man:NetworkManager(8)
 Main PID: 1323 (NetworkManager)
    Tasks: 3 (limit: 4915)
   CGroup: /system.slice/NetworkManager.service
           └─1323 /usr/sbin/NetworkManager --no-daemon

Nov 19 10:14:23 vmhost3 NetworkManager[1323]:   [1511039663.2271] device (virbr0): bridge port virbr0-nic wa
Nov 19 10:14:23 vmhost3 NetworkManager[1323]:   [1511039663.2271] device (virbr0-nic): Activation: connectio
Nov 19 10:14:23 vmhost3 NetworkManager[1323]:   [1511039663.2272] device (virbr0): state change: secondaries
Nov 19 10:14:23 vmhost3 NetworkManager[1323]:   [1511039663.2273] manager: NetworkManager state is now CONNE
Nov 19 10:14:26 vmhost3 NetworkManager[1323]:   [1511039666.6732] device (virbr0): Activation: successful, d
Nov 19 10:14:26 vmhost3 NetworkManager[1323]:   [1511039666.6772] device (virbr0-nic): state change: ip-conf
Nov 19 10:14:26 vmhost3 NetworkManager[1323]:   [1511039666.6780] device (virbr0-nic): state change: seconda
Nov 19 10:14:26 vmhost3 NetworkManager[1323]:   [1511039666.6783] device (virbr0): bridge port virbr0-nic wa
Nov 19 10:14:26 vmhost3 NetworkManager[1323]:   [1511039666.6784] device (virbr0-nic): released from master 
Nov 19 10:14:54 vmhost3 NetworkManager[1323]:   [1511039694.5366] bluez: use BlueZ version 5
[root@vmhost3 ~]# 

That does not appear to be causing my issue however, as my network-scripts are configured not to use NetworkManager and I have “network” enabled; but networking does not appear to be working correctly on that one host. And doing a systemctl restart network (which I would hope leaves NetworkManager out of it) does not fix the issue.

[root@vmhost3 network-scripts]# grep -i DNS ifcfg*
ifcfg-br0:DNS2="192.168.1.179"
ifcfg-br0:DNS1="192.168.1.181"
ifcfg-br0:DNS3="192.168.1.1"
ifcfg-enp2s0:IPV6_PEERDNS="no"
ifcfg-enp2s0:PEERDNS="yes"
ifcfg-enp2s0:IPV6_PEERDNS="yes"
[root@vmhost3 network-scripts]# grep -i GATE ifcfg*
ifcfg-br0:GATEWAY0="192.168.1.1"
ifcfg-lo:# If you're having problems with gated making 127.0.0.0/8 a martian,
[root@vmhost3 network-scripts]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.1.181
nameserver 192.168.1.179
[root@vmhost3 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 br0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
[root@vmhost3 network-scripts]#
[root@vmhost3 network-scripts]# uname -a
Linux vmhost3 4.13.12-300.fc27.x86_64 #1 SMP Wed Nov 8 16:38:01 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Worse, it is not consistently possible to manually correct the blasted thing. Using “route add default gw 192.168.1.1” worked after the fourth reboot; prior to that it made no difference to the routing table.

The /etc/resolv.conf has to be manually edited to add the third DNS nameserver entry.

On a second host that has been upgraded, networking is set up almost correctly; it adds two entries for the default route instead of the one expected, but it seems to work OK.

[root@vmhost1 network-scripts]# grep -i DNS ifcfg*
ifcfg-br0:DNS1="192.168.1.181"
ifcfg-br0:DNS2="192.168.1.179"
ifcfg-br0:DNS3="192.168.1.1"
ifcfg-em1:IPV6_PEERDNS="no"
ifcfg-em1:PEERDNS="yes"
[root@vmhost1 network-scripts]# grep -i GATE ifcfg*
ifcfg-br0:GATEWAY0="192.168.1.1"
ifcfg-lo:# If you're having problems with gated making 127.0.0.0/8 a martian,
[root@vmhost1 network-scripts]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.1.181
nameserver 192.168.1.179
nameserver 192.168.1.1
[root@vmhost1 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 br0
0.0.0.0         192.168.1.1     0.0.0.0         UG    425    0        0 br0
192.168.1.0     0.0.0.0         255.255.255.0   U     425    0        0 br0
[root@vmhost1 network-scripts]# uname -a
Linux vmhost1 4.13.12-300.fc27.x86_64 #1 SMP Wed Nov 8 16:38:01 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@vmhost1 network-scripts]#

The only difference between the configuration files on the two hosts is the interface names and the UUID/HWADDR of the entries, so both servers should have the same result; but they don’t.

Bacula Director Database Update

No problems running the normal mysql_upgrade script but there are issues with bacula if you are running mariadb in safe mode.

The upgrade script for mariadb needs work. In the file /usr/libexec/bacula/update_mysql_tables you must search for the line “UPDATE Version SET VersionId=16;” and wrap two lines around it thus…

SET SQL_SAFE_UPDATES=0;
UPDATE Version SET VersionId=16;
SET SQL_SAFE_UPDATES=1;

This avoids the error “ERROR 1175 (HY000): You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column” which prevents the version number being updated.

However it remained broken. It looks like the upgrade is a no-go and I will have to try recreating all the tables from scratch; backups running after the mysql upgrade scripts completed successfully still failed with database errors…

21-Nov 17:00 bacula-dir JobId 0: Fatal error: sql_create.c:84 Create DB Job record INSERT INTO Job (Job,Name,Type,Level,JobStatus,SchedTime,JobTDate,ClientId,Comment) VALUES ('BackupVMhost3.2017-11-21_17.00.00_12','BackupVMhost3','B','I','C','2017-11-21 17:00:00',1511236800,11,'') failed. ERR=Field 'StartTime' doesn't have a default value
21-Nov 17:00 bacula-dir JobId 0: Fatal error: sql_create.c:84 Create DB Job record INSERT INTO Job (Job,Name,Type,Level,JobStatus,SchedTime,JobTDate,ClientId,Comment) VALUES ('BackupPuppet.2017-11-21_17.00.01_13','BackupPuppet','B','I','C','2017-11-21 17:00:01',1511236801,13,'') failed. ERR=Field 'StartTime' doesn't have a default value
21-Nov 23:55 bacula-dir JobId 0: Fatal error: sql_create.c:84 Create DB Job record INSERT INTO Job (Job,Name,Type,Level,JobStatus,SchedTime,JobTDate,ClientId,Comment) VALUES ('BackupCatalog.2017-11-21_23.55.00_14','BackupCatalog','B','F','C','2017-11-21 23:55:00',1511261700,7,'') failed. ERR=Field 'StartTime' doesn't have a default value

That did actually surprise me as the upgrade scripts for bacula normally work; but not this time.

The solution unfortunately is to stop bacula-dir and bacula-sd, delete all the tables in the bacula database, and recreate them from scratch using /usr/libexec/bacula/make_mysql_tables; you must also delete everything from /var/spool/bacula except the catalog backup script or custom scripts you have placed there. Also, as you are starting from scratch, delete all the existing backup volumes from the directory your bacula-sd storage process is configured to use.

Then you can restart bacula-sd and bacula-dir. And unfortunately as you no longer have any backups you must then run backups for each of your servers which will be full backups on the first run for each. You could wait until the backups are scheduled to run but as they will be full backups on the first run it is probably better to manually do them at a time when you know network congestion is at a minimum.
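A sketch of the rebuild for reference, assuming the default database name of bacula and that the existing database grants are to be reused (they survive a drop/create of the database).

systemctl stop bacula-dir bacula-sd
mysql -u root -p -e "DROP DATABASE bacula; CREATE DATABASE bacula;"
/usr/libexec/bacula/make_mysql_tables -u root -p    # extra arguments are passed to the mysql client
# clear /var/spool/bacula by hand (keep the catalog backup script and any custom scripts)
# and delete the old volumes from your bacula-sd storage directory
systemctl start bacula-sd bacula-dir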

But at least backups are working again.

The HTTP service

As always, if you need to share files between webapps and scripts, vi /usr/lib/systemd/system/httpd.service and change PrivateTmp=true to PrivateTmp=false.

Mariadb library issues

While mariadb is running OK and the mysql_upgrade script had no problems, there appears to be something wrong with the mariadb-libs packaging; certainly not all the files rpm -ql shows for the package existed after the upgrade to F27.

This issue was found in investigating issues with the NRPE mysql plugin so refer to that section for details on the issues with mariadb-libs. At this point in time I have not found a resolution.

Note that a “dnf reinstall mariadb-libs” did create all the expected files, but even though the ld.so.conf.d file for mysql is correct, and after rerunning ldconfig, the mysql library files are still not found by programs wanting to use them; at least not by the nrpe plugin, which is what I am trying to get working first.

NRPE plugins

As always, the systemd startup scripts for NRPE have to be edited to turn off the requirement to use SSL; that’s no real issue as I have gotten used to that.

And as always more of the supplied plugins have stopped working. The HTTP and MYSQL ones have stopped working this time.

The HTTP plugin

The HTTP plugin is triggering error 400’s from the server for no reason that I can determine. As seen in the log clip the server responds OK to non-nrpe requests.

192.168.1.170 - - [21/Nov/2017:09:47:20 +1300] "GET / HTTP/1.0" 400 226 "-" "check_http/v2.2.1 (nagios-plugins 2.2.1)"
192.168.1.170 - - [21/Nov/2017:09:52:20 +1300] "GET / HTTP/1.0" 400 226 "-" "check_http/v2.2.1 (nagios-plugins 2.2.1)"
192.168.1.179 - - [21/Nov/2017:09:54:34 +1300] "GET / HTTP/1.1" 200 2285 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0"

Using telnet to issue “GET / HTTP/1.0” and “GET / HTTP/1.1” manually also gets an error 400 (although using just “GET /” works OK); so perhaps a behaviour change in the HTTP server itself… although Firefox has no issue retrieving the page with the full string.
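One guess, not verified: newer Apache releases are stricter about the Host: header, and a bare telnet request does not send one. curl can reproduce both cases if you want to test that theory; the server address below is a placeholder, and giving -H an empty ‘Host:’ value suppresses the header curl would normally add.

# with a Host header
curl -s -o /dev/null -w '%{http_code}\n' --http1.0 http://yourserver/
# without a Host header, mimicking the bare telnet request
curl -s -o /dev/null -w '%{http_code}\n' --http1.0 -H 'Host:' http://yourserver/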

The MYSQL plugin, possibly an issue with mariadb packages and not the plugin ?

[root@vosprey2 httpd]# /usr/lib64/nagios/plugins/check_mysql
/usr/lib64/nagios/plugins/check_mysql: error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory

RPM says that file is installed as part of mariadb-libs… which is installed, but the file is not present.

[root@vosprey2 httpd]# dnf provides libmysqlclient.so.18
Last metadata expiration check: 0:17:01 ago on Tue 21 Nov 2017 10:22:24 NZDT.
mariadb-libs-3:10.2.9-3.fc27.i686 : The shared libraries required for MariaDB/MySQL clients
Repo        : fedora
Matched from:
Provide    : libmysqlclient.so.18

[root@vosprey2 httpd]# rpm -qa | grep -i mariadb-libs
mariadb-libs-10.2.9-3.fc27.x86_64

[root@vosprey2 httpd]# rpm -ql mariadb-libs
/etc/ld.so.conf.d/mariadb-x86_64.conf
/etc/my.cnf.d/client.cnf
/usr/lib/.build-id
/usr/lib/.build-id/a5
/usr/lib/.build-id/a5/c17835b924d9830cc6b91764d3c3e80f650d05
/usr/lib64/mysql/libmariadb.so.3
/usr/lib64/mysql/libmysqlclient.so.18

[root@vosprey2 httpd]# cd /usr/lib64/mysql
[root@vosprey2 mysql]# ls -la
total 424
drwxr-xr-x.  3 root root   4096 Nov 22 10:07 .
dr-xr-xr-x. 85 root root  69632 Nov 19 13:33 ..
-rw-r--r--.  1 root root   6773 Oct  6 12:18 INFO_BIN
-rw-r--r--.  1 root root    172 Sep 25 19:33 INFO_SRC
lrwxrwxrwx.  1 root root     15 Oct  6 12:18 libmariadb.so -> libmariadb.so.3
-rwxr-xr-x.  1 root root 338792 Oct  6 12:22 libmariadb.so.3
lrwxrwxrwx.  1 root root     13 Oct  6 12:18 libmysqlclient_r.so -> libmariadb.so
lrwxrwxrwx.  1 root root     13 Oct  6 12:18 libmysqlclient.so -> libmariadb.so
drwxr-xr-x.  2 root root   4096 Nov 19 13:14 plugin

A re-install of the mariadb-libs package does create the required symbolic link. It does not fix the issue which is unfortunate :-(.


[root@vosprey2 mysql]# dnf reinstall mariadb-libs
Last metadata expiration check: 2:42:08 ago on Wed 22 Nov 2017 07:25:33 NZDT.
Dependencies resolved.
=========================================================================================================
 Package                   Arch                Version                         Repository           Size
=========================================================================================================
Reinstalling:
 mariadb-libs              x86_64              3:10.2.9-3.fc27                 fedora              150 k

Transaction Summary
=========================================================================================================

Total download size: 150 k
Is this ok [y/N]: y
Downloading Packages:
mariadb-libs-10.2.9-3.fc27.x86_64.rpm                                    293 kB/s | 150 kB     00:00    
---------------------------------------------------------------------------------------------------------
Total                                                                    101 kB/s | 150 kB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                 1/1 
  Reinstalling     : mariadb-libs-3:10.2.9-3.fc27.x86_64                                             1/2 
  Running scriptlet: mariadb-libs-3:10.2.9-3.fc27.x86_64                                             1/2 
  Erasing          : mariadb-libs-3:10.2.9-3.fc27.x86_64                                             2/2 
  Running scriptlet: mariadb-libs-3:10.2.9-3.fc27.x86_64                                             2/2 
  Verifying        : mariadb-libs-3:10.2.9-3.fc27.x86_64                                             1/2 
  Verifying        : mariadb-libs-3:10.2.9-3.fc27.x86_64                                             2/2 

Reinstalled:
  mariadb-libs.x86_64 3:10.2.9-3.fc27                                                                    

Complete!
[root@vosprey2 mysql]# ls
INFO_BIN  libmariadb.so    libmysqlclient_r.so  libmysqlclient.so.18
INFO_SRC  libmariadb.so.3  libmysqlclient.so    plugin
[root@vosprey2 mysql]# ls -la
total 424
drwxr-xr-x.  3 root root   4096 Nov 22 10:07 .
dr-xr-xr-x. 85 root root  69632 Nov 19 13:33 ..
-rw-r--r--.  1 root root   6773 Oct  6 12:18 INFO_BIN
-rw-r--r--.  1 root root    172 Sep 25 19:33 INFO_SRC
lrwxrwxrwx.  1 root root     15 Oct  6 12:18 libmariadb.so -> libmariadb.so.3
-rwxr-xr-x.  1 root root 338792 Oct  6 12:22 libmariadb.so.3
lrwxrwxrwx.  1 root root     13 Oct  6 12:18 libmysqlclient_r.so -> libmariadb.so
lrwxrwxrwx.  1 root root     13 Oct  6 12:18 libmysqlclient.so -> libmariadb.so
lrwxrwxrwx.  1 root root     15 Oct  6 12:18 libmysqlclient.so.18 -> libmariadb.so.3
drwxr-xr-x.  2 root root   4096 Nov 19 13:14 plugin
[root@vosprey2 mysql]# /usr/lib64/nagios/plugins/check_mysql
/usr/lib64/nagios/plugins/check_mysql: error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory

[root@vosprey2 mysql]# cd /etc/ld.so.conf.d
[root@vosprey2 ld.so.conf.d]# ls
bind99-x86_64.conf                   kernel-4.13.12-200.fc26.x86_64.conf  mariadb-x86_64.conf
kernel-4.12.11-300.fc26.x86_64.conf  kernel-4.13.12-300.fc27.x86_64.conf
[root@vosprey2 ld.so.conf.d]# cat mariadb*
/usr/lib64/mysql
[root@vosprey2 ld.so.conf.d]# ldconfig
[root@vosprey2 ld.so.conf.d]# /usr/lib64/nagios/plugins/check_mysql
/usr/lib64/nagios/plugins/check_mysql: error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory

I will have to yet again write custom plugins to replace the ones that have stopped working.

Posted in Unix | Comments Off on Upgrade to Fedora27 issues so far

Hackers are getting annoying

My web logs show that the below string is now being appended to GET query requests that take parameters; it has been appended to quite a few requests to my website from multiple ip-addresses.

or (1,2)=(select*from(select name_const(CHAR(111,108,111,108,111,115,104,101,114),1),name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a) -- and 1=1'

Below are two addresses that were overzealous in doing so and were causing enough log activity that I took action to block them to make my logs readable again (the iptables rule used is sketched below).

[root@vosprey2 tmp]# nslookup 184.168.192.72
72.192.168.184.in-addr.arpa name = p3nlwpweb050.shr.prod.phx3.secureserver.net.

[root@vosprey2 tmp]# nslookup 95.154.220.205
205.220.154.95.in-addr.arpa name = server.ambinet.net.
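The blocking itself is just a plain iptables drop inserted at the top of the INPUT chain; remember to persist it however you normally save your rules.

iptables -I INPUT -s 184.168.192.72 -j DROP
iptables -I INPUT -s 95.154.220.205 -j DROP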

Just shows hackers are still randomly targeting any internet facing site, even personal ones.

Posted in Unix | Comments Off on Hackers are getting annoying