The Puppet Learning VM tutorial environment is available from https://puppet.com/download-learning-vm as an OVA file for VMware or VirtualBox.
The good news: once the disk image is extracted from the .ova file and converted from vmdk to qcow2 using qemu-img, the resulting qcow2 disk can be run under KVM simply by launching it from virt-manager using that existing disk. One proviso: give it a minimum of 4GB of memory; trying to run it in a VM with 3GB of memory will eventually just lock it up.
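The extract-and-convert step can be sketched as below. The filenames are illustrative only; list the archive first to see what the disk image inside your .ova is actually called.

```shell
# An OVA is just a tar archive -- list it first to find the real vmdk name.
tar -tf learning_puppet_vm.ova

# Extract the disk image (filename here is a placeholder).
tar -xf learning_puppet_vm.ova learning_puppet_vm-disk1.vmdk

# Convert the vmdk to qcow2 for use under KVM.
qemu-img convert -p -f vmdk -O qcow2 \
    learning_puppet_vm-disk1.vmdk learning_puppet_vm.qcow2

# Sanity-check the result before pointing virt-manager at it.
qemu-img info learning_puppet_vm.qcow2
```

In virt-manager, create a new VM using "import existing disk image" and point it at the qcow2 file, remembering the 4GB memory minimum noted above.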
A good intro, but with a few minor issues. Not all the “lab” examples in later sections work: the outputs report 100% success in applying the manifests/classes/profiles/roles, but none of the services actually get started no matter how many times I restarted the affected quests, so all the “curl” test commands in later sections fail with nothing listening on port 80 on any of the test instances (it is possible to ssh into the instances to confirm that). But as an introduction to Puppet it is very useful.
Either Puppet Enterprise doesn’t offer much extra in the way of functionality, or the training VM concentrates mainly on puppet-core. What it covers that is missing from puppet-core is the “puppet job” command, used to initiate jobs for nodes/applications from the puppetserver machine. Oh, and the web interface: the tutorial covers setting up a new user on the web interface (do that step; having that user is useful for looking at the reports from job runs to see what the errors are), but I didn’t really play with the web interface other than looking at the ‘job run’ error reports, and the tutorial coverage of it is pretty much just setting up that new user.
One of the key things learnt is that the “puppet parser validate …/class/manifests/xxx.pp” command is of limited use: it syntax checks but does not check dependencies. While using the Puppet Learning VM I mistyped a ‘class xxx::submodule’ name, although the pp filename was correct. The parser validate command reported no errors in that file or in the init.pp that referred to the class… so I guess it just checks that the included file referred to in init.pp exists (if that; it may just syntax check). The --noop test on the agent flagged the error when the manifest was used.
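A minimal sketch of the kind of mismatch parser validate misses (module and class names here are hypothetical, not the ones I actually mistyped):

```puppet
# modules/myapp/manifests/config.pp
# The filename says "config", but the class name inside has a typo:
# "confg" instead of "config".
class myapp::confg {
  file { '/etc/myapp.conf':
    ensure => file,
  }
}

# modules/myapp/manifests/init.pp
class myapp {
  include myapp::config   # refers to the correctly spelled name
}
```

`puppet parser validate` passes both files, since each is syntactically valid on its own; the mismatch only surfaces when a catalog is compiled for a node that includes the class, e.g. during a `puppet agent -t --noop` run.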
The “puppet job” command used in the PE tutorial seems reasonably useful, but as it is not available in the free puppet core package I have skipped over it, other than noting I will probably have difficulty testing application deployments (although as puppet core does support the “puppet parser validate --app_management” command, I suppose applications may be supported??? Without the “job run --application” command available, I’m not sure how the agents would sort the dependencies out). Anyway, I don’t really have a need for orchestrating an application across multiple servers at home, so that is not an issue for me.
The “defined resource type” section I am still having trouble with, in that nobody would ever use the example in the real world, and I am having trouble thinking of where it could be used. The example adds (ensures they exist) users… err/hmm/what? An auditor’s field day! A poor security admin could try deleting users off a server, but Puppet would put them back again. I understand why it was used as an example, as quite honestly I cannot think of any other use for a “defined resource” either; which is why I think I will have trouble remembering the concept. But the example works and shows how the feature functions. I cannot think of anything I can use that functionality for at the moment anyway.
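Thinking about it in terms of my own setup, one place a defined resource type might fit is stamping out repeated per-check config fragments, such as NRPE command entries. A hypothetical sketch (names and plugin paths are assumptions, not from the tutorial):

```puppet
# A defined resource type: one NRPE command entry per check.
# Unlike a class, it can be declared many times with different titles.
define nrpe::command (
  String $cmd,
) {
  file { "/etc/nrpe.d/${title}.cfg":
    ensure  => file,
    content => "command[${title}]=${cmd}\n",
    # Assumes a service { 'nrpe': } resource is declared elsewhere
    # in the module; otherwise this notify will fail to compile.
    notify  => Service['nrpe'],
  }
}

# Each declaration stamps out another config fragment from the same pattern:
nrpe::command { 'check_root_disk':
  cmd => '/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /',
}
nrpe::command { 'check_load':
  cmd => '/usr/lib64/nagios/plugins/check_load -w 5,4,3 -c 10,8,6',
}
```

The pattern is “many near-identical resources differing only in a few parameters”, which a plain class (declarable only once) cannot express.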
The application orchestrator section examples define an application with hard-coded IP addresses. I will have to spend some time looking at whether that can be changed to use IP addresses provided by facter; I’m sure it can, or the ability to orchestrate applications onto new VMs would be pointless. But as noted above, with puppet core not providing the “job run” function to deploy applications, I’m not sure that will be useful to me anyway… especially as I cannot see the point in creating an application stack with empty databases.
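For the simple case of a node using its own address, replacing a hard-coded IP with a facter value looks roughly like this (the config file and key name are hypothetical):

```puppet
# Pull the node's primary address from the structured 'networking' fact
# instead of hard-coding it in the manifest.
$my_ip = $facts['networking']['ip']

file { '/etc/myapp/listen.conf':   # hypothetical config file
  ensure  => file,
  content => "listen_address = ${my_ip}\n",
}
```

Cross-node wiring (database server address into the web tier, say) is what the orchestrator’s exported application parameters are for, so whether this helps there I have not yet established.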
Anyway, after running through the tutorial VM I have managed to:
- split my working ‘live’ nrpe manifest file into multiple ‘functional’ pp files under the manifests directory
- used a template to recreate my existing bacula-fd configurations on all the servers, so I can now use Puppet to install bacula-fd on new servers
- used the example motd configuration to have a consistent (if unused) motd file on all my servers
- used Puppet to push out a “standard” configuration file and standard pre-login banner for sshd; however, a “notify => Service['sshd']” throws up an error that service sshd is undefined, so on each server you have to run “systemctl restart sshd” or “service sshd restart” manually. That means it cannot really protect against unauthorised changes to that subsystem, which is weird as sshd is on all *nix servers
- created an “allservers” role and used that role to deploy the four manifests in place of four separate include statements for the node(s)
- I am still using only the “default” node entry, with a named node entry only when I want to test a new module, as currently the only real use I have for Puppet is keeping configuration files in sync; although it is nice to know that by using the “default” node any new VM I spin up will have nrpe and bacula-fd available for my backup and nagios servers to use
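On the sshd “service undefined” error in the list above: as far as I can tell, `notify` can only point at a resource that is actually declared in the catalogue, so declaring the service alongside the file resource should fix it. A sketch (module and file names are mine, not the tutorial's):

```puppet
class ssh_config {
  file { '/etc/ssh/sshd_config':
    ensure => file,
    source => 'puppet:///modules/ssh_config/sshd_config',
    notify => Service['sshd'],
  }

  # Without this declaration the notify above fails at catalog
  # compile time with "Could not find resource 'Service[sshd]'".
  service { 'sshd':
    ensure => running,
    enable => true,
  }
}
```

With the service declared, Puppet restarts sshd itself whenever the config file changes, so unauthorised edits get reverted and applied without a manual restart.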
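The role arrangement is just a wrapper class; a sketch, with the four module names assumed to match mine:

```puppet
# modules/role/manifests/allservers.pp
# One class pulling in everything every server should have.
class role::allservers {
  include nrpe
  include bacula_fd
  include motd
  include ssh_config
}

# site.pp -- the default node gets the role with a single include,
# so any new VM picks up nrpe and bacula-fd automatically.
node default {
  include role::allservers
}
```

Adding a fifth module later then means touching only the role class, not every node definition.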
In the “Afterword” section of the tutorial is a link where Puppet Enterprise can be downloaded for free use on up to ten nodes; as I expect my VM farm to exceed that at some point, I will not bother with it.
The tutorial VM covers Puppet in enough detail to make it fairly easy to use, so if you are looking at Puppet you should download it and give it a try, which you can do as it runs under KVM.
It has given me enough insight to convince me I should continue using the free puppetserver from puppetlabs, but mainly to ensure all my KVM machines have a common set of scripts and basic system utilities configured. As I do not build that many new KVM machines, I won’t have a need to use it for installing/deploying onto new machines. And of course, where I do throw up/tear down test machines at a frequent rate in my little OpenStack lab, I use Heat templates to build the short-lived application stacks needed for multi-server deployments for whatever I am breaking, er, I mean testing :-).