Another new project – Walmart's OneOps – Day 2

DNF should have a few more warning messages

Playing with OneOps. Installing the vagrant setup.

It needs VirtualBox.
VirtualBox needs kernel-devel.

dnf install kernel-devel
No problems.

But VirtualBox still failed to install the kernel driver.

The problem was…
since installing the system there had been a kernel update in the repositories, so kernel-devel installed files for a newer kernel version than the one the system was actually running; the system was still running on the older kernel.

dnf update
shutdown -r now

Then “/usr/lib/virtualbox/vboxdrv.sh setup” compiled the kernel module OK.

It would have been nice if DNF had installed the kernel source files for the running kernel, or at least warned that it was installing kernel source for a version of the kernel that was not installed. The kernel-devel package probably needs a dependency on the matching kernel.
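A quick check before building third-party kernel modules would have caught this. These are standard commands, shown as the check I should have done rather than anything OneOps specific…

uname -r                      # the kernel the system is actually running
rpm -q kernel kernel-devel    # the kernel and kernel-devel versions installed

If the newest kernel-devel does not match the output of "uname -r", do the "dnf update" and reboot before trying to build the VirtualBox module.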

Anyway, Day 2 and the current issues are…

Issue One: the OneOps doc is missing a vital parameter, and following it as written gives the error below (see the update at the end of the post for the missing parameter)… oh, and you must be the root user to run this “vagrant up” command, which I have an issue with.

[root@localhost ~]# cd ~oneops/setup/vagrant
[root@localhost vagrant]# vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Box 'bento/centos-6.7' could not be found. Attempting to find and install...
    default: Box Provider: libvirt
    default: Box Version: >= 0
==> default: Loading metadata for box 'bento/centos-6.7'
    default: URL: https://atlas.hashicorp.com/bento/centos-6.7
The box you're attempting to add doesn't support the provider
you requested. Please find an alternate box or use an alternate
provider. Double-check your requested provider to verify you didn't
simply misspell it.

If you're adding a box from HashiCorp's Atlas, make sure the box is
released.

Name: bento/centos-6.7
Address: https://atlas.hashicorp.com/bento/centos-6.7
Requested provider: [:libvirt]
[root@localhost vagrant]# 

Issue Two: the doco states that once it does start up OneOps will be running on http://localhost:3000. So even though I am making progress, there is still the minor (ok, major) issue that all the interfaces are bound to localhost, and the VM being a “server” has no X GUI to run a web browser.

So, to progress I have to not only find all the config files using localhost but also find the config files selecting the “box”; browsing the “box” listings from another machine there is a bento/fedora-22 that might work. This may be explained in the documentation, but as noted in the prior post the documentation is also only served to localhost, so I am navigating it with a text browser (lynx), which makes finding things difficult.
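Until I find the documented way, some brute-force greps across the setup tree should at least narrow down which files to look at (untested suggestions, using the paths from the steps below)…

grep -rl -e 'localhost' -e '127\.0\.0\.1' ~oneops/setup
grep -rl 'bento/centos-6.7' ~oneops/setup    # where the vagrant "box" is selected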

Steps to get to the current point since the Day 1 post

These are the final working steps to get to this point, using DNF to install packages where possible instead of blindly following the OneOps doc, which takes you to vendor sites to download RPMs that will not install without manually resolving lots of dependencies that are better handled automatically via DNF where possible.

su - root

dnf install vagrant

cat << 'EOF' > /etc/yum.repos.d/virtualbox.repo
[virtualbox]
name=Fedora $releasever - $basearch - VirtualBox
baseurl=http://download.virtualbox.org/virtualbox/rpm/fedora/$releasever/$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc
EOF

dnf update                  # DO THIS to avoid conflicts with kernel-devel
dnf install kernel-devel    # needed for virtualbox
dnf install libvirt vagrant-libvirt  # used as a required driver for vagrant
shutdown -r now
dnf install VirtualBox-5.0
vi /etc/group
   Add the oneops user to the vboxusers group that was created

su - oneops
git clone https://github.com/oneops/setup
exit     # (back to the root user)
cd ~oneops/setup/vagrant
vagrant up
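A note on the /etc/group edit above: rather than editing the file by hand, the standard usermod command should achieve the same thing…

usermod -aG vboxusers oneops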

The Day 3 post will probably not be until the weekend.
Other minor projects I need to tidy up will keep me busy until then.

Update 2016/03/08

Ok, the error from the “vagrant up” command was that there was no image for libvirt… that would be why the OneOps install instructions wanted VirtualBox installed. The command that should have been in the OneOps documentation for using the vagrant setup is “vagrant up --provider virtualbox”.
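For reference, vagrant also honours an environment variable that sets the default provider, which would avoid having to remember the flag; untested here, but it is a documented vagrant setting…

export VAGRANT_DEFAULT_PROVIDER=virtualbox
vagrant up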

That gets past the no libvirt image error and we move on to the next error as below.

[root@localhost vagrant]# vagrant up --provider virtualbox
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'bento/centos-6.7' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'bento/centos-6.7'
    default: URL: https://atlas.hashicorp.com/bento/centos-6.7
==> default: Adding box 'bento/centos-6.7' (v2.2.5) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/bento/boxes/centos-6.7/versions/2.2.5/providers/virtualbox.box
==> default: Successfully added box 'bento/centos-6.7' (v2.2.5) for 'virtualbox'!
==> default: Importing base box 'bento/centos-6.7'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'bento/centos-6.7' is up to date...
==> default: Setting the name of the VM: oneops
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 3001 (guest) => 3003 (host) (adapter 1)
    default: 3000 (guest) => 9090 (host) (adapter 1)
    default: 8161 (guest) => 8166 (host) (adapter 1)
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["startvm", "45781759-297d-482c-9f70-9acbe0815c45", "--type", "headless"]

Stderr: VBoxManage: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole
[root@localhost vagrant]# 

This is unfortunately probably because I am doing all this testing in a VM instead of on physical hardware.
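If I do pick this up again, the usual fix is to enable nested virtualisation on the KVM host running this test VM; a sketch of the Intel/KVM settings, untested here, and the test VM would also need its CPU mode set to something like host-passthrough in its libvirt definition…

cat /sys/module/kvm_intel/parameters/nested        # N or 0 means nested VT-x is off
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel        # all guests must be shut down first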

So I am not sure if I want to take this any further at this time. The main reason for that is having ruby/rubygems installed on a system always causes problems with OS upgrades.
I do not really have the time to do a disk image backup, play with this, and do a disk image restore when finished playing… as this is my main ‘play/test’ system and it already has another four VMs running for three other projects that I have more need of and do not want interrupted.

So my curiosity about OneOps is now on hold until I have a window I can use a week to play and restore back to.

Posted in Unix | Comments Off on Another new project – Walmart's OneOps – Day 2

Another new project – Walmart's OneOps – Day 1

Since Walmart's “OneOps” platform has been open sourced and placed on github for the world to use I have added yet another project onto my TODO list.

From a quick read OneOps is designed to deploy VMs from a single management infrastructure onto any cloud platform; Azure, Rackspace, Amazon or even a local OpenStack system… basically being able to redeploy to whatever is cheapest at the time.

It is unlikely to be usable by me unless I install it on bare metal, as the minimum hardware requirements seem to be 16GB of memory… but for testing the installation and configuration steps I have fired up a 4GB VM to work out how to get the software installed; when I have that tested and documented I will use bare metal.

The first step for me was obviously to install the documentation. That is very briefly covered in the readme at the main documentation project at https://github.com/oneops/oneops.github.io. Did I mention very briefly.

Fortunately the Fedora documentation at https://developer.fedoraproject.org/tech/languages/ruby/gems-installation.html covered most of the steps needed to install ruby/gems.

The steps here are how to install the documentation, based on a merging of the references above. The commands need to be run as root of course.

dnf install ruby-devel
dnf group install "C Development Tools and Libraries"
dnf install redhat-rpm-config patch git
gem install jekyll
gem install redcarpet
useradd oneops
su - oneops
git clone https://github.com/oneops/oneops.github.io.git
cd oneops.github.io
jekyll serve --no-watch

And then… you should be able to view the documentation using a local browser at http://127.0.0.1:4000/

And there is my first obvious problem. Netstat does indeed show a service listening on 127.0.0.1 port 4000… the key problem being it is listening only on 127.0.0.1, not on 0.0.0.0… the VM is a “server” with no desktop environment and no web browsers, and because the service is bound to localhost a browser on another machine cannot read the doco!

I installed my favourite text browser lynx onto the VM and confirmed the doco is being served correctly, but of course it is hard to follow the doc in a text browser.

Lots of grep on 127.0.0.1 and localhost failed to find where it was set.

So Day 2 to Day N will be spent trying to find where that is set, so the documentation is available over the network to a GUI browser rather than only to localhost. If I cannot find that by the weekend I will install apache and “proxy” to the port, but I am trying to minimise software installs to just what is required.
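If it turns out not to be configurable in the project files at all, jekyll itself can be told which address to bind to, either on the command line or with a host: entry in _config.yml; untested against this particular site, but it is a standard jekyll option…

jekyll serve --no-watch --host 0.0.0.0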

So watch this space.

Posted in Unix | Comments Off on Another new project – Walmart's OneOps – Day 1

An interesting issue with SSH in F23

Don’t know when they introduced this gem. I have been SSHing into servers for years, but I had a need to use DSA keys to SSH into an F22 server from an F23 one, and DSA keys just refused to work from the F23 client.

The F22 target server was happily accepting DSA key logins from my F17 desktop, so the issue was not on the target server but on the F23 client. And I’m pretty sure before I upgraded the client machine from F22 to F23 it was working.

(yes I know F17 is obsolete, but changes to either mencoder or ffmpeg in F18 broke most of my key video capture scripts, I tried F19, F20, F21 and had to bare metal restore back to F17 in each case to keep working; so I gave up trying to upgrade, my main desktop will be F17 for life)

Anyway, after generating new DSA keys and still having it fail I ran the ssh connection in verbose mode and it came up with this little gem.

debug1: Next authentication method: publickey
debug1: Skipping ssh-dss key /home/mark/.ssh/vmhost1_id_dsa for not in PubkeyAcceptedKeyTypes
debug1: Next authentication method: password

Now the not-so-fun bit was working out where the hell that lived; a “grep -i PubkeyAcceptedKeyTypes *” in the /etc/ssh directory on the client found no occurrences of it. Even though I had already decided the issue was on the client I repeated the grep on the target, with no matches found there either.

The way to get it working

On the F23 client I was already using customised config files. For most of you the key thing to know is that if you have a file ~/.ssh/config (the default filename), parameter values in there can override the default values for the ssh client session. As it is clear an upgrade to F23 changed the behaviour of SSH, it is probably best to use a customised config file anyway.

I added an extra line in the ~/.ssh/config file as below

[mark@vmhost1 .ssh]$ cat config
IdentityFile ~/.ssh/vmhost1_id_dsa
PubkeyAcceptedKeyTypes ssh-dss

And ssh using DSA keys started working again from the F23 client.
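If you do not want to change the accepted key types for every connection, the same settings can be scoped to just the one target host; a sketch, noting that the “+” form appends ssh-dss to the default list rather than replacing it (OpenSSH 7.x syntax)…

Host vmhost1
    IdentityFile ~/.ssh/vmhost1_id_dsa
    PubkeyAcceptedKeyTypes +ssh-dss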

Notes

  • Such a major behaviour change was probably documented somewhere, but who reads thousands of pages of package update notes
  • There was probably a very good reason that change was done, although if so it was done in a stupid way…
  • …if it was for any sort of security reason it would be done on the server side, not on a client

Perhaps this change is just an initial step to encourage people to change the target server configuration before the server defaults change to lock out DSA keys by default; which is unlikely to happen as there are probably thousands of client software packages that would break if that was done.
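For reference, the equivalent server-side change (if you administer the target and want it to keep accepting DSA keys after the server defaults do change) would be a line like the following in /etc/ssh/sshd_config, followed by an sshd restart; the same option name is used there in OpenSSH 7.x…

PubkeyAcceptedKeyTypes +ssh-dss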

But the F23 ssh client default configuration has changed to prevent DSA keys being used for ssh-key logins, and the above will fix it. For F23 anyway, who knows what they will change next.

Posted in Unix | Comments Off on An interesting issue with SSH in F23

F21 to F23 upgrade, issue 2, OpenVPN behaviour changes

The way OpenVPM prompts for access has changed, they have started migrating it to run under systemd. The existing startup script I was using now fails.

[root@vpnserver init.d]# service openvpn start
[root@vpnserver init.d]# 
Broadcast message from root@vpnserver (Fri 2016-02-26 17:38:40 NZDT):

Password entry required for 'Enter Private Key Password:' (PID 1328).
Please enter password with the systemd-tty-ask-password-agent tool!

My first thought was that there is probably a new service. So I had a quick look, found one that was disabled, and tried to enable it… that must also be in development, as all the packages are fully installed but this service is not yet ready to run.

[root@vpnserver init.d]# systemctl status systemd-ask-password-console.service
● systemd-ask-password-console.service - Dispatch Password Requests to Console
   Loaded: loaded (/usr/lib/systemd/system/systemd-ask-password-console.service; static; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:systemd-ask-password-console.service(8)

Feb 26 17:38:40 vpnserver systemd[1]: Stopped Dispatch Password Requests to Console.

[root@vpnserver init.d]# systemctl enable systemd-ask-password-console.service
The unit files have no [Install] section. They are not meant to be enabled
using systemctl.

Anyway, back to my old buddy google.
Found this helpful thread on the subject at https://sourceforge.net/p/openvpn/mailman/message/34319245/

As a result of that I found two methods that work to start OpenVPN again.

Method 1, that I have decided not to use yet

The first I will probably have to use after the next upgrade, inconvenient though it is. Strangely enough the original error message stated exactly what to do.

The original command can still be used

openvpn --cd /etc/openvpn --daemon --config /etc/openvpn/server.conf

But it no longer prompts for a password, so after it has gone daemon on the command line use the command

systemd-tty-ask-password-agent

That will prompt for the password and pass it through to OpenVPN.

The reason I have decided not to use that method yet is that there can be many outstanding prompts; well, not at the moment, but as more services under systemd start using this method there will be. There is the command “systemd-tty-ask-password-agent --query” which I assume should list them all; it doesn’t at the moment, at the moment it is just treated as an invalid response to the OpenVPN prompt.
Fortunately I don’t start OpenVPN from systemd but have always started it manually because of the password prompt, so when using systemd-tty-ask-password-agent to get an enter-password prompt I know there is only that one outstanding.

Method 2, force emulation of the original behaviour

As this changed behaviour was identified as potentially causing issues, the parameter --askpass can be used to force the old behaviour where the passphrase is requested before the OpenVPN server switches to daemon mode.

By changing my original command

openvpn --cd /etc/openvpn --daemon --config /etc/openvpn/server.conf

To be the command below

openvpn --cd /etc/openvpn --askpass --daemon --config /etc/openvpn/server.conf

The original behaviour is restored and the passphrase is requested by OpenVPN when the startup script is run, without having to start a separate far-too-long-to-type program to enter the password.
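For completeness, --askpass can also take a file argument holding the passphrase, which would allow a fully unattended start; I am not using that as it means storing the passphrase in plain text, and the filename below is just an example…

openvpn --cd /etc/openvpn --askpass /etc/openvpn/pass.txt --daemon --config /etc/openvpn/server.conf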

Summary

Thanks to google, and the helpful post it found, my VPN server is working again; I will be using the second method, although I have left a few comments in my startup script on how to use the first method in case that is the only option after the next upgrade.

Posted in Unix | Comments Off on F21 to F23 upgrade, issue 2, OpenVPN behaviour changes

F21 to F23 upgrade, issue 1, bacula upgrade

The bacula upgrade script fails for mariadb (and I assume mysql) databases that are running in the recommended ‘safe mode’.

The script /usr/libexec/bacula/update_bacula_tables mysql (basically runs update_mysql_tables) fails with the error
You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column

Fortunately it is the last step in the upgrade script, only the command below fails.

MariaDB [bacula]> UPDATE Version SET VersionId=15;
ERROR 1175 (HY000): You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column

And the workaround is simple

MariaDB [bacula]> SET SQL_SAFE_UPDATES = 0;
Query OK, 0 rows affected (0.00 sec)

MariaDB [bacula]> UPDATE Version SET VersionId=15;
Query OK, 1 row affected (0.04 sec)
Rows matched: 1  Changed: 1  Warnings: 0
                                                                               
MariaDB [bacula]> SET SQL_SAFE_UPDATES = 1;
Query OK, 0 rows affected (0.00 sec)

MariaDB [bacula]>quit

[root@server7]# service mariadb restart
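If you prefer to apply the fix without an interactive MariaDB session, a one-liner along these lines should do the same thing (untested by me, and it assumes the client can connect without a password, which as noted below was the case on my server)…

mysql bacula -e "SET SQL_SAFE_UPDATES=0; UPDATE Version SET VersionId=15; SET SQL_SAFE_UPDATES=1;"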

Now, an issue would be that at no point does it prompt for a userid or password.

It did highlight a rather major point: I had omitted to set a root password for mariadb on my bacula backup server. All the users had one, but forgetting root… well, that is fixed now; I have run the mysql_secure_installation script, which should have been run ages ago.

Which tends to imply that with a password set on the root mariadb user the script would fail anyway.

But as bacula is the best free file-level backup tool I have ever come across for linux I can live with the inconvenience of having it break every time a yum/dnf update upgrades it to a new version; it is a pain though.

Posted in Unix | Comments Off on F21 to F23 upgrade, issue 1, bacula upgrade

For Turnkey3 users, there is now a TK4- available

Another updated version of Turnkey3 called TK4- is floating about on the internet. The primary site seems to be http://wotho.ethz.ch/tk4-/. It is not the long awaited Turnkey4 but a version of Turnkey3 with a lot of connectivity enhancements implemented.

The doco says it is shipped as a full solution so users do not need to know anything about
MVS in order to obtain the extra SNA protocols.

And it is shipped as a full running system, unzip and go. For that reason if you have never gone through a Turnkey3 install which walks you through the steps to be taken to install a MVS3.8J system from scratch I would suggest you still install TK3 first just to see the steps involved, and to end up with a lot of very useful jcl decks; then use TK4-.

The main changes from TK3 seem to be

These highlights are based upon the documentation; I have not exercised any of them yet. I am concentrating on making it look a bit more like my system for a hands-off environment, or it would not be useful to me.

  • JES2 remote print available, providing SNA LU0 and LU2; the SNA connectivity seems to be the highlight
  • the supplied hercules binaries apparently have the tcpip interrupt enabled in them, but as the linux binaries do not run on a Fedora23 system (it looks like that is because of an F23 packaging bug affecting shared libraries rather than an issue with the binaries [see end of post]) I have no way of testing if the tcpip feature has actually been implemented
  • it has TCAM and TP running
  • JRP printing to a 3270 is installed (see notes from running it below)
  • the CTC devices are in the IOGEN
  • it has a FTPD task, so presumably allows FTP (see notes from running it below)
  • RFE (review) is installed (as seen in screen shot in manual) as well as RPF
  • the CBT and MVSSRC volumes are optional; as the MVSSRC volumes are needed for any serious coding they at least have to be added separately

Changes observed from running it unmodified are

  • minor issue: it has an FTPD proc so presumably should provide ftp services, however every attempt to start the proc results in a SOC1 abend; it may work with the supplied binaries (which do not run on F23), but because the F23 repo hercules binary does not have the tcpip feature implemented I do not expect it to work anyway
  • minor issue: the JRP printing to a 3270 doesn’t work. The appl can be logged onto OK and the function keys to set the device to accept printout all work as documented, but printing to the application gets I/O errors. I am assuming the documentation for using it is wrong in this case. A minor issue as it is not really needed, it just looks cool
  • a major issue with the supplied linux startup script,
    • As supplied, using the provided startup script results in SPY showing no console log output and MVS command responses going to the combined hercules/MVS console log; plus the SPY version in TK4- doesn’t seem to issue JES2 commands, and actually neither does IMON, so it is possible JES2 responses are not shown in the combined log
      My solution is simply to use my existing TK3 linux headless startup script (available on my website of course) instead of the supplied script(s), which allows console displays in both SPY and IMON correctly
  • Catalogs: are not to my liking. The catalog SYS1.UCAT.MVS has the aliases for things like KLINGON and DUCHESS, so I am going to have to do a lot of catalog cleanup, however
    the MVS alias has been moved to this user catalog from the master catalog so a lot of TK3 cleanup was obviously done before it got messy again… not that I am much better, my TK3 system has a separate HLQ of GAMES and every game dataset is prefixed with that to keep things clean, likewise all the compilers I installed are under SYS9.COMPILER where TK4- has them under unique prefixes like PL1 for example, making them hard to find. But I have noticed I put my GAMES alias into my software source catalog instead of a meaningful catalog myself, so I am lazy as well :-)
  • File placement: same blasted bad design as in TK3, key files (such as SYS1 and SYS2 prefixed files) are on PUBnnn volumes. That means I cannot copy across my PUBnnn volumes but will have to run JCL backup/restore jobs to move my files across
  • big improvement: It must have a useful apar/zap/usermod for syslog messages. If message logging to syslog is requested with a
    V SYSLOG,HARDCPY,STCMDS then the messages available to be viewed via Q for the syslog job are almost real-time; in TK3 messages were written to syslog a buffer-at-a-time so were normally quite a while behind; so this is a big improvement
  • using the SHUTDOWN command implemented into TSO when not using daemon mode (but still using the supplied scripts for manual start) causes a machine check. That TSO clist needs to be removed.
  • The default TSOAPPLS procedure used by the default TK4- TSO logon clists does something nasty I cannot figure out! The error on the console is that SYS9.CLIST is not available on dasd PUB0000. Yes, SYS9.CLIST is one of my datasets added to the login proc, not a supplied one, but IDCAMS catalog listings clearly show SYS9.CLIST is cataloged on SYS001 (and utils like RPF find it on SYS001 using dsname lookup). The error is below.

    IEC020I 001-1,SYSPROG,IKJACCNT,SYSPROC,270,PUB001,SYS9.CLIST
    

    It is something specific to that proc (and a tso listalc shows sys9.clist is allocated to tso OK from sys001, the tsoappls proc just cannot locate it). I changed the login proc for all my users not to use the tsoappls proc and they log in just fine using sys9.clist; but from the ready prompt typing in TSOAPPLS gives the same error (other clists, including those in sys9.clist, work just fine). My solution is to discard that tsoappls proc for all users and just use my own

  • full screen help doesn’t work properly. On TK3, from within “ACCOUNT” typing ‘help add’ shows the help for the add command for account. On TK4- the same command gives a blank help screen and I must manually type ‘ACCOUNT’ into the member field and ‘ADD’ into the subcommand field to get the help screen. While that may seem a minor inconvenience it is actually a major inconvenience for commands I seldom use… however from the TSO READY prompt ‘help account’ shows the help for account OK… that is an issue because, while it may be considered just a pain to have to exit the utility to get help for it, in actuality account has its own help that would be displayed if the full screen help program was not installed. The help intercept works in TK3 but not properly in TK4-. Not an issue for most users, but I can see myself finding it irritating
  • it has the usermod to allow blank lines in assembler source files
  • it has the usermod to make consoles start in roll-delete mode; my headless startup script issued the commands for that anyway but it’s a nice to have for most people
  • big improvement: in TK3 replacing a program with a new version in a linklisted library required an IPL to pick up the changed program; must be a useful apar/zap/usermod in TK4- as the behaviour has changed, now any new copy is available immediately

The only thing that was really frustrating me in TK3 was trying to figure out how to iogen CTC adapters. As TK4- already has those iogen’ed in that is a good reason for me to keep it,
although I haven’t found time to play with the CTC adapters yet.

Changes I had to make to make it similar to my running TK3 environment

  • DASD:
    • copies of all my personal DASD volumes have been taken and attached in the TK4- config file, a critical requirement as they have user catalogs I will need
    • created a new 3350 volser MDTSO1 and restored all my personal datasets onto that rather than on PUBnnn volumes; so with future TK4 updates I can just move that volume across to keep all my work
    • I need the MVSSRC and CBT files, have attached those dasd volumes
  • Catalogs:
    • imported the MVSSRC and CBT catalogs
    • imported all my custom dataset catalogs
    • created a new TSO user catalog on the new MDTSO1 volume mentioned above, and added my user aliases to that so they will not be impacted by any tk4- refreshes I might download
  • Datasets:
    • backup/restore all datasets owned by my custom prefixes/aliases/users from TK3 to the new TK4- system
    • update the SYS1.PARMLIB members for my customised LINKLIST, APFLIST, VATLIST etc
    • remember those old lineflow pictures everyone used to collect, copied those across. I may have the only copies left
    • I have recreated all the GDGs I use so my batch that needs them will run
  • Usermods:
    • TK4- modified the TK3 ZUM0003 usermod for IEECVXIT with ZJW0006. I had to merge all the ZJW0006 changes with my changes and SMP them in as a SUP of that. Rollback (SMP restore) tested OK; but because of the way the usermods (and prerequisites) up until zjw0006 were just superseding each other, rollback must be to the base OS level, and all usermods needed to get back to the
      zjw0006 have to be reapplied… no biggee, fully tested and documented in my jobs
    • It gave me the chance to change my SMP usermod that adds additional console display commands from a full program replacement to a much cleaner ‘proper’ SMP job that just changes the few lines needed; that implemented OK as well
  • Applications:
    • Added my MDDIAG8 and made sure HERCCMD is not in the linklist; I prefer source so prefer my program where possible. Will have to see what breaks (actually I’m not sure if HERCCMD was in TK4- to start with)
    • As part of including my dasd volumes and importing their catalogs I have all my SYS9 datasets available. MMPF, TAPEMAN3 and TASKMON needed for my automation needs are running ok
    • as I have installed every compiler available over the years, I will have to decide whether to re-install or copy those libraries as well, after finding out what is missing, as they are under various different dataset prefixes in TK4-. As an interim, since all my SYS9 datasets have been imported, that includes all my SYS9.COMPILER datasets so I haven’t lost anything while I try and see what TK4- provides
  • Security:

    • added the users I use via ‘account’ and updated the RAKF profile for the users and groups I use; all my users are now aliased to the new catalog I created on the new volume mdtso1 and the rakf profile updated to allow them to catalog user files in that catalog only
    • deny all on the default PROD batch defaulted to by RAKF and add my much tighter batch rules
    • delete all the default users from RAKF profiles, left them in account for now until the next TK4- update in case the updates need them
    • TK4- PDF mentions RAKF DIAG8CMD, probably for HERCCMD; removed all rules for DIAG8CMD and replaced them with the DIAG8 facility, as that is what my diagnose code eight program uses
    • TK4- has only 8 TSO terminals defined to VTAM. I need more, on my todo list
    • What NOT to do. This time when adding guest users via account I gave them an esoteric unit of GUEST, which did exist by the way. For some reason they could NOT see (via TSO LISTC and the RPF dataset list options; tso listc gives a catalog unavailable error) any datasets in my new SYS9.UCAT.TSOMID catalog (including their own prefixes)… BUT could quite happily look up, via the catalog, datasets in my other SYS9 user catalogs ????
      Anyway, redefined the guest users to account without a generic unit entry and they could access all catalogs again
  • As all my datasets were copied across and the catalogs imported my MMPF message automation STC is running OK and happily responding to messages, cancelling jobs, mounting tapes etc., plus the batch job scheduler is running ok
  • my startup/shutdown scripts can start/stop the environment under screen as I require, which allows the use of a working SPY and IMON console
  • added to the JES2PARM startup commands the command to log console activity to SYSLOG so it can be viewed via Q, and more importantly archived; syslog archiving is working
  • Changed the system SMF ID to what I use so I can swap it in with my live system
  • Found one CBT program I use that abends on TK4-, cleaned that up so it works on TK4-
  • One major issue I found: the normal BSPPILOT SHUTDOWN, SHUTFAST and SHUTDOWN members in SYS1.PARMLIB are ALIASed to STSTD, SDSTDFST and SDSTDNOW. It’s probably an RPF issue, but editing the normal members any TK3 user is used to editing (ie: SHUTNOW, which are aliases) is a waste of time as the changes made to an ALIAS member entry are not preserved over an IPL; you end up back with the original member contents. A nasty trap for people used to the TK3 system… and I see no point in why sys1.parmlib was polluted with additional members if the original names are expected to be used anyway
    This caused enough pain that I intend to remove all the aliases and rename the SDSD* members to the expected names when I get time to test the impacts; in the short term don’t edit the aliases, just the real members

Summary

TK4- (update 7) is stable, in that everything that works under TK3 can be made to work under TK4-, plus TK4- has a few new things for me to play around with.

I have replaced my TK3 system with TK4- as that has had no major detrimental effect on my use of it; apart from the minor detail that it has a lot more virtual dasd volsers, so my backup virtual tape pool ran out of scratch tapes and I added more.

People new to hercules and Turnkey3 should start with Turnkey3 even if they immediately afterward switch to TK4-, simply because TK3 walks you through installing MVS3.8J and that is an understanding you will probably need at some point.

One issue I did have on an old CentOS6.0 VM running an old hercules RPM was that the CTC adapter entries in the hercules configuration file errored. Not an issue if you keep your version of hercules up-to-date, but be aware that even though I recommend starting with TK3 it comes with a really old version of hercules, so if you start with TK3 you will need to update hercules before using TK4-.

Edits: 2016/02/18 added the new observed behaviour when a program is replaced in a linklisted library as that is a major improvement, for me anyway.

Update 2016/05/02, Fedora23 users are out of luck

TK4- comes with binaries for hercules that include the modification to implement the tcpip opcode instruction, and that are expected to run on most systems. That may get FTPD working… however the binaries do not work on Fedora23. That appears to be a Fedora23 packaging issue however: the TK4- supplied linux hercules binary complains about a missing library file (that is actually missing), but the package that is supposed to provide the missing file is installed… just without that key file that is supposed to be in the package.

[mark@vmhost3 tk4-minus]$ hercules -f mark/marks.conf
hercules: error while loading shared libraries: libbz2.so.1: cannot open shared object file: No such file or directory
[mark@vmhost3 tk4-minus]$ 
[root@vmhost3 init.d]# find / -type f -name libbz2.so.1
find: ‘/run/user/1000/gvfs’: Permission denied
find: ‘/run/user/42/gvfs’: Permission denied

[root@vmhost3 init.d]# dnf provides libbz2.so.1
Last metadata expiration check performed 0:55:44 ago on Mon May  2 16:52:53 2016.
bzip2-libs-1.0.6-17.fc23.i686 : Libraries for applications using bzip2
Repo        : fedora

bzip2-libs-1.0.6-19.fc23.i686 : Libraries for applications using bzip2
Repo        : updates

[root@vmhost3 init.d]# dnf install bzip2-libs
Last metadata expiration check performed 0:56:43 ago on Mon May  2 16:52:53 2016.
Package bzip2-libs-1.0.6-19.fc23.x86_64 is already installed, skipping.
Dependencies resolved.
Nothing to do.
Complete!
[root@vmhost3 init.d]# 

It is not in the bzip2-devel package either, bother.
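One untested guess, based on the “dnf provides” output above listing i686 packages while “dnf install” reports the x86_64 package as already present: the TK4- supplied binary is probably 32-bit, in which case explicitly installing the 32-bit library might supply the missing file…

dnf install bzip2-libs.i686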

Posted in MVS3.8J | Comments Off on For Turnkey3 users, there is now a TK4- available

Using Diagnose 8 to issue hercules commands from Turnkey3 MVS3.8J

One of the things any Turnkey3 user eventually asks is how to issue hercules commands from the MVS3.8J guest OS, primarily to mount tapes. It is generally asked by users like me who have automated the environment startup to the point that the hercules console is never needed; in my case, while it is accessible if needed, it requires me logging onto the remote machine running my setup to access it; a pain.

Jay Moseley has, in the FAQ section of his website, a page covering obtaining and installing the binary-only HERCCMD program which can be used to issue commands to hercules. I used that for quite a while and it works as documented. One important thing to note is that Jay’s documentation says enabling diag8 for hercules allows host .sh scripts to be run; that is only because older versions of hercules allowed that from the hercules console, which is no longer the case, as later versions of hercules have a separate configuration option that locks down host script execution. In case the defaults change you should explicitly disable script execution before allowing the guest to use diag8 however.
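From memory the relevant hercules configuration statements are along the lines of the following; treat this as a sketch and check the documentation for your hercules version, as the exact statement names and options have changed between releases…

DIAG8CMD ENABLE NOECHO    # allow the MVS guest to issue hercules commands via diagnose 8
SHCMDOPT NODIAG8          # but do not allow host shell (sh) commands via that interface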

However I’m always curious as to how things are done, so I just had to figure out what it was doing and went to the effort of creating a program to do the same thing; the major benefit to me being that I like having the source code to things I use if possible.

The IBM manual GC20-1807-7 VM370 System Programmers Guide Rel 6.4-81, available on the bitsavers.org site, has instructions on how to use diagnose operation eight to allow a guest VM to issue CP (hypervisor) commands and obtain the responses to those commands.

In my environment (any TK3 environment) MVS3.8J is the guest and the hercules emulator is the CP the guest is running on.
The HERCCMD program checks for an installed security product to test for authorisation to the program. While the majority of Turnkey3 users will have no security product installed, many will have taken the effort to install RAKF now that it is available on the CBT tapes at cbttape.org, so I have included a RACF RACHECK check for authority to the resource FACILITY DIAG8 (a custom facility easily handled by RAKF) to provide that function also. There is a toggle in the code that allows anyone wanting to remove all the security checks to do so, for Turnkey3 users without RAKF.

The code is not complex; it does however have to be assembled into an APF authorised library with AC=1 as it needs to switch into supervisor mode to issue the diagnose instruction.

If anyone is interested in it the latest stable version is available on github.

Minimal documentation, and access to any later version not yet stable enough to be pushed to github, is available on my website; but basically it provides the same functions as HERCCMD, so if you have been using that or have looked at the documentation Jay provided in his FAQ for HERCCMD my program is a drop-in replacement.

Posted in MVS3.8J | Comments Off on Using Diagnose 8 to issue hercules commands from Turnkey3 MVS3.8J

Windoze10 updates frozen

After over a month of windows 10 updates being stuck at 15% complete, using a combination of wireless and physical ethernet cable (I thought maybe it would not do updates over wireless, but that was not what was stopping it) I thought I had better have a look at the issue.

Why leave it so long, you ask? I have windoze10 as a dual-boot image on a laptop that boots by default into fedora; that is my only non-linux system and I hardly ever boot it into windoze… only occasionally to test the portability of some of my apps, so it was only an annoyance.

Anyway, looking into it it is not a new problem. I found a web post that discusses this issue and how to fix it for Win7, 8, 8.1 and 10… it is just another issue MS leave in the wild for decades. Another reason nobody should use windows.

Anyway the detailed procedure is in this post on stuck windows 10 updates.

I’m briefly highlighting the steps here as I have had issues with websites I have relied on with bookmarks for years suddenly going offline just when I need them; but for the detailed howto use the link above if it still works.

  • as an administrator at the command prompt
    1. net stop wuauserv
    2. net stop bits
    3. net stop wuauserv (I added the second stop, the first fails)
  • Delete all files under C:\Windows\SoftwareDistribution

You will probably have to reboot at least once to finish deleting the SoftwareDistribution files.

The post expects you to have to restart the services; in Win10 that just gets an error saying the service is already running, so in Win10 at least they restart on reboot.

But anyway, the procedure works. After over a month at 15% complete, after following this procedure update status went back to 0% complete, then within an hour was at 100% complete and prompting for the install/reboot.

Windoze showed no errors in the update procedure, it was just hung and apparently would have hung forever.

I’m not saying I have never had issues like this in Linux, but as I have been using linux for over a decade fixing issues like this is so second nature to me that I would probably never know I had an issue.

Windoze is supposed to be for people who do not need to know how to play with its internals; but seriously, would your grandmother search the web for how to fix this?… it seems to me windoze users are not getting security updates as many will have got “stuck” like mine did, making windoze an insecure platform.

Posted in windoze | Comments Off on Windoze10 updates frozen

Future personal posts have moved

This blog was getting rather full of personal posts when it should be filled with rambling thoughts on technology or software I was playing with, or complaining about.

So I have created a new blog under my personal section on this website for personal posts, found under the “My Personal Space” tab from the main website page of course… where this blog used to live. I have moved the tab for this blog to the main page, but you have probably already discovered that.

So this should be the last entry that shows up if you view this blog by category using the “Home Life” category. Future personal posts will be in the new personal blog site https://mdickinson.dyndns.org/php/wordpress_personal/ where they belong.

Posted in Home Life, personal | Comments Off on Future personal posts have moved

Cheap EPUB Publishing possibilities

Well I got a new play phone to do android development with, but that’s a long-term activity and the smartphone will probably remain in its box for a long while yet; for day-to-day use I still use my faithful 2G calls-only phone.

Having a smart phone I may actually unpack one day did remind me that I have been meaning to start changing all my documentation/manuals from PDF format to EPUB format at some point.

The bad news is that full EPUB3 authoring software (video extensions etc.) is still in the horribly expensive commercial-software-only arena.

The good news is that simple EPUB documents can be created using the world’s favourite free word publishing application, which of course is LibreOffice. That is done using extensions.

Some quick google searches showed me

  • Documents saved in native odt format can be converted to epub and mobi using calibre, lucky me I already have Calibre installed
  • LibreOffice has multiple extensions that can export to epub. Two candidates at the top of searches were
    • elaix, on the LibreOffice extension site
    • Writer2ePub, which is an Apache OpenOffice extension on the OpenOffice extension site that is usable with LibreOffice. Interestingly it does indicate Calibre will give a better conversion, which I assume is a gap they intend to try to close, so watch this one

I will probably start with Writer2ePub as someone has taken the time to create a post on using Writer2ePub with LibreOffice so I know this extension is being used in the wild.

The best news for me is that all the static PDF documentation I will be wanting to convert to EPUB has been produced using LibreOffice (ok, the older ones with OpenOffice), so all the source odt/odf files are available to allow me to update the PDFs anyway.

The only difference in the way I do things will be that when I need to update a manual’s source document due to changes I have made in the programs it documents, as well as the standard ‘export as PDF’ for the current website manuals I’ll also do an export as EPUB using one of the extensions.

In the short term, whether I end up using either of the two extensions, or just use Calibre to convert the source odt bypassing the extensions altogether, will depend upon which produces the cleanest EPUB needing the least manual cleanup.
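For reference, the Calibre route needs no GUI at all; its command-line converter can be pointed straight at the odt file (the filenames are just examples)…

ebook-convert mymanual.odt mymanual.epub
ebook-convert mymanual.odt mymanual.mobi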

So for simple EPUB books you can use LibreOffice. However as the EPUB3 standard allows for little things like embedded video and audio it is fairly obvious you will never be able to create complex EBooks using a word processor.

I will probably do it the hard way and script the building of ebooks. While, as noted above, all my static manuals are produced by LibreOffice now, this website also produces a lot of dynamically created PDF files using bash scripts, because I took the time to read the specs and work out how to do so on the fly without authoring software.
I will have to spend a bit of time reading the EPUB3 specs but as I eventually had a need to generate dynamic PDF files based on lots of different external sources I’m sure I will find the same need for EPUB files.
Plus using the extensions will probably still require cleanup using Calibre and I don’t want to have to use multiple tools to get a result.

But for the purposes of this post, if you have been using LibreOffice to create PDF documentation then using it to create EPUB format documentation is basically an extra button click, if you use the (totally unreviewed by me) extensions.

If you want the bells and whistles like embedding video into your ebooks it looks like that is still an expensive commercial-software-only solution at the moment.

Posted in Home Life | Comments Off on Cheap EPUB Publishing possibilities