A container image to run mvs3.8j under hercules
If you want to play with mvs3.8j without installing anything, this is your answer. I have used the container images here under both Docker and Minikube.
It is important to note that there is a fully supported container image, MVS/CE (community edition), which I briefly covered on the obtaining MVS3.8J page and which may better suit your needs.
My container image here was primarily created to quickly spin up complex NJE38 networks to play with, so it is specific to my needs. I have also found it more convenient for testing disruptive things: stopping/starting a container to get back to a known state is easier than trying to remember, before deleting the shadow files, whether I had updated anything important in a shadow file on a system I had been using.
The latest container image provided here is based on TK4ROB 20230914, with a container OS of debian:12-slim.
It has all the DLIB volumes plus all the CBT volumes, mainly because, as noted above, I use it for testing things so I need those; but it does make it a large image. OK, a huge image: it is now over 800MB. As I use this for testing I need all the MVS source and SMP volumes. I could probably drop the four CBT dasd volumes when building the container image, but have not done so at this time.
I have made a lot of changes to that base image; the major changes I have made are covered here.
Remember that, as this runs as a container, any changes you make are not persistent; it should therefore only be used to 'play' with the system. If you like what you see, you should look at some of the persistent options, such as installing TK3 from scratch or installing TK4- onto your system.
For your own use you could omit the --rm in the run command and be able to stop/start/restart the existing container, which will preserve any changes you make as long as you do not delete the container; but the logs, disk growth and printer output will gradually consume space.
You must use "docker" (or minikube) to run containers using these images; there are documented issues (documented as podman issues) with using podman to run any container that needs network access as a non-root user.
These are not official distribution images for hercules, mvs38j or any of the utilities provided by those tools. For all the source and applications on these images you should always refer to the original distribution sites documented in the obtaining hercules and obtaining an OS pages of this site.
I am not sure of the policy on redistributing hercules binaries, but as TK3 and TK4- both do so, I am happy to provide a working container rather than needing a directory mapped to a host filesystem just to locate the binaries and library files.
It is also important to note that you should wait until the container status has changed from "starting" to "healthy" (about 5 minutes) before connecting a TSO session to it. That is simply because the startup script has a lot of delays coded into it to ensure important things happen in the correct order. If you start a session too soon you may grab one of the 3270 consoles instead of a TSO session, which of course would prevent the startup completing correctly.
The docker container will run hercules with UID 6001; that is important to know if you intend to bind mount any of the filesystems. I personally like to build containers using UIDs outside the range of existing host system users, and add a user for that UID on a host as needed. This avoids the issue seen in most containers, which just use the default UID of 1000, meaning any user with that UID on the host can 'kill' any process started by the container, and probably will if they do not recognise it as something they started (happens a lot).
The ports you will open at a minimum are 3270 for TSO and optionally 3505 if you intend to submit JCL to the socket reader. Additional ports are needed if you want to use NJE38 or ctcserver from other hosts, but they are covered separately.
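As a sketch of using the socket reader (assuming the 3505 port mapping from the run examples below; the job card itself is a hypothetical example, adjust it for your system), you can push a JCL deck straight into the reader with netcat:

```shell
# hypothetical two-line job: a job card plus a do-nothing IEFBR14 step
cat > hello.jcl <<'EOF'
//GUEST1A  JOB (ACCT),'HELLO',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IEFBR14
EOF
# submit it to the socket reader mapped to host port 3505
nc -w 5 localhost 3505 < hello.jcl || echo "socket reader not reachable"
```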
The simplest way of using it, a standalone single mvs3.8j system
Running as a single system using Docker
The simplest way of starting the container is as below. Note that the '-e "MVSTZOFFSET=+1200"' option is entirely optional; it just sets the timezone offset from UTC for the time used and displayed by the MVS3.8J system (the containers themselves run on UTC). That is a new environment variable I added in Feb 2023 when I finally got annoyed at daylight savings throwing the hardcoded offset out; the default is +1100 for NZ standard time, so you will probably want to override it.
docker run --memory=80m --cpus="0.3" --rm -d --name mvs38j1 \
    -e "MVSTZOFFSET=+1200" \
    -p 3270:3270 -p 3505:3505 localhost/mvs38j
Or, if you want printer output on your local filesystem, you can use the below; the example will put print output from the container into the host directory structure under /var/log/hercprt.
mkdir -p /var/log/hercprt            # top level, normal printer output here
mkdir -p /var/log/hercprt/prt00e     # for dev 00E pipe printer used by print class P
mkdir -p /var/log/hercprt/prt00e/job # unique per JOB text and PDF output from print class P
mkdir -p /var/log/hercprt/prt00e/stc # unique per STC text and PDF output from print class P
# container userid is uid 6001 which will not exist on most systems
# so directories must be traversable hence chmod 777
find /var/log/hercprt -type d -exec chmod 777 {} \;
docker run --memory=80m --cpus="0.3" --rm -d --name mvs38j1 \
    -e "MVSTZOFFSET=+1200" -p 3270:3270 -p 3505:3505 \
    -v "/var/log/hercprt:/home/mark/hercules/tk4-minus/prt" \
    localhost/mvs38j
This is what you would use if you just want to play with mvs3.8j; you can skip directly to the end of this page to download the image.
A quick walkthrough of using the docker image as a standalone mvs38j system is below. Note: since this video was created, health checks have been added to the image, so the initial state is "starting" and the state changes to "healthy" when it is ready; that way you will know when it can be used.
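If you script around the container, you can wait for the healthy state instead of sleeping a fixed time. A minimal sketch, assuming the container name mvs38j1 from the run examples above (wait_healthy is my own helper name, not something provided by the image):

```shell
#!/bin/bash
# poll docker's health status until the container reports "healthy"
wait_healthy() {
    local name="$1" status
    for _ in $(seq 1 60); do    # up to ~10 minutes at 10s per check
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        if [ "$status" = "healthy" ]; then
            echo "$name is healthy - safe to connect a TSO session"
            return 0
        fi
        sleep 10
    done
    echo "$name did not become healthy" >&2
    return 1
}
# usage: wait_healthy mvs38j1
```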
Running as a single system using Kubernetes (Well MiniKube anyway)
Obviously it is a lot more complicated to run the image under Kubernetes, and this has only been tested under minikube.
The major differences from a usability perspective in using minikube rather than docker are
- you can't easily bind mount to the host filesystem, so if you want printer output stored on the host to use/archive/print, as in the second docker example above, you cannot have it without a lot of work (correction: using an NFS mount for the output is trivial to do, but a bit of overkill). Having all the output retained within the container filesystem can make it difficult to manage space
- the example shown here uses kubectl to forward the TSO port (tying up one terminal session) and does not provide the card reader port. You would really have to set up ingress ports and external IPs if you wanted to run this for anything other than a short time
However, if you already have minikube installed, don't want to install docker (although personally I would recommend only using minikube with the docker engine, so you should have it), and want to run it under MiniKube instead of docker, here is an example yaml file for minikube, with comments at the start showing how it can be used.
Important notes and troubleshooting tips
Please note the container will take around five (5) minutes to initialise, as the startup script for hercules and mvs38j
has quite a few delays in it to ensure things stabilise and start in the correct order. This is so that the MVS console commands
issued by the script to do such critical things as switching the consoles to roll-delete mode are not issued until the system
has actually finished the IPL and made the consoles available. (As of the latest image there are 'sleep' commands totalling 201 seconds,
as well as the time taken to run each command.)
You should wait at least five minutes before starting a TSO session to the container, to avoid accidentally grabbing one of the console sessions and causing system initialisation problems; if you connect to terminals 010 or 011 you have not waited long enough and have broken the startup. Note: a "docker container list" will show when it is "healthy", so simply do not try to connect while it is in a "starting" state.
You are welcome to give it more CPU resource but that doesn't make much difference.
Once the container is running you can log in to troubleshoot it with a normal docker exec; I use something like the script below to make it easier than having to locate one container out of many running on a busy system.
#!/bin/bash
containerid=$(docker ps | grep mvs38j | awk '{print $1}' | head -1)
if [ -n "${containerid}" ]
then
    docker exec -it "${containerid}" /bin/bash
else
    echo "The mvs38j container does not appear to be running"
fi
Of course, if running under minikube the exec command is different; an example of how to do that is in the comments of the example yaml file for minikube.
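For reference, the minikube equivalent looks something like the sketch below (the pod name mvs38j is an assumption; check yours with 'kubectl get pods'):

```shell
# find the pod, then exec a shell in it
kubectl get pods
kubectl exec -it mvs38j -- /bin/bash
# the TSO port-forward mentioned earlier looks like this
kubectl port-forward pod/mvs38j 3270:3270
```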
Troubleshooting notes: if you have issues starting the application, log in to the container with the docker exec command above (or the kubectl one if using minikube) and
- 'ps -ef | grep sleep'; if there is still a sleep task running under userid mark, just wait, as startup is still running
- 'su - mark' and 'cd /home/mark/hercules/tk4-minus/log' then 'tail -10 hardcopy.log'. If there is anything in the log, the system has started and should be available. If there is nothing in the log, or the system is still not responding correctly, do the next check
- still as user mark, issue 'screen -ls' and check there are four detached screen sessions running; if there are not, something went wrong in the startup sequence timeouts, so stop/start the container again
- if there are four screen sessions, issue the command 'screen -r c3270A' to pull up the master console. If there is a message on it saying 'enter system parameters' then the sleep delays used in the startup script are too low for your system (unlikely, as I added an extra 30 seconds of wait I needed for my heavily loaded system); you can either just hit the [enter] key, or reply 'R 00,CPLA' and hit the [enter] key, to complete the mvs3.8j startup. Use ctrl-d to detach from the screen session. If there was no message on the screen, proceed to the next step
- use the command 'screen -r c3270B' to access the second system console. If there is a message on the screen saying 'enter system parameters', there was not enough of a delay between starting the two console sessions and the second has become the master; just enter the commands in the previous step to start the mvs3.8j system. Use ctrl-d to detach from the screen session
- if there were no messages on either screen, use the command 'screen -r hercules' to attach to the hercules interface itself. If it appears to be doing nothing, enter the command 'ipl 148'. If the command is accepted, go back to the c3270A step above as you will have to manually enter the system parameters. If you have to perform this step your system is probably too heavily loaded to run this container properly; try adjusting the run command to use more than '0.3' of cpu resource
- finally, if everything looked OK, still as user mark, use the command 'c3270 localhost:3270' to start a TSO session from within the container; if that connects to a TSO session then your issue is firewall related and external to the container
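The screen-session check above can be wrapped in a tiny helper if you do this often (a sketch; count_detached is my own name, and the expected count of four matches the startup described above):

```shell
#!/bin/bash
# count detached screen sessions for the current user;
# a healthy startup of this container leaves four of them
count_detached() {
    screen -ls 2>/dev/null | grep -c '(Detached)'
}
# usage (inside the container, as user mark):
#   [ "$(count_detached)" -eq 4 ] || echo "startup incomplete"
```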
MVS3.8J image system access notes
- TSO access is available on port 3270 on the host, if you used the port redirect '-p 3270:3270' shown above
- TSO user access is via users GUEST1-GUEST3 using password "GUEST"; plus, from the Deb12 image onward, I have left the HERCnn users available with their default access
- RAKF is used to lock down the system; to change anything in non-guest datasets you will need an admin userid. The system admin userid is SYSPROG with password MVS38J
- from TSO, GUESTn users can access the console using IMON option O, and the admin users can also use spy, so you can issue console commands and see the responses from your TSO sessions; you do not actually need to access the hercules screen of consoles from within the container except for troubleshooting
- as well as the print classes mentioned above, print class H or T is the held output class (actually any print class other than A,B,G,L,P,Z,8 is held output), which can be viewed through TSO using Q (queue) or RFE option 3.8. Print class P will write each individual job as both a PDF and a text output file via a pipe printer script, so it is easier to find individual job output than having to search through the default single printer output file that all jobs are appended to... to be useful you would of course want to map the prt directory to a local filesystem to access the output
Obtaining this docker image
The images providing this are docker image save files that can be loaded into docker (or minikube). To obtain the image and load it into Docker simply use the commands below.
There are three images available.
The F33 one has been tested on Docker (works on host OSs up to Alma8.7 and Debian11) plus under Minikube; it does not work with a host OS of Alma9.2. It is just retained for machines with older host OSs.
I think, from memory, it is based on TK4- update 08; it is NOT kept up to date anymore.
The Deb12 ones I will change without warning, as these are the ones I try to keep up to date, and they do have issues. Also, neither of the Deb12 ones has yet been tested with Minikube, but they should work with it.
The Deb12 ones both contain the TK4ROB (tk4sys) 20230914 release with my changes added.
It is the only release with Wally's ISPF installed, so you may want one of those just to use that.
But the tk4rob release is also a bit buggy, so those issues are of course in the container.
The main issue is that RPF will abend a lot, but it is needed to fix members that the background
utility used by ISPF (RFE?) sometimes corrupts during editing; apart from that it is perfectly usable.
- docker_image_mvs38j_nje_f33.tar.gz (container OS is Fedora33, mvs3.8j is based on TK4- update 08 with my updates). Hercules is spinhawk. I no longer update this so it is missing fixes I apply to the ones I do use
- docker_image_mvs38j_nje_deb12_spinhawk.tar.gz (container OS is Debian12, mvs3.8j is based on TK4ROB update of 20230914, with my updates). Hercules is Spinhawk.
- docker_image_mvs38j_nje_deb12_hyperion.tar.gz (container OS is Debian12, mvs3.8j is based on TK4ROB update of 20230914, with my updates). Hercules is Hyperion. Best avoided, see the notes below
Notes: it seems that when hyperion is compiled it obtains some hardware dependencies from the build machine (ie: compiled in a Debian12 KVM instance on my Debian12 build machine it will run in a container on that machine, but will crash in a container on another machine... the containers were run in Debian12 KVM images, the qcow2 disk image was replicated to the two different Debian12 host machines, and the same XML was used to define the machines; it ran on the build machine but crashed on the other). It was compiled from the latest github source of 30Jul2023 on Debian12.
Due to that issue you should use the spinhawk one, and I will probably stop building hyperion containers altogether as I do need to move VMs between host machines.
Download sizes:
672150838 bytes  obsolete      docker_image_mvs38j_nje_f33.tar.gz
756252022 bytes  Feb  3 17:30  docker_image_mvs38j_nje_deb12_hyperion.tar.gz
726990197 bytes  Feb  3 17:31  docker_image_mvs38j_nje_deb12_spinhawk.tar.gz
Example install below; then run with the commands right at the top of this page.
(Yes, you must use wget. I did have a forward/proxy URL to my docker repository, but people
immediately started using my webserver as an open relay, so I disabled that. Only the static,
not necessarily latest, images are available here now.)
wget https://mdickinson.dyndns.org/hercules/downloads/docker_image_mvs38j_nje_deb12_spinhawk.tar.gz
gunzip docker_image_mvs38j_nje_deb12_spinhawk.tar.gz
docker image load -i docker_image_mvs38j_nje_deb12_spinhawk.tar
# or for minikube see the Kubernetes notes, as there is more to it than 'minikube load image.tar'
One important note: by default every example here looks for the image name localhost/mvs38j:latest, so you may want to do a "docker image list" after loading it, note the imageid, and use "docker image tag imageid localhost/mvs38j:latest" (if not set that way already) to make it easier to cut/paste any of the example scripts.
More advanced uses of this container image
There are a lot of additional container environment variables that can be used to configure and start additional features. If you are interested in those, which is probably the only reason you would look at my container rather than the community edition one, the additional features available in my container are covered below.
Playing with NJE38
I created my first MVS3.8J container after watching a youtube video by Moshix on installing and using NJE38 on MVS3.8J.
Rather than starting multiple mvs38j systems on different physical or VM servers
and manually customising NJE38 on each (ok, I did do that first), it made sense to simply containerise
a MVS3.8J system so I could easily fire up multiple copies on the same server to play with
different configurations of NJE38 just by changing container environment variables.
Complex NJE38 networks, with their links and routes, can be thrown up and down easily this way.
As each container only needs 80MB of memory and 0.3 of a CPU to run, you can have a
very complex setup on a simple desktop this way.
I have been playing with two ways of doing it, documented on their own separate pages linked below to avoid making this page overly complicated.
- Using Docker, which is the easiest and works perfectly. I have details and examples of using docker to run this image as multiple NJE connected systems should you want to play with NJE38
- Using Kubernetes (MiniKube) to run the images as multiple working NJE systems, which is a bit more complicated to set up. There is a walkthrough of running the images under MiniKube
CTCSERV using the CTCE tcpip interface
Definitely not my work; I would not have the patience. But brilliant work. This can be considered an undocumented feature of the container, in that by default it is not used; I need to revisit it so it starts under a userid (yet to be added to my system) with virtually no access, and make it do something I may find useful.
This was placed into the container after watching another of Moshix's videos, covering a telnet server for mvs3.8j. The video is slightly mistitled, as it is not a telnet server, but it does provide another interface to mvs3.8j.
Refer to the documentation on how to configure the CTCSERV process, what it is, and where to obtain it, to decide if it is something you may want to play with.