Running the mvs3.8j container to play with using NJE38

I created the current version of the docker container image after watching a youtube video by Moshix on installing and using NJE38 on MVS3.8J. Rather than start multiple mvs38j systems on different servers and customise nje38 on each (ok, I did do that first) it made sense to simply containerise a mvs3.8j system so I could easily fire up multiple systems on the same server to play with nje38. As each container only needs 80Mb memory and 0.3 of a cpu to run you can have a very complex setup on a simple desktop this way.

As such, when running the container a number of environment variables must be passed to it to configure the NJE devices and generate the NJE38 configuration used by each mvs3.8j system.

It should be noted that this container has been updated to support the alternate paths available in the latest version of NJE38 (with the optional ROUTEVIAn_ALT=zzzzzzzz variables). I have added a considerations section at the end of this page discussing those as it can make things overly complicated; but then this container is for testing complicated configurations :-).

To configure and start NJE38 the container expects, at a minimum, four environment variables to be passed when it is started: '-e "CONTAINERNJENAME=xxxxxxxx" -e "MVSREMOTENAME1=xxxxxxxx" -e "MVSREMOTEIP1=xxx.xxx.xxx.xxx" -e "MVSREMOTEPORT1=nnnn"'. If provided, these are used to customise the hercules configuration file with the new remote nje node name and ip-address, and also to customise and submit a batch job after the system has started that installs the NJE38 configuration member with the new configuration and starts nje38.
If the parameters are not provided no NJE devices will be configured at the hercules device configuration level (so nje38 cannot be used), which is the default in the simple startup examples above.

Obviously it is up to you to ensure there is another NJE capable system (or container) for it to talk to.

It is capable of additional configuration via environment variables to add extra nje links and routes. The environment variables that may be used are grouped as below; the one restriction is that the first parameter group must be coded for any others to be correctly processed. Refer to the examples below to see how to use them; that will make more sense than just looking at the list below.

One more important thing. As NJE38 is not part of JES2 the CONTAINERNJENAME does not have to match the system id, which is important as it would be a pain to change the system id of a system in a container (start it, update the jes2 parms and smfid, shutdown, clpa and cold start jes2... containers are supposed to start quickly (grin, this one takes 5mins already)). Just make sure the names are unique for each node you provide these parms for.

Any given container can therefore have up to three direct links to other NJE nodes and is able to route to up to five NJE nodes behind those direct links. This should be more than enough to create a working 'play' environment to evaluate NJE38 for your use... or just create a huge mvs3.8j farm because you want to.
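For reference, this is the full set of variable groups as used in the examples on this page (pieced together from those examples, with n being the group number):

   CONTAINERNJENAME=xxxxxxxx        nje node name of this container
   MVSREMOTENAMEn=xxxxxxxx          direct link n (n=1 to 3): remote nje node name
   MVSREMOTEIPn=xxx.xxx.xxx.xxx     direct link n: remote ip-address
   MVSREMOTEPORTn=nnnn              direct link n: remote port
   ROUTABLEn=xxxxxxxx               route n (n=1 to 5): node reachable only via a routed path
   ROUTEVIAn=xxxxxxxx               route n: the directly linked node to route via
   ROUTEVIAn_ALT=zzzzzzzz           route n: optional alternate node to route via (latest NJE38 only)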

Running the docker image with a nje38 link defined, simplest form

After downloading the file, the simplest form of starting it is as below; replace the remote nje name with the 1-8 character name of your remote nje node and the remote ip-address with that of your remote nje node (which must know about the container host's ip-address, the port used, and that the container's nje name defaults to mid1).

docker run --memory=80m --cpus="0.3" --rm -d --name mvs38j1 \
   -p 3270:3270/tcp -p 1175:1176/tcp -p 3505:3505/tcp \
   -e "MVSREMOTENAME=MID3" -e "MVSREMOTEIP=192.168.1.179" -e "MVSREMOTEPORT1=1192" \
   localhost/mvs38j

Of course you need another mvs3.8j system (or container) at the target ip-address listening on port 1192, with a nje38 link definition pointing back at the container using the container's node name.
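If that remote system is simply another instance of this container image running on the 192.168.1.179 host, its matching run would look something like the sketch below (the back-link ip-address and port are placeholders for wherever the first container's published NJE port can be reached, not values from my setup):

docker run --memory=80m --cpus="0.3" --rm -d --name mvs38j2 \
   -p 3271:3270/tcp -p 1192:1190/tcp \
   -e "CONTAINERNJENAME=MID3" \
   -e "MVSREMOTENAME1=MID1" -e "MVSREMOTEIP1=xxx.xxx.xxx.xxx" -e "MVSREMOTEPORT1=nnnn" \
   localhost/mvs38j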

And of course the more complex examples you have been waiting for

Three containers with direct links to each other is relatively simple as long as you remember that the link devices used are 0090-0092 (on ports 1190-1192), so defining two links on each container will use ports 1190 and 1191 within the containers, which you need to map correctly with the docker port redirection syntax... and ensure the MVSREMOTEPORTn settings you use refer to the external mapping :-). Note also that we must bump up the 3270 port mapping as we still want TSO access into the containers and only one container can own host port 3270.

Note that we go via the host network. As the internal docker bridge network does not assign ip-addresses to the containers until they are created we cannot know in advance what ip-addresses will be assigned, so we just publish the ports on the host and use the host address.

#              MIDGW 
#        (1190)     (1191)
#          /           \
#         /             \
#        /               \
#    (1192)            (1194) 
#   MID1 (1193)----(1195) MID2
#
# Important: change 127.0.0.1 to the real ip of your host !
# Important: run 'firewall-cmd --permanent --zone=trusted --change-interface=docker0'
# and restart docker; if docker0 is not in the trusted zone containers will
# not be able to access external addresses.
#
docker run --memory=80m --cpus="0.3" --rm -d --name mvsmidgw -p 3270:3270/tcp \
   -p 1190:1190/tcp -p 1191:1191/tcp \
   -e "CONTAINERNJENAME=MIDGW" \
   -e "MVSREMOTENAME1=MID1" -e "MVSREMOTEIP1=127.0.0.1" -e "MVSREMOTEPORT1=1192" \
   -e "MVSREMOTENAME2=MID2" -e "MVSREMOTEIP2=127.0.0.1" -e "MVSREMOTEPORT2=1194" \
   localhost/mvs38j
docker run --memory=80m --cpus="0.3" --rm -d --name mvsmid1 -p 3271:3270/tcp \
   -p 1192:1190/tcp -p 1193:1191/tcp \
   -e "CONTAINERNJENAME=MID1" \
   -e "MVSREMOTENAME1=MIDGW" -e "MVSREMOTEIP1=127.0.0.1" -e "MVSREMOTEPORT1=1190" \
   -e "MVSREMOTENAME2=MID2" -e "MVSREMOTEIP2=127.0.0.1" -e "MVSREMOTEPORT2=1195" \
   localhost/mvs38j
docker run --memory=80m --cpus="0.3" --rm -d --name mvsmid2 \
   -p 3272:3270/tcp -p 1194:1190/tcp -p 1195:1191/tcp \
   -e "CONTAINERNJENAME=MID2" \
   -e "MVSREMOTENAME1=MIDGW" -e "MVSREMOTEIP1=127.0.0.1" -e "MVSREMOTEPORT1=1190" \
   -e "MVSREMOTENAME2=MID1" -e "MVSREMOTEIP2=127.0.0.1" -e "MVSREMOTEPORT2=1193" \
   localhost/mvs38j

Of course it is even easier to fire up the three containers and let the end two have links only to a gateway node, using routing for the end two to reach each other. That requires fewer links (and network ports) than having direct links between every node. It is simple as long as you remember to map the ports correctly: within the containers they are 1190-1192, but on the docker host you need to keep incrementing the host ports used in the mappings to avoid conflicts.

#
#   MID1 (1192) ------- (1190) MIDGW (1191) ------- (1193) MID2
#
# Important: change 127.0.0.1 to the real ip of your host !
# Important: run 'firewall-cmd --permanent --zone=trusted --change-interface=docker0'
# and restart docker; if docker0 is not in the trusted zone containers will
# not be able to access external addresses.
#
docker run --memory=80m --cpus="0.3" --rm -d --name mvsmidgw -p 3270:3270/tcp \
   -p 1190:1190/tcp -p 1191:1191/tcp \
   -e "CONTAINERNJENAME=MIDGW" \
   -e "MVSREMOTENAME1=MID1" -e "MVSREMOTEIP1=127.0.0.1" -e "MVSREMOTEPORT1=1192" \
   -e "MVSREMOTENAME2=MID2" -e "MVSREMOTEIP2=127.0.0.1" -e "MVSREMOTEPORT2=1193" \
   localhost/mvs38j
docker run --memory=80m --cpus="0.3" --rm -d --name mvsmid1 \
   -p 3271:3270/tcp -p 1192:1190/tcp \
   -e "CONTAINERNJENAME=MID1" \
   -e "MVSREMOTENAME1=MIDGW" -e "MVSREMOTEIP1=127.0.0.1" -e "MVSREMOTEPORT1=1190" \
   -e "ROUTABLE1=MID2" -e "ROUTEVIA1=MIDGW" \
   localhost/mvs38j
docker run --memory=80m --cpus="0.3" --rm -d --name mvsmid2 \
   -p 3272:3270/tcp -p 1193:1190/tcp \
   -e "CONTAINERNJENAME=MID2" \
   -e "MVSREMOTENAME1=MIDGW" -e "MVSREMOTEIP1=127.0.0.1" -e "MVSREMOTEPORT1=1191" \
   -e "ROUTABLE1=MID1" -e "ROUTEVIA1=MIDGW" \
   localhost/mvs38j

So it is possible to quickly set up quite a complicated play network should you wish.
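To check the routed setup above is working, the same commands used in the docker-compose example further down apply; for example from an x3270 session to host port 3271 (mvsmid1), logon guest1/guest, option 3.7 then O to the console, and issue:

   /F NJE38,D NODES                should show the link to MIDGW
   /F NJE38,D ROUTES               should show the route to MID2 via MIDGW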

It is even easier with "docker-compose". Docker compose creates a private network for the project (named after the project's directory) instead of needing the default bridge or host network, so each container can be referenced by name and each can listen from port 1190; do not try that with kubernetes however. For docker you could also manually create a bridge network and assign the containers to it, but this is not a docker tutorial.
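Should you want to try the manual bridge approach anyway, the idea is simply along the lines of the sketch below (the network name mvsnet is arbitrary); containers attached to a user-defined bridge can resolve each other by container name, so the MVSREMOTEIPn values can then be the container names just as in the docker-compose example:

docker network create mvsnet
# then add '--network mvsnet' to each of the docker run commands above and use the
# container names (mvsmidgw, mvsmid1, mvsmid2) as the MVSREMOTEIPn values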

Despite the fact docker-compose is not in most repositories and needs to be downloaded from github, it is an easy way of playing with this container image. Here is an example docker-compose.yaml file. Note that it only opens one host port (3270) as that is all you need to test nje38 routing.

# use 'docker-compose -f docker-compose.yaml up --detach'
#
# within the network created by docker-compose for this file
# the hosts can access each other by name so that is used in remoteip
#
# port 3270 will be opened for container mvsmid1 only so you can
# x3270 to that port to issue nje38 commands to verify connectivity
# to the other hosts as noted below.
#
# mvsmidgw has two links so will listen on 1190 and 1191 (devices 090 and 091)
# the other two have one link so will only listen on port 1190 (device 090)
# This emulates the below, mid1 can only contact mid2 via the gateway node.
#   MID1 (1190)<---->(1190) MIDGW (1191)<---->(1190) MID2
#
# From an x3270 session on mvsmid1 you can logon guest1/guest, 3.7 then O to console
#   /F NJE38,D NODES                will show link to midgw
#   /F NJE38,D ROUTES               will show route to mid2 via midgw
#   /F NJE38,CMD MID2 D NODES       will issue command on mid2 and show link to midgw
#   /F NJE38,CMD MID2 D ACTIVE      will show active jobs on system mid2
#   /F NJE38,CMD MIDGW D ACTIVE     will show active jobs on system midgw
#   /F NJE38,MSG MID2 GUEST1 HELLO  write a hello message to user guest1 on mid2
#   /F NJE38,MSG * GUEST1 HELLO     write a hello message to user guest1 on the current host
# Command stacking (cmd mid1 cmd mid2 d nodes) should not be used; it will break if responses are long
#
#   New in the latest version of NJE38
#      /F NJE38,D FILES           spool file % used and maybe the files queued
#      /F NJE38,D nnn S           get info on queued file nnn (nnn is a number from the D FILES list)
#      /F NJE38,C nnn             cancel/delete file nnn from the spooldata file (nnn is a number from D FILES)
#      Possibly more, it does not have a help
#   New TSO commands (simplest form; manual on the nje38 site (and provided with tk4rob))
#      TRANSMIT                   TRANSMIT node.userid DA(filename) [PDS|SEQ] (default SEQ)
#      RECEIVE                    RECEIVE nnn [volser(vvvvvv)] [unit(uuuu)], nnn from 'nje38 d files'
#
# If you have access to the TSO ready prompt 
#   'nje38 
#
version: '2'

services:

  mvsmidgw:
    image: localhost/mvs38j
    container_name: mvsmidgw
    mem_limit: 80m
    cpus: 0.3
    environment:
      - CONTAINERNJENAME=MIDGW
      - MVSREMOTENAME1=MID1
      - MVSREMOTEIP1=mvsmid1
      - MVSREMOTEPORT1=1190
      - MVSREMOTENAME2=MID2
      - MVSREMOTEIP2=mvsmid2
      - MVSREMOTEPORT2=1190

  mvsmid1:
    image: localhost/mvs38j
    container_name: mvsmid1
    mem_limit: 80m
    cpus: 0.3
    ports:
      - 3270:3270
    environment:
      - CONTAINERNJENAME=MID1
      - MVSREMOTENAME1=MIDGW
      - MVSREMOTEIP1=mvsmidgw
      - MVSREMOTEPORT1=1190
      - ROUTABLE1=MID2
      - ROUTEVIA1=MIDGW

  mvsmid2:
    image: localhost/mvs38j
    container_name: mvsmid2
    mem_limit: 80m
    cpus: 0.3
    environment:
      - CONTAINERNJENAME=MID2
      - MVSREMOTENAME1=MIDGW
      - MVSREMOTEIP1=mvsmidgw
      - MVSREMOTEPORT1=1191
      - ROUTABLE1=MID1
      - ROUTEVIA1=MIDGW

The reason you would not use a docker-compose file like the above with kubernetes is that under K8s all the containers would be put in a single pod, and containers in a pod share the same network stack (rather than each container having its own port range, the entire pod shares one port range), so only one of the containers could listen on 1190 and you are back to trying to map ports to avoid conflicts; you would really need to use a separate pod for each container to get it to work.

A quick walkthrough of using the docker compose file is below.
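In its simplest form that is just the following (the up command is the one from the comments in the file above; ps and down are standard docker-compose commands):

docker-compose -f docker-compose.yaml up --detach     # start all three containers
docker-compose -f docker-compose.yaml ps              # confirm they are running
docker-compose -f docker-compose.yaml down            # stop and remove them when done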

For reference, one source for NJE38 is https://github.com/moshix/nje38mvs which includes the usage manual you will need in order to find all the available NJE38 commands (use the v200m zip file); and you should look at the tutorial by moshix at https://www.youtube.com/watch?v=8_esBksImCg for a quick overview of it.
It is also important to note it is not implemented as part of JES2 but as a separate started task with no interface to jes2, so JCL statements such as '/*ROUTE XEQ|PRINT|PUN nnn' are not supported.
It does allow for file transfer between hosts, by batch job only, and the receiving batch job must specify the DCB information for 'new' datasets (programs nj38xmit|nj38recv, reference: pages 18-19 of the user guide); also files with records that are not 80 bytes need to be converted using xmit370|recv370 as wrappers around the nje38 utilities, which also require dcb info to be coded. So file transfers are really only usable by scheduled batch jobs rather than by interactive users.
The latest version 2.3.0 now implements TSO XMIT and RECEIVE commands, and the docker container has been updated to that version. This is a wonderful piece of work to get it to the functional state it is in on the mvs3.8j OS it was never designed to run on, so it is worth installing and keeping an eye on the project.
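As an illustration of the TSO command syntax shown in the docker-compose comments earlier (the dataset name and the volser/unit placeholders here are made up, not values from my systems):

   TRANSMIT MID2.GUEST1 DA(TEST.CNTL) PDS
   (then on MID2, find the queued file number nnn with 'NJE38 D FILES' and)
   RECEIVE nnn VOLSER(vvvvvv) UNIT(uuuu)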

Important notes and troubleshooting tips

I will not repeat those here; use the troubleshooting notes on the main page describing this image.

Considerations for using the alternate routing paths

You can get into a bit of a mess. It can get quite complicated, and the manual does warn it is possible to get messages going around in an endless loop if you are not careful.

But basically... if you make alternate paths available to routes you must make sure traffic also has a return path via alternate routes around whatever link is down, preferably avoiding any loop.

Another consideration is that you cannot add routes to nodes that have a direct link. The direct link path will always be attempted; you can only route to remote nodes with no direct link (I have tried: adding a primary route to a directly connected node with an alternate route to that node via a second node does not work when the direct link is down).

The alternate routes are only useful if a node is completely down.
Assume: nodes GW1 and GW2 can both route between CLIENT1---GW1---CLIENT2 and CLIENT1---GW2---CLIENT2, with GW1 the primary route and GW2 the alternate.
If the link CLIENT1---GW1 is down, a command on CLIENT1 such as "F NJE38,CMD CLIENT2 D NODES" would, as the primary link is down, follow the path CLIENT1---GW2---CLIENT2; but if the link CLIENT2---GW1 is still up the response will go CLIENT2---GW1---[error, path to CLIENT1 down].
As adding an alternate route for a directly linked node does not work, on GW1 you cannot add an alternate route to CLIENT1 (to give an alternate return path via GW2) even if there was a direct link between GW1 and GW2.
However, if node GW1 was completely down (or the links to client1 and client2 were stopped on GW1), the alternate path would be used in both directions and work as expected.
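In terms of this container's environment variables, the CLIENT1 definitions for that layout would be along the lines of the snippet below (node names are the ones from the example above; ip-addresses and ports are placeholders):

   -e "CONTAINERNJENAME=CLIENT1" \
   -e "MVSREMOTENAME1=GW1" -e "MVSREMOTEIP1=xxx.xxx.xxx.xxx" -e "MVSREMOTEPORT1=nnnn" \
   -e "MVSREMOTENAME2=GW2" -e "MVSREMOTEIP2=xxx.xxx.xxx.xxx" -e "MVSREMOTEPORT2=nnnn" \
   -e "ROUTABLE1=CLIENT2" -e "ROUTEVIA1=GW1" -e "ROUTEVIA1_ALT=GW2" \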

This is getting a bit too complex for examples, as examples depend on whether you are using docker, docker-compose, kubernetes etc. But I have played with it enough to recommend you play with it :-).

Below is a snippet from one of my docker start commands, with node links to two servers capable of routing in both directions.

   -e "CONTAINERNJENAME=HAWK" \
   -e "MVSREMOTENAME1=MIDGW" -e "MVSREMOTEIP1=192.168.1.187" -e "MVSREMOTEPORT1=1193" \
   -e "MVSREMOTENAME2=VMHOST3" -e "MVSREMOTEIP2=192.168.1.179" -e "MVSREMOTEPORT2=1193" \
   -e "ROUTABLE1=VOSPREY4" -e "ROUTEVIA1=MIDGW" -e "ROUTEVIA1_ALT=VMHOST3" \
   -e "ROUTABLE2=VOSPREY5" -e "ROUTEVIA2=MIDGW" -e "ROUTEVIA2_ALT=VMHOST3" \

VOSPREY4 and VOSPREY5 have routes to HAWK with the same primary route MIDGW and alternate VMHOST3 as defined for HAWK. Both MIDGW and VMHOST3 have direct links to HAWK, VOSPREY4 and VOSPREY5 (the port used in the above is 1193 as my 'gateway' servers already have other links on 1190, 1191 and 1192). If either MIDGW or VMHOST3 is completely down routing does work as expected.
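For completeness, the matching definitions on VOSPREY4 would be along these lines (the ip-addresses and ports are placeholders here, not the real values from my network):

   -e "CONTAINERNJENAME=VOSPREY4" \
   -e "MVSREMOTENAME1=MIDGW" -e "MVSREMOTEIP1=xxx.xxx.xxx.xxx" -e "MVSREMOTEPORT1=nnnn" \
   -e "MVSREMOTENAME2=VMHOST3" -e "MVSREMOTEIP2=xxx.xxx.xxx.xxx" -e "MVSREMOTEPORT2=nnnn" \
   -e "ROUTABLE1=HAWK" -e "ROUTEVIA1=MIDGW" -e "ROUTEVIA1_ALT=VMHOST3" \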
(Weird node names you think? As I spin up containers all over the place for testing I have started using server names so I know what is where, just sticking numbers on the end where multiple containers run on one machine.)