Running the container image under Kubernetes (minikube)

This is a quick walkthrough of how to get the demo container image I provide running under minikube as multiple mvs3.8j systems connected via NJE38.
Of course you could also run it as a single system without NJE links if you wanted.

It does not cover all the options available to the container for using NJE38; for all the parameters available look at the running under docker page.

It assumes you already have minikube installed and running. If you have not, it can be set up easily as below. Do not run this as root; use your own userid...

mkdir -p ~/installs/kubernetes/minikube
cd ~/installs/kubernetes/minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
mkdir -p ~/installs/kubernetes/istio
cd ~/installs/kubernetes/istio
wget https://github.com/istio/istio/releases/download/1.10.1/istio-1.10.1-linux-amd64.tar.gz
tar -zxvf istio-1.10.1-linux-amd64.tar.gz
/bin/rm istio-1.10.1-linux-amd64.tar.gz
cat << EOF >> ~/.bashrc
alias minikube="$HOME/installs/kubernetes/minikube/minikube-linux-amd64"
alias istioctl="$HOME/installs/kubernetes/istio/istio-1.10.1/bin/istioctl"
alias kubectl='minikube kubectl --'
EOF
source ~/.bashrc
minikube start --cpus 4 --memory 4096      # will download all the bits required
kubectl get pod -A                         # first use installs the kubectl command into minikube
sleep 60
istioctl install                           # WILL PROMPT; reply y
sleep 60
kubectl label namespace default istio-injection=enabled
kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/grafana.yaml
kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/jaeger.yaml
kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/kiali.yaml
kubectl apply -f ~/installs/kubernetes/istio/istio-1.10.1/samples/addons/prometheus.yaml
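Once that has completed you can check everything has come up before moving on:

kubectl get pods -n istio-system           # istiod plus the grafana/jaeger/kiali/prometheus addons
minikube status                            # confirm the cluster itself is healthy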

It should also be noted that the example yaml files used here to deploy the NJE38 linked containers do not cover setting an external ip for any of the 3270 ports you would use for TSO. If you know enough about minikube to set up the tunnelling or an external ip for one of the containers, by all means do so if you prefer to play with console commands from TSO; otherwise just kubectl exec into one of the containers and attach to one of the console sessions running under screen to play with the NJE38 commands.
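If you do want to try exposing one of the 3270 ports, one approach (only a sketch, it is not part of the demo yaml) is to switch that system's service, for example mvs-mid1 from the yaml further down, to NodePort and let minikube hand you the host:port mappings:

kubectl patch svc mvs-mid1 -n mvs38j-demo -p '{"spec":{"type":"NodePort"}}'
minikube service mvs-mid1 -n mvs38j-demo --url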

Also you would never use more than one replica (making K8s overkill), as the TSO session to your container must always hit the same instance, and the NJE links are point-to-point; you simply cannot have two systems pretending to be one. But as a proof of concept the containers can run under minikube in an NJE linked configuration :-).

Important note on using Kubernetes (minikube) instead of Docker

Docker and Kubernetes initialise containers in completely different ways. To stop the containers running this mvs38j image under Kubernetes from going into 'CrashLoopBackOff' you must set the values runAsUser and runAsGroup to 6001.
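In the deployment yaml that looks like the below, exactly as used in the full examples further down:

        securityContext:
          runAsUser: 6001
          runAsGroup: 6001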

Loading the image into minikube

Minikube will not be able to resolve any local docker registries, so you must manually load the image into the minikube image cache so it can be found when the image pull request is performed.

Ensure you pre-load the image into the minikube cache before trying to start any containers; it must be in the cache for minikube to find it. You load it into minikube, not the host's docker. (Replace imagename.tar with whatever image of mine you downloaded.)

gunzip imagename.tar.gz
minikube image load imagename.tar
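You can confirm the load worked before deploying anything (the exact name shown depends on how the image was tagged):

minikube image ls                          # the mvs38j image should appear in the list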

A quick demo of running it under minikube

There are a couple of example YAML files for running the container under Kubernetes. This is a quick walkthrough on using one of those to fire up a mvs38j multi-system environment to play with NJE38 should you want to.

It shows you how to do it, but you would never put a container like this under Kubernetes in the real world: it can of course have a maximum of one replica so gains no benefit at all, and it incurs a lot of network configuration mucking about that is not needed in docker or physical machine installs.

Examples of Kubernetes (minikube) YAML deployment files

If you want to play with it these are my current yaml files to deploy under K8s (minikube) in configurations that match the docker examples for playing with nje38.

To get into the container to check out any issues you would 'kubectl get pod -n mvs38j-demo' and 'kubectl exec -ti XXX -n mvs38j-demo -c CCC -- /bin/bash' (replacing XXX with the pod name and CCC with the container name, one of mvs-mid1, mvs-mid2 or mvs-mid3).
Refer to the troubleshooting tips on the main page for this image on how to troubleshoot container issues.
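For example, to poke around the first system (the pod name suffix is generated so use whatever 'kubectl get pod' showed you, and the screen session names are whatever 'screen -ls' reports inside the container):

kubectl get pod -n mvs38j-demo
kubectl exec -ti mvs-mid1-v1-xxxxxxxxxx-yyyyy -n mvs38j-demo -c mvs-mid1 -- /bin/bash
screen -ls                                 # list the console sessions running in the container
screen -r SESSIONNAME                      # attach to one; detach again with ctrl-a d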

To actually start a TSO session into one of the containers refer to the comments in the example yaml file for running a single instance, and adjust the command for the pod and system you wish to port forward to. Alternatively you can exec into one of the containers and attach to one of the console screen sessions (or from within the container start a c3270 TSO session to localhost:3270).
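As a rough sketch (again, the pod name suffix is generated), forwarding the 3270 port of the first system to your workstation would look like:

kubectl port-forward -n mvs38j-demo pod/mvs-mid1-v1-xxxxxxxxxx-yyyyy 3270:3270
c3270 localhost:3270                       # from another terminal once the forward is up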

##################################################################################################
#
# This file defines the services, service accounts, and deployments for a demo three-pod
# mvs3.8j system in a 'fully interlinked' configuration.
#  
#              MID3
#        (1190)     (1191)
#          /           \
#         /             \
#        /               \
#    (1191)            (1191)
#   MID1 (1190)----(1190) MID2
#   
#   kubectl apply -f k8_mvs38j.yaml
#
#   Debugging
#      kubectl get pods -n mvs38j-demo
#      kubectl exec -ti XXXX -n mvs38j-demo -- /bin/bash   # (XXXX is a pod name from the above command)
#   
##################################################################################################

##################################################################################################
# Run in an isolated namespace
##################################################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: mvs38j-demo
  labels:
    istio-injection: enabled
---

##################################################################################################
# Pod for first MVS3.8J container - Sysname MID1
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: mvs-mid1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid1
    service: mvs-mid1
spec:
  ports:
  - port: 3270
    name: tso 
  - port: 1190
    name: nje-mid2
  - port: 1191
    name: nje-mid3
  selector:
    app: mvs-mid1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mvs-mid1-svcacct
  namespace: mvs38j-demo
  labels:
    account: mvs-mid1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvs-mid1-v1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid1
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mvs-mid1
      version: v1
  template:
    metadata:
      namespace: mvs38j-demo
      labels:
        app: mvs-mid1
        version: v1
    spec:
      serviceAccountName: mvs-mid1-svcacct
      containers:
      - name: mvs-mid1
        image: localhost/mvs38j
        imagePullPolicy: IfNotPresent
        env:
        - name: CONTAINERNJENAME
          value: "MID1"
        - name: MVSREMOTENAME1
          value: "MID2"
        - name: MVSREMOTEIP1
          value: "mvs-mid2"
        - name: MVSREMOTEPORT1
          value: "1190"
        - name: MVSREMOTENAME2
          value: "MID3"
        - name: MVSREMOTEIP2
          value: "mvs-mid3"
        - name: MVSREMOTEPORT2
          value: "1190"
        - name: RUNNING_UNDER_K8S
          value: "YES"
        ports:
        - containerPort: 1190
        - containerPort: 1191
        - containerPort: 3270
        securityContext:
          runAsUser: 6001
          runAsGroup: 6001
---
##################################################################################################
# Pod for second MVS3.8J container - Sysname MID2
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: mvs-mid2
  namespace: mvs38j-demo
  labels:
    app: mvs-mid2
    service: mvs-mid2
spec:
  ports:
  - port: 3270
    name: tso 
  - port: 1190
    name: nje-mid1
  - port: 1191
    name: nje-mid3
  selector:
    app: mvs-mid2
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mvs-mid2-svcacct
  namespace: mvs38j-demo
  labels:
    account: mvs-mid2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvs-mid2-v1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid2
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mvs-mid2
      version: v1
  template:
    metadata:
      namespace: mvs38j-demo
      labels:
        app: mvs-mid2
        version: v1
    spec:
      serviceAccountName: mvs-mid2-svcacct
      containers:
      - name: mvs-mid2
        image: localhost/mvs38j
        imagePullPolicy: IfNotPresent
        env:
        - name: CONTAINERNJENAME
          value: "MID2"
        - name: MVSREMOTENAME1
          value: "MID1"
        - name: MVSREMOTEIP1
          value: "mvs-mid1"
        - name: MVSREMOTEPORT1
          value: "1190"
        - name: MVSREMOTENAME2
          value: "MID3"
        - name: MVSREMOTEIP2
          value: "mvs-mid3"
        - name: MVSREMOTEPORT2
          value: "1191"
        - name: RUNNING_UNDER_K8S
          value: "YES"
        ports:
        - containerPort: 1190
        - containerPort: 1191
        - containerPort: 3270
        securityContext:
          runAsUser: 6001
          runAsGroup: 6001
---
##################################################################################################
# Pod for third MVS3.8J container - Sysname MID3
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: mvs-mid3
  namespace: mvs38j-demo
  labels:
    app: mvs-mid3
    service: mvs-mid3
spec:
  ports:
  - port: 3270
    name: tso 
  - port: 1190
    name: nje-mid1
  - port: 1191
    name: nje-mid2
  selector:
    app: mvs-mid3
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mvs-mid3-svcacct
  namespace: mvs38j-demo
  labels:
    account: mvs-mid3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvs-mid3-v1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid3
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mvs-mid3
      version: v1
  template:
    metadata:
      namespace: mvs38j-demo
      labels:
        app: mvs-mid3
        version: v1
    spec:
      serviceAccountName: mvs-mid3-svcacct
      containers:
      - name: mvs-mid3
        image: localhost/mvs38j
        imagePullPolicy: IfNotPresent
        env:
        - name: CONTAINERNJENAME
          value: "MID3"
        - name: MVSREMOTENAME1
          value: "MID1"
        - name: MVSREMOTEIP1
          value: "mvs-mid1"
        - name: MVSREMOTEPORT1
          value: "1191"
        - name: MVSREMOTENAME2
          value: "MID2"
        - name: MVSREMOTEIP2
          value: "mvs-mid2"
        - name: MVSREMOTEPORT2
          value: "1191"
        - name: RUNNING_UNDER_K8S
          value: "YES"
        ports:
        - containerPort: 1190
        - containerPort: 1191
        - containerPort: 3270
        securityContext:
          runAsUser: 6001
          runAsGroup: 6001
---
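Assuming you saved the above as k8_mvs38j.yaml (the name used in its own comments), deploying it and watching it come up is just:

kubectl apply -f k8_mvs38j.yaml
kubectl get pods -n mvs38j-demo -w         # ctrl-c once all three pods show Running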

And here is an example of running it in the flat routed configuration.

##################################################################################################
#
# This file defines the services, service accounts, and deployments for a demo three-pod
# mvs3.8j system in a 'flat routed' configuration.
#  
#   MID1 (1190)----(1190) MID2 (1191)-----(1190) MID3
#   
#   kubectl apply -f k8_mvs38j.yaml
#
#   Debugging
#      kubectl get pods -n mvs38j-demo
#      kubectl exec -ti XXXX -n mvs38j-demo -- /bin/bash   # (XXXX is a pod name from the above command)
#   
##################################################################################################

##################################################################################################
# Run in an isolated namespace
##################################################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: mvs38j-demo
  labels:
    istio-injection: enabled
---

##################################################################################################
# Pod for first MVS3.8J container - Sysname MID1
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: mvs-mid1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid1
    service: mvs-mid1
spec:
  ports:
  - port: 3270
    protocol: TCP
    name: tso 
  - port: 1190
    protocol: TCP
    name: nje-mid2
  selector:
    app: mvs-mid1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mvs-mid1-svcacct
  namespace: mvs38j-demo
  labels:
    account: mvs-mid1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvs-mid1-v1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid1
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mvs-mid1
      version: v1
  template:
    metadata:
      namespace: mvs38j-demo
      labels:
        app: mvs-mid1
        version: v1
    spec:
      serviceAccountName: mvs-mid1-svcacct
      containers:
      - name: mvs-mid1
        image: localhost/mvs38j
        imagePullPolicy: IfNotPresent
        env:
        - name: CONTAINERNJENAME
          value: "MID1"
        - name: MVSREMOTENAME1
          value: "MID2"
        - name: MVSREMOTEIP1
          value: "mvs-mid2"
        - name: MVSREMOTEPORT1
          value: "1190"
        - name: ROUTABLE1
          value: "MID3"
        - name: ROUTEVIA1
          value: "MID2"
        - name: RUNNING_UNDER_K8S
          value: "YES"
        ports:
        - containerPort: 1190
        - containerPort: 3270
        securityContext:
          runAsUser: 6001
          runAsGroup: 6001
---
##################################################################################################
# Pod for second MVS3.8J container - Sysname MID2
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: mvs-mid2
  namespace: mvs38j-demo
  labels:
    app: mvs-mid2
    service: mvs-mid2
spec:
  ports:
  - port: 3270
    protocol: TCP
    name: tso 
  - port: 1190
    protocol: TCP
    name: nje-mid1
  - port: 1191
    protocol: TCP
    name: nje-mid3
  selector:
    app: mvs-mid2
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mvs-mid2-svcacct
  namespace: mvs38j-demo
  labels:
    account: mvs-mid2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvs-mid2-v1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid2
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mvs-mid2
      version: v1
  template:
    metadata:
      namespace: mvs38j-demo
      labels:
        app: mvs-mid2
        version: v1
    spec:
      serviceAccountName: mvs-mid2-svcacct
      containers:
      - name: mvs-mid2
        image: localhost/mvs38j
        imagePullPolicy: IfNotPresent
        env:
        - name: CONTAINERNJENAME
          value: "MID2"
        - name: MVSREMOTENAME1
          value: "MID1"
        - name: MVSREMOTEIP1
          value: "mvs-mid1"
        - name: MVSREMOTEPORT1
          value: "1190"
        - name: MVSREMOTENAME2
          value: "MID3"
        - name: MVSREMOTEIP2
          value: "mvs-mid3"
        - name: MVSREMOTEPORT2
          value: "1190"
        - name: RUNNING_UNDER_K8S
          value: "YES"
        ports:
        - containerPort: 1190
        - containerPort: 1191
        - containerPort: 3270
        securityContext:
          runAsUser: 6001
          runAsGroup: 6001
---
##################################################################################################
# Pod for third MVS3.8J container - Sysname MID3
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: mvs-mid3
  namespace: mvs38j-demo
  labels:
    app: mvs-mid3
    service: mvs-mid3
spec:
  ports:
  - port: 3270
    protocol: TCP
    name: tso 
  - port: 1190
    protocol: TCP
    name: nje-mid2
  selector:
    app: mvs-mid3
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mvs-mid3-svcacct
  namespace: mvs38j-demo
  labels:
    account: mvs-mid3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvs-mid3-v1
  namespace: mvs38j-demo
  labels:
    app: mvs-mid3
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mvs-mid3
      version: v1
  template:
    metadata:
      namespace: mvs38j-demo
      labels:
        app: mvs-mid3
        version: v1
    spec:
      serviceAccountName: mvs-mid3-svcacct
      containers:
      - name: mvs-mid3
        image: localhost/mvs38j
        imagePullPolicy: IfNotPresent
        env:
        - name: CONTAINERNJENAME
          value: "MID3"
        - name: MVSREMOTENAME1
          value: "MID2"
        - name: MVSREMOTEIP1
          value: "mvs-mid1"
        - name: MVSREMOTEPORT1
          value: "1191"
        - name: ROUTABLE1
          value: "MID1"
        - name: ROUTEVIA1
          value: "MID2"
        - name: RUNNING_UNDER_K8S
          value: "YES"
        ports:
        - containerPort: 1190
        - containerPort: 3270
        securityContext:
          runAsUser: 6001
          runAsGroup: 6001
---
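When you are done playing, everything lives in its own namespace so cleanup is a one-liner:

kubectl delete namespace mvs38j-demo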

Obviously if you do not care about playing with the NJE links and only want to run a single instance while preserving your work, you would just run a single container with a persistent volume assigned so you don't lose anything between failure/restart :-). As I'm still using docker and don't need any persistent data (I use an on-host system for non-play needs) I have not really looked into that.
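If you do go down that path, the shape of it would be something like the below. Note the claim size and the mountPath are pure placeholders; you would point the mount at wherever the image keeps its DASD volumes, which as I said I have not dug into:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mvs-mid1-data
  namespace: mvs38j-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

and in the deployment you would add, in the container spec:

        volumeMounts:
        - name: mvs-data
          mountPath: /mvs/dasd             # placeholder, depends on the image layout

and at the pod template spec level:

      volumes:
      - name: mvs-data
        persistentVolumeClaim:
          claimName: mvs-mid1-data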