Kubernetes builds on Docker artifacts (images built from Dockerfiles) and provides the scalability needed to cluster Docker containers and support high availability. The same application that was deployed as a virtualized service in the previous tutorial, “Portal Automation Case Study,” and as linked containers in “Migrate Previous Portal Automation App into Linked Docker Containers,” is deployed as a Kubernetes cluster in this tutorial.
The target model for deploying the application with the IBM Cloud Kubernetes Service/Deployment method follows. Within each Pod, the Model app, Control (js), and NoSQL components each run as independent Docker containers.
Limitations of the Initial IBM Cloud Example Deployment
Viewers may deploy the author’s application in IBM Cloud free of charge for educational purposes, as long as the deployment stays within a single compute node and a limited trial period. For this reason, the author structured the Kubernetes files for the application as:
- a Kubernetes Service
- within a Kubernetes Deployment
- within a single Pod
- within a single node
In this fashion, viewers may deploy the same application for testing purposes, free of charge. Scaling to multiple Pods on the same node is also free of charge and requires only changing the spec.replicas declaration in the Kubernetes YAML file.
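For example, scaling the Deployment from one Pod to three could be as simple as the following fragment. This is a sketch based on the Deployment shown later in this tutorial; the replica count of 3 is illustrative, not from the author’s files:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tool-portal-pods
spec:
  replicas: 3   # was 1; Kubernetes schedules two additional Pods on the same node
  template:
    . . . .
```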
The Service/Deployment, implemented directly from the author’s Kube cluster config and YAML files, comprises the following components:
Kubernetes networking operates in two modes: intra-Pod and externally exposed. Both modes are employed in the author’s app implementation and are described below:
Intra-Pod communication is accomplished via direct I/O between containers. Containers in the same Pod share a network namespace, so each can reach the others over localhost, using unique, exposed container ports. As explained in the MVP_App folder, the Model component in the MVP architecture is responsible for I/O. In this capacity, I/O between the Model (Python/Flask) app and the NoSQL (MongoDB) containers is intra-Pod.
To facilitate intra-Pod communication between the Model container and the NoSQL container, the Model container exposes port 80 and the NoSQL container (a MongoDB container in the author’s example) exposes the native Mongo port 27017.
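As a concrete illustration, the Model container can locate MongoDB simply by targeting localhost:27017, since both containers share the Pod’s network namespace. One hedged way to pass that address to the Model container is via environment variables in the Pod spec; the variable names MONGO_HOST and MONGO_PORT below are hypothetical, not taken from the author’s files:

```yaml
# Hypothetical fragment: the variable names (MONGO_HOST, MONGO_PORT)
# are illustrative and not part of the author's YAML.
- name: model
  image: registry.ng.bluemix.net/mj1pate/model:Model
  env:
  - name: MONGO_HOST
    value: "localhost"   # containers in the same Pod share localhost
  - name: MONGO_PORT
    value: "27017"       # MongoDB's native port, exposed by the mongodb container
```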
The Pod container definitions from the Kubernetes YAML file follow. The intra-Pod communication ports are defined as shown:
spec:
  containers:
  - name: mongodb
    image: registry.ng.bluemix.net/mj1pate/mongo:MongoDB
    imagePullPolicy: Always
    ports:
    - containerPort: 27017
  - name: control
    image: registry.ng.bluemix.net/mj1pate/control:Control
    imagePullPolicy: Always
    ports:
    - containerPort: 8081
  - name: model
    image: registry.ng.bluemix.net/mj1pate/model:Model
    imagePullPolicy: Always
    ports:
    - containerPort: 80
Adding the following code to the Kubernetes YAML file causes a hosts-file entry (for the IBM Cloud node) to appear in every container. This enables hostname references between containers for intra-Pod communications.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tool-portal-pods
spec:
  replicas: 1
  template:
    . . . .
    hostAliases:
    - ip: 184.108.40.206
      hostnames:
      - e.2a.3da9.ip4.static.sl-reverse.com
Kubernetes Managed Hostfiles
An example of a Kubernetes managed hostfile (from the author’s Model container) follows:
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.30.163.15 tool-portal-pods-6cf4658577-bkjm7
# Entries added by HostAliases.
220.127.116.11 e.2a.3da9.ip4.static.sl-reverse.com
External POD Access
A graphic of external access to Pod containers via the Kubernetes Service object, as configured in the author’s example, follows:
External networking is necessary in two cases:
Client to Model’s App Portal
The client browser pulls the initial view from the Model’s (Python/Flask) app portal. The Model container exposes port 80. Since the Python/Flask app must be accessible from outside (the client browser), the Kubernetes Service object proxies the Model’s port 80 as external port 30000. From the Kubernetes YAML file:
apiVersion: v1
kind: Service
. . .
ports:
- name: model
  protocol: TCP
  port: 80
  nodePort: 30000
Client Portal to Control’s js files
Once the client browser loads the initial HTML from the Model’s portal, the browser must load the js files referenced by the HTML code. The js is externally accessible via the Control container. The Service object makes the js files accessible by proxying the Control container’s exposed port 8081 as external port 30100. One might observe that the js files need not be located in a Pod container at all, since they could be centrally hosted in a Git (or any) public repository. However, keeping the js files within the same Pod as the Model somewhat simplifies app update rollouts and revisions. The Service object’s proxy for the Control container, from the Kubernetes YAML file, follows:
apiVersion: v1
kind: Service
. . .
- name: control
  protocol: TCP
  port: 8081
  nodePort: 30100
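Taken together, the two Service fragments above could be combined into a single NodePort Service. The following sketch shows one plausible complete manifest; the Service name, the type: NodePort line, and the selector label are assumptions, since the author’s YAML elides those portions with “. . .”:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tool-portal-service   # hypothetical name; not from the author's file
spec:
  type: NodePort              # assumption: nodePort fields require a NodePort (or LoadBalancer) Service
  selector:
    app: tool-portal          # assumption: must match the labels on the Deployment's Pod template
  ports:
  - name: model
    protocol: TCP
    port: 80
    nodePort: 30000           # proxies the Model container's port 80
  - name: control
    protocol: TCP
    port: 8081
    nodePort: 30100           # proxies the Control container's port 8081
```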