Building/Linking and Running the Containers

This series illustrates the steps necessary to convert an automation portal into a linked, operational set of Docker containers. The introduction for this series is at Linked Docker Containers TOC, and the theory of operation of the automation portal is at Automation Portal. The Model/View/Controller concept is a functional decomposition of the components of a portal app. The Model represents the code supporting a web framework such as Flask or Django, along with functions for database and I/O operations. The View is, of course, the HTML necessary for a web portal. The Controller is the AngularJS code that receives directives from the View and, in turn, accesses the Model for I/O operations. A MongoDB container provides configuration archive volume support.

Scope

This entry describes the steps necessary to build, link and run operational containers.

Build-Link-Run Sequence

First, start MongoDB

Specify the image’s preferred name and the child directory location during the build:

docker build -t mpate/mongo:MongoDB mongo_files

Run a container instance of the image, specifying an instance name and the image’s repository:

docker run -it --name=MongoDB mpate/mongo:MongoDB

The MongoDB volume container will start with an IP address from Docker’s default br0 network space. The specification of a container instance name becomes significant in the next steps.
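To confirm the address the instance received, its name can be passed to docker inspect (a quick check, assuming the container is attached to Docker’s default bridge network):

docker inspect -f '{{ .NetworkSettings.IPAddress }}' MongoDB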

Then start the Controller Container

Specify the image’s preferred name and the child directory location during the build:

docker build -t mpate/control:Cntl-link control_files/Docker_files

Run a container instance of the image, specifying an instance name and the image’s repository. Link the MongoDB container instance to this one, effectively ensuring:

  1. that the host name of the MongoDB container is in the Controller’s /etc/hosts file
  2. that both the Controller and MongoDB containers have unique IP addresses in the br0 network space 

docker run -it --link MongoDB --name=mpate-scripts.com mpate/control:Cntl-link
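A quick way to verify the link (a check from a second terminal on the Docker host, since the container above was started interactively) is to confirm that the MongoDB host name now appears in the Controller’s /etc/hosts file:

docker exec mpate-scripts.com cat /etc/hosts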

Then start the Model/View Container

Specify the image’s preferred name and the child directory location during the build:

docker build -t mpate/model:Model model_files

Run a container instance of the Model image, specifying an instance name and the image’s repository. Link the MongoDB container instance and the Controller container instance to this one, effectively ensuring:

  1. that the host name of the Model container is in the Controller’s and MongoDB containers’ /etc/hosts files, and vice versa
  2. that the Model, Controller, and MongoDB containers all have unique IP addresses in the br0 network space 

docker run -it --link MongoDB --link mpate-scripts.com --name=MyContainerCl.com mpate/model:Model
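At this point all three container instances should be up. A simple sanity check is to list the running containers and confirm the instance names given above:

docker ps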

Additional run-time operations


  1. Add an /etc/hosts entry for “MyContainerCl.com” inside the Model container’s /etc/hosts file, because this host name is how Apache sees the proxy for the Flask/Python app (sketched below).
  2. Add /etc/hosts entries on the Docker host for the Model and Controller container host names, since the browser client will need to access both (also sketched below).
  3. The above operations allow the web client browser to access the containerized portal when the web client resides on the host supporting the containers.
  4. Additional external port exposure steps will be necessary if a web client external to the container-supporting host must access the containerized portal. This expanded operation will be described in detail in the next blog, “Deploying Cloud Automation Pods with Kubernetes”.
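The commands below are only an illustrative sketch of items 1 and 2. The IP addresses are placeholders to be replaced with the addresses reported by docker inspect for each container, and mapping the Model’s own host name to loopback is an assumption about how Apache proxies to the Flask app inside that container.

# inside the Model container: resolve the proxy host name locally (assumed loopback mapping)
docker exec MyContainerCl.com sh -c 'echo "127.0.0.1  MyContainerCl.com" >> /etc/hosts'

# on the Docker host: map the Controller and Model instance names to their bridge IPs (placeholder addresses)
echo "172.17.0.3  mpate-scripts.com" | sudo tee -a /etc/hosts
echo "172.17.0.4  MyContainerCl.com" | sudo tee -a /etc/hosts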

In the final blog entry of this series

I described the execution of the container build-link-run sequence as a manual process (one that could be scripted). This is in contrast to a purpose-built method of accomplishing the same thing, such as Docker Compose. I want to mention that either method is equally fine with respect to the development and test phases of containerized applications.
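As a sketch of the scripted alternative, the commands already shown in this entry could be collected into a single shell script (a hypothetical script.sh; -d replaces -it so the script can run unattended):

#!/bin/sh
# build the three images from their respective directories
docker build -t mpate/mongo:MongoDB mongo_files
docker build -t mpate/control:Cntl-link control_files/Docker_files
docker build -t mpate/model:Model model_files

# run and link the container instances in dependency order
docker run -d --name=MongoDB mpate/mongo:MongoDB
docker run -d --link MongoDB --name=mpate-scripts.com mpate/control:Cntl-link
docker run -d --link MongoDB --link mpate-scripts.com --name=MyContainerCl.com mpate/model:Model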

The pros and cons of Docker Compose versus a scripted (example: script.sh) method include:

Docker Compose:

PROs: maintained and upgraded upstream by the Docker project

CONs: a development-stage (only) tool set that must also be internally tested and maintained with each revision
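For comparison, a minimal docker-compose.yml sketch covering the same three containers might look like this (service names are illustrative; the images and instance names are the ones used above):

version: "2"
services:
  mongodb:
    image: mpate/mongo:MongoDB
    container_name: MongoDB
  controller:
    image: mpate/control:Cntl-link
    container_name: mpate-scripts.com
    links:
      - mongodb
  model:
    image: mpate/model:Model
    container_name: MyContainerCl.com
    links:
      - mongodb
      - controller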

Scripted build-link-run:


PROs: independent of developed-elsewhere code (besides Docker itself); based solely on common Docker commands and shell

CONs: Hmmm… maybe the build team doesn’t know shell? It has fewer dependencies; is that ever an issue?

Neither Docker Compose nor scripted build-link-run methods are likely to be employed in the production phase. A methodology supporting redundant, clustered pods, such as Kubernetes, should be employed in production. Tools used across the build-test-production cycle should be selected to involve the least maintenance overhead possible.