Automation Portal in Linked Docker Containers


In my previous blog I described in detail a working cloud network configuration automation portal, based on the Model/View/Controller (MVC) design pattern. The configuration portal included a MongoDB NoSQL backend to maintain a dynamic configuration data archive. The previous blog is at: Base Automation Portal


This blog describes the process of converting the separate applications from that earlier blog into a single cluster (pod) of linked Docker containers. In essence, this set of linked containers will function as a non-redundant pod. The next blog series will illustrate creating a truly redundant Kubernetes set of two pods, based on the same automation portal contained in this single pod. This blog will not describe the theory of operation of the automation portal in much detail; for that, please see the previous blog linked above.

Container Architecture

The automation app being converted to Docker containers should be subdivided into containers based on the following criteria:
1. Vertical scalability: Do the processes targeted for a particular container scale with each other? Does a utilization change in one process drive a similar utilization change in the other processes targeted for the same container?
2. Maintainability: Are the processes targeted for a particular container clustered together by functionality, such that when one process is updated, the others could or should be updated at the same time?

Portal App Containerization

Service separation depends greatly on the degree to which service decomposition (micro-segmentation) has been achieved. This simple application has basic components, and its processes have been separated into three containers based on the architecture principles above.
1. Model/View Container: The Model program code is based on Python and serves the base HTML page via a RESTful call. The Python code is integrated with Flask as a web framework. Flask can function as a basic web server (port 5000) but has limited connection capacity, so Apache serves as a connection proxy between connection initiators and the Python/Flask app. The following apps/processes are included in the Model/View container:
   1. Python
   2. python-pip
   3. pymongo
   4. Flask
   5. mod_wsgi for Apache
   6. the base HTML page, which includes:
      1. the initial page view
      2. links to CSS and JavaScript files (most links point to URLs in the Controller container)
      3. AngularJS directives, pointing to AngularJS methods in the Controller container
   7. Apache server
2. Controller Container, which includes:
   1. a JS file with all AngularJS methods matching the AngularJS directives in the base HTML page of the Model/View container
   2. all Bower-related files and Bower dependency apps:
      1. Node.js
      2. npm
      3. AngularJS
      4. jQuery
      5. Bootstrap CSS
   3. Apache server
3. NoSQL (MongoDB) Container: includes the MongoDB service
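To make the container breakdown concrete, the sketch below shows one way the Model/View container's Python code might locate the linked MongoDB container. With legacy Docker links, linking a peer under the alias `mongo` injects environment variables such as `MONGO_PORT_27017_TCP_ADDR` into the consuming container and adds a matching hosts-file entry. The alias and the database name are illustrative assumptions, not taken from the original portal code.

```python
import os

def mongo_uri(alias="mongo", port=27017):
    """Build a MongoDB URI for a linked container.

    Docker legacy links expose the peer's address as
    <ALIAS>_PORT_<port>_TCP_ADDR; fall back to the hosts-file
    alias when the variable is absent.
    """
    host = os.environ.get(
        "%s_PORT_%d_TCP_ADDR" % (alias.upper(), port), alias)
    return "mongodb://%s:%d" % (host, port)

# Inside the Flask app this URI would feed pymongo, e.g.:
#   from pymongo import MongoClient
#   client = MongoClient(mongo_uri())
#   db = client["portal"]   # database name is an assumption
```

Resolving the address this way keeps the Python code free of hard-coded container IPs, which change every time the pod is rebuilt.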
Again, the theory of operation for all the components of the automation app is covered in detail in the link at the top of this blog entry. There have been no changes to the operation of the application components; this developmental stage has served only to containerize and link the application components via Docker. The next blog entry will describe the Dockerfile associated with the Model/View container and the modifications necessary for the Model/View components to support containerization.
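The linking itself can be sketched with legacy Docker container links. The image and container names below are illustrative assumptions; the actual names would come from the Dockerfiles covered in the next entry.

```shell
# Start the MongoDB container first so the others can link to it
docker run -d --name mongo mongo

# Controller container serves the JS/Bower assets over Apache
docker run -d --name controller -p 8080:80 portal-controller

# Model/View container reaches MongoDB through the "mongo" link alias
docker run -d --name modelview -p 80:80 --link mongo:mongo portal-modelview
```

With `--link mongo:mongo`, Docker adds a `mongo` entry to the Model/View container's /etc/hosts file and injects the matching `MONGO_PORT_*` environment variables, so the portal code can reach the database without hard-coded IP addresses.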