Cloud-Native Architecture Part II: Container Principles

Shivakumar Goniwada Rudrappa
8 min read · Feb 13, 2021

The term cloud-native describes applications designed to run on cloud infrastructure. Cloud-native applications are developed using microservices architecture principles and deployed in containers. The microservices are expected to tolerate failure, keep serving customer requests, and scale on demand even when the underlying infrastructure is experiencing outages. To provide such capabilities to your clients, cloud-native applications based on microservices impose a set of principles. These principles ensure that the microservices conform to certain constraints and allow you to automate the management of your containerized application.

This article illustrates a few principles that containerized applications must comply with to better serve consumers.

Almost all applications in modern-day architecture run inside containers. Making your containerized application orchestrate effectively with Kubernetes requires some additional configuration. These principles are inspired by the Twelve-Factor App methodology.

High Observability Principle (HOP)

Observability is a measure of how well internal states of microservices can be derived from external outputs. The concept of observability was introduced by Rudolf E. Kalman for linear dynamic systems.

Observability states: “An application is said to be observable if one can determine the behavior of the entire application from its outputs.”

Logs, metrics, traces, liveness, readiness, and process health are known as the pillars of observability in cloud-native architecture. Plainly having access to these pillars doesn’t by itself make your application more observable; you need to create interfaces to access these pillars for further analysis.

Containers provide a unified way of packaging and running microservices by treating the application as a black box. You need to configure containers with APIs to access runtime environments to observe the container health and act accordingly. These are the prerequisites for automating container updates and lifecycles in a unified way, which in turn improves the system’s resilience and user experience.

You need to design your container and application with APIs for the different kinds of health checks. The microservices should log events to standard error (STDERR) and standard output (STDOUT) for log aggregation with tools such as Fluentd, Logstash, or Nagios, and integrate with tracing and metrics-gathering libraries such as Zipkin, OpenTracing, etc.

At runtime, your application is a black box to you; implement the necessary APIs to help the platform observe and manage your application in the best way possible.
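As a minimal sketch, the health-check APIs described above might be exposed to Kubernetes through liveness and readiness probes. The pod name, image, port, and the /healthz and /ready paths below are illustrative assumptions, not part of any real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: observable-service        # illustrative name
spec:
  containers:
  - name: service-a
    image: example/service-a:1.0  # hypothetical image
    ports:
    - containerPort: 8080
    # Liveness: restart the container if the process becomes unhealthy
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    # Readiness: only route traffic once the service can actually serve it
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```

With these probes declared, the platform itself can observe the container’s health and act on it, rather than relying on manual monitoring.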

Lifecycle Conformance Principle (LCP)

The LCP states: “A container should have a way to read the events coming from the platform and conform by reacting to those events.”

Managing platforms emit all kinds of events intended to help you manage the lifecycle of containers and microservices. Based on the available events, it is up to you to decide which ones to handle and how to react to them.

Looking across all of these events, you need to pick out the important ones, for example:

  • Clean shutdown process
  • Terminate message (SIGTERM)
  • Forceful shutdown (SIGKILL)

When you issue a docker stop command, Docker waits 10 seconds for the process to stop; if the process has not stopped within 10 seconds, Docker forcibly kills it.

Command to stop the process: $ docker stop container-a

The docker stop command attempts to stop a running container by sending a SIGTERM signal to the root process in the container; if the process hasn’t exited within the timeout period, a SIGKILL signal is sent.

Command to kill the process: $ docker kill container-a
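Conforming to the SIGTERM event can be sketched as a container entrypoint script that traps the signal and shuts down cleanly before the SIGKILL deadline. This is a hypothetical sketch; the messages and the main loop are placeholders for real drain-and-cleanup logic:

```shell
#!/bin/sh
# Hypothetical container entrypoint: react to SIGTERM with a clean shutdown.
cleanup() {
  echo "SIGTERM received, shutting down cleanly"
  exit 0
}
trap cleanup TERM

echo "service started"
# Simulated main loop; sleep in the background and wait on it so the
# trap can fire as soon as the signal arrives instead of after the sleep.
while true; do
  sleep 1 &
  wait "$!"
done
```

Because the script exits promptly on SIGTERM, `docker stop` never has to escalate to SIGKILL.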

There are other events, such as PostStart and PreStop, that might be significant in your application lifecycle management. For example, some applications need to warm up before serving requests, and some need to release resources in order to shut down cleanly.

Kubernetes, for example, lets you attach postStart and preStop handlers to a container: a postStart command can write a message file to the container’s /usr/share directory, and a preStop command can shut down nginx gracefully.
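Such a configuration can be sketched as follows (closely modeled on the Kubernetes documentation’s lifecycle-hooks example; the pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      # postStart: warm-up work, here writing a message file
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      # preStop: graceful nginx shutdown before the container is terminated
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```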

Single Concern Principle (SCP)

In many ways, the SCP is like the single responsibility principle from SOLID, which advises that a module or class should have only one responsibility.

In a cloud-native architecture, the SCP treats a concern as a higher level of abstraction than a responsibility: every microservice and every container must address a single concern.

The main motivation for the single responsibility principle is to have a single reason for change; the main objective of the SCP is container image reuse and replaceability. If you create a container that addresses a single concern as a common feature, you can reuse the same container image in different applications without modification or retesting.

The SCP’s objective is that every container address a single concern in the microservices architecture style. Keep a single concern per container even when your microservice provides multiple concerns. If you have microservices with multiple concerns, use sidecars and init containers (see the patterns chapter) to combine multiple containers into a single deployment unit (pod), where each container still holds a single concern. You can then swap a container for another that addresses the same concern, for example, replacing the Service A container with a Service C container by using infrastructure as code.
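The sidecar arrangement described above can be sketched as a pod with one container per concern. All names and images here are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-a                   # illustrative names throughout
spec:
  containers:
  - name: service-a                 # main concern: serve business requests
    image: example/service-a:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper               # sidecar concern: ship the logs
    image: example/log-shipper:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}
```

Either container can later be swapped for another image that addresses the same concern without touching the other.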

Image Immutability Principle (IIP)

The IIP states that an image is unchangeable once it is built; if changes are needed, you create a new image. Containerized applications such as microservices are meant to be immutable. Once developed, they aren’t expected to change between different environments, except for runtime data like environment configuration and variables (listening port, runtime options, etc.). You need to store configuration and variables external to the container. For each change to the image, you build a new image and reuse it across the various environments of your development lifecycle.

Immutability makes deployments safer and more repeatable. If you need to roll back, you simply redeploy the old image. This approach allows you to deploy the same container image in all your environments. Containers are usually configured with environment variables or configuration files mounted on a specific path. In Kubernetes, you can use Secrets and ConfigMaps to inject configuration into containers as environment variables or files. If you need to update a configuration, deploy a new container (based on the same image) with the updated configuration.
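A minimal sketch of this pattern: the configuration lives in a ConfigMap, and the immutable image only receives it at runtime. The names, key, and image below are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-a-config
data:
  LISTEN_PORT: "8080"               # environment-specific value
---
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
  - name: service-a
    image: example/service-a:1.0    # the same image in every environment
    envFrom:
    - configMapRef:
        name: service-a-config      # only the configuration changes
```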

Immutability is one of the best qualities of container-based infrastructure. Immutability along with statelessness allows you to automate deployments and increase their frequency and reliability.

Process Disposability Principle (PDP)

The PDP is a container runtime principle. It states that applications must be as ephemeral as possible and ready to be replaced by new container instances at any point in time by using infrastructure as code.

Usually you do not replace containers on a schedule, but replacement happens in circumstances such as:

  • The container doesn’t respond to a health check
  • Auto-scaling the application up or down based on CPU utilization or load
  • Migrating the container to a different host
  • Platform resource starvation

If you store state within the container, it becomes very difficult to replace the container in a distributed environment; therefore, you should keep state externalized, distributed, and redundant.

The following use case illustrates how the PDP is applied.

At the beginning of your day, Service A has only one container instance. As the day progresses and load increases, the service auto-scales to three container instances to meet demand. As load decreases, the container instances are disposed of gradually until the service returns to its original state. This is achieved by applying the PDP.
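In Kubernetes, this one-to-three-and-back behavior could be declared with a HorizontalPodAutoscaler. This is a sketch under the assumption that Service A runs as a Deployment named service-a; the CPU threshold is illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-a        # hypothetical deployment name
  minReplicas: 1           # the original, quiet-hours state
  maxReplicas: 3           # peak-load state
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out above 80% average CPU
```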

You need to follow best practices on container size and microservice functionality. It is better to create small containers, as this leads to quicker starts and stops: before a new container can spin up, its image needs to be physically copied to the host system.

Self-Containment Principle (SCP)

The SCP addresses a build-time concern: the objective of this principle is that a container must contain everything it needs at build time. The container relies only on the presence of the Linux kernel (or Windows Silos) and any libraries added at build time. Windows Silos are Microsoft’s variant of Linux namespaces; with Silos, Windows kernel objects such as files, registry keys, and pipes can be isolated into separate logical units.

Along with the Linux kernel or Silos, the following should be added at build time:

  • Dependent libraries
  • Language runtime
  • Application platform
  • ….

Some of your applications require multiple container components. For example, your containerized microservice may also require a database container. This principle does not suggest merging the two containers; rather, it suggests that each container carry its own dependencies and configuration so that each can run on its own.
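Baking the dependent libraries, language runtime, and application into the image at build time can be sketched with a Dockerfile like the following. The base image, paths, and jar name are hypothetical; the point is that nothing on this list is fetched at runtime:

```dockerfile
# Hypothetical build for a Java microservice: everything the service
# needs -- runtime, libraries, the application itself -- is baked in
# at build time; only configuration is supplied at runtime.

# Language runtime
FROM eclipse-temurin:17-jre
WORKDIR /app

# Dependent libraries
COPY target/libs/ ./libs/

# The application
COPY target/service-a.jar .

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "service-a.jar"]
```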

Runtime Confinement Principle (RCP)

The Runtime Confinement Principle states that every container should declare its resource requirements and pass that information to the hosted platform.

The SCP addresses the build-time perspective, while the RCP addresses the runtime perspective. A container is not just a single black box; it has multiple dimensions, such as:

  • CPU usage dimension
  • Memory usage dimension
  • Resource consumption dimension
  • Control groups dimension
  • …..

The container shares its resource profile, in terms of CPU, memory, networking, and disk, with the hosting platform, which influences how the platform performs scheduling, auto-scaling, and capacity management, and how it meets the container’s SLAs.

In addition to passing the resource requirements to the host platform, it is also important that the application stay confined to the indicated resource requirements. If it does, the platform is less likely to consider it for termination or migration when resource starvation occurs.
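Declaring these dimensions to Kubernetes can be sketched with resource requests and limits on the container spec. The numbers below are illustrative assumptions, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-a                   # illustrative name
spec:
  containers:
  - name: service-a
    image: example/service-a:1.0    # hypothetical image
    resources:
      requests:                     # what the scheduler reserves for the container
        cpu: "250m"
        memory: "256Mi"
      limits:                       # the ceiling the container must stay within
        cpu: "500m"
        memory: "512Mi"
```

A container that stays within its declared limits gives the platform reliable input for scheduling and capacity decisions.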
