
Orchestration

Container orchestration is the automated process of managing and scheduling the individual containers that make up a microservices-based application, across one or more clusters.

Why do we need orchestration?

Container orchestration is used to automate the following tasks at scale:

  • Configuring and scheduling of containers
  • Provisioning and deployments of containers
  • Availability of containers
  • The configuration of applications in terms of the containers that they run in
  • Scaling of containers to equally balance application workloads across infrastructure
  • Allocation of resources between containers
  • Load balancing, traffic routing and service discovery of containers
  • Health monitoring of containers
  • Securing the interactions between containers

Container orchestration automates the deployment, management, scaling, and networking of containers.

Features

IP and networking management

By default, a container receives a new IP address every time it spins up. Imagine having to capture the new IP/port configuration and set it in a load balancer manually every time an application restarted. Now imagine doing this with ten, fifteen, or twenty applications, all running in their own containers! Container orchestration handles all the networking across applications within the cluster.
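As an illustration, a Kubernetes Service (a sketch; the names and ports are hypothetical) gives a set of containers a stable virtual IP and DNS name, so clients never need to track individual container IPs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical service name; clients resolve this via DNS
spec:
  selector:
    app: my-app         # traffic is routed to any pod carrying this label
  ports:
    - port: 80          # stable port exposed by the service
      targetPort: 8080  # port the container actually listens on
```

Pods can come and go with new IPs; the service's address stays constant.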

Declarative infrastructure and deployments

Following infrastructure as code (IaC) and configuration management practices, the infrastructure, its configuration, and the application in each environment should be defined as code. This provides insight into, and traceability of, what exists from environment to environment.
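As a sketch (all names and the image reference are hypothetical), a declarative Kubernetes Deployment captures the desired state in code that can be versioned and diffed between environments:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # desired state: three running copies
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # pinned image version, traceable per environment
          image: registry.example.com/my-app:1.2.3
```

The orchestrator continuously reconciles the cluster toward this declared state rather than executing one-off imperative commands.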

Application health checks

We want to ensure an application is ready before sending traffic to it. In software systems, components can become unhealthy due to transient issues (such as temporary connectivity loss), configuration errors, or problems with external dependencies. Orchestrators support two kinds of health checks:

  • A readiness probe indicates whether the container is ready to service requests. If the readiness probe fails, the endpoints controller removes the pod’s IP address from the endpoints of all services that match the pod.

  • A liveness probe indicates whether the container is still running. If the liveness probe fails, the orchestrator kills the container and restarts it according to its restart policy.
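In Kubernetes terms, both probes are declared on the container spec. A minimal sketch, assuming the application exposes hypothetical health endpoints on port 8080:

```yaml
containers:
  - name: my-app
    image: my-app:1.2.3
    readinessProbe:           # gate traffic until the app reports ready
      httpGet:
        path: /healthz/ready  # hypothetical readiness endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:            # restart the container if it stops responding
      httpGet:
        path: /healthz/live   # hypothetical liveness endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```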

Benefits

Horizontal-pod autoscaling (HPA)

Right-sized pods serve an application's baseline needs. As load increases, the orchestrator automatically schedules additional pods (finding new nodes to run them on where necessary) to meet user demand, scaling the replica count within the bounds configured for the workload. This autoscaling feature amplifies the benefits and popularity of microservices.
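A minimal HorizontalPodAutoscaler sketch (names and thresholds are hypothetical) that scales a deployment between 2 and 10 replicas based on CPU usage:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add pods when average CPU exceeds 70%
```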

Self-healing

A container orchestrator restarts containers that fail, replaces containers, kills containers that don't respond to user-defined health checks, and doesn't advertise them to clients until they are ready to serve. This automatic failover minimizes loss of service to customers and increases availability.

Zero downtime deployments

With auto-healing capabilities, orchestrators can implement automatic rollbacks on unstable or failed deployments. This enables teams to finally deliver code during business hours with built-in safety nets.
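One common mechanism (a sketch, assuming a Kubernetes Deployment) is a rolling update strategy that never takes capacity below the desired replica count:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # keep full capacity during the rollout
      maxSurge: 1        # bring one new pod up before an old one is removed
```

Combined with readiness probes, a new pod only receives traffic once it reports healthy, and a failed rollout can be rolled back to the previous revision.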

Promotes experimentation

Various deployment strategies are easy to implement, including canary, A/B, and blue/green. These provide teams with a platform to roll out new features to a select number of users before releasing to their entire customer base.

Architecture

  • Underlying host. This can be a physical, virtual, or cloud-based host
  • Host container runtime. The container runtime (commonly Docker) running on the host
  • Orchestration. The entire cluster, where the pieces of Kubernetes live
  • Application management. Tools and processes built in to keep the application running