What is Container Orchestration?
Container orchestration is a method for managing and coordinating large numbers of containers.
If your application consists of only a few containers, you can run, configure and manage them with Docker commands alone. But enterprise applications may well consist of tens or even hundreds of containers, and managing them grows harder over time. This is where container orchestration tools step in.
In IT, container orchestration is used in dynamic environments with many containers and frequent container launches. It automates tasks such as creating and starting containers, distributing resources appropriately among them, scaling containers on demand, moving containers between servers for maintenance or when resources run short, balancing load, and continuously monitoring the health of host servers and containers.
How does Container Orchestration work?
There are tools such as Kubernetes and Docker Swarm for container orchestration. Depending on the tool, configuration is written in YAML or JSON files. These files control everything from where the orchestration tool pulls container images (a private registry, or a public one like Docker Hub) to the properties of the network connections between containers, where logs are stored, and how storage volumes are attached to containers. Orchestration configurations are typically kept in a version control system such as Git, so that separate branches can serve the development, test and production environments.
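As a small sketch of such a configuration file, a Kubernetes-style YAML manifest (application name and registry below are hypothetical examples) can pin both the image source and the number of instances:

```yaml
# Hypothetical Kubernetes Deployment manifest; names and registry are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # example application name
spec:
  replicas: 3              # run three container instances
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          # pulled from a private registry; a Docker Hub image would just be "nginx:1.25"
          image: registry.example.com/team/web-app:1.0.0
```

Files like this are usually committed to Git, so each branch (development, test, production) can carry its own version of the manifest.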
The orchestration tool is then responsible for launching containers in each environment (development, test and production).
Container images are pulled from a public or private registry and launched as replicated groups. This allows the desired number of container instances to run, distributed across the servers.
When a container needs to be placed on a server, the orchestration tool looks for the most suitable host. It automatically assesses available memory and CPU, along with any host constraints (for example, a rule that a specific container must run on a specific server), then selects the best server and starts the container there. In other words, besides resource availability, containers can be distributed across servers according to constraints and rules defined in the metadata of containers and servers.
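A sketch of such placement rules, in Kubernetes terms (label names and values below are hypothetical): the scheduler will only place this pod on a server that has the requested CPU and memory free and carries the matching label.

```yaml
# Hypothetical pod spec fragment illustrating resource- and metadata-based placement.
apiVersion: v1
kind: Pod
metadata:
  name: db-cache
spec:
  nodeSelector:
    disktype: ssd          # only schedule onto servers labeled disktype=ssd
  containers:
    - name: cache
      image: redis:7       # public image from Docker Hub
      resources:
        requests:
          memory: "256Mi"  # scheduler reserves this much memory on the chosen host
          cpu: "500m"      # half a CPU core
```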
Another important point about distributing containers across host servers is that orchestration tools make it far easier to run containers on public clouds such as Amazon Web Services (AWS), Google Cloud Platform and Microsoft Azure, as well as on private clouds.
What are the available Container Orchestration Tools?
There are many container orchestration tools, and the most widely used are open source. The three most popular are as follows:
Kubernetes
Kubernetes is the de facto standard among container orchestration tools. It grew out of Borg, a cluster manager Google originally developed for its own internal use.
Google released it as open source in 2014. Today it is backed by cloud computing giants such as AWS, Microsoft, IBM, Intel and Cisco.
Kubernetes contributes significantly to the DevOps approach by decoupling development teams from hardware administration and by enabling PaaS (Platform as a Service) models, and this contribution has boosted its popularity.
Main Components of Kubernetes
Cluster: A set of virtual or physical servers organized in a master-slave arrangement. In Kubernetes terminology, the subordinate servers are called 'workers'. Workloads, i.e. the containers, are distributed onto these worker servers by the master server, which weighs numerous factors.
Master Server(s): Manages the distribution and installation of containers onto servers by sending commands to the worker servers. The Kubernetes API server runs on the master.
Kubelet: Runs on each worker server. It starts, stops and manages container-based applications according to the commands sent by the Kubernetes API server.
Pod: A group of containers that share the same IP address and are run, stopped and scaled together. Containers in the same pod always run on the same server and share the same resources. Pods are defined in a configuration file, in JSON or YAML.
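A minimal sketch of a two-container pod (names below are hypothetical): because both containers share one network namespace and IP address, the helper can reach the web server over localhost.

```yaml
# Hypothetical pod with two containers sharing one IP address.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
    - name: web
      image: nginx:1.25      # serves HTTP on port 80
    - name: helper           # sidecar container in the same pod
      image: busybox:1.36
      # polls the web container via localhost: same pod, same network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```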
Docker Swarm
Docker Swarm is the container orchestration tool developed and supported by Docker itself, integrated into the Docker engine. It is much easier to start containers with than Kubernetes, but it lacks Kubernetes's extensible feature set. Mirantis, which acquired Docker Enterprise (docker-ee), the commercial edition of Docker, made the following announcement about Docker Swarm:
"The primary orchestrator going forward is Kubernetes.
Mirantis is committed to providing an excellent experience to all Docker Enterprise platform customers and currently expects to support Swarm for at least two years, depending on customer input into the roadmap. Mirantis is also evaluating options for making the transition to Kubernetes easier for Swarm users."
Docker Swarm can therefore be considered a deprecated solution.
For the rest of the announcement, please visit:
Main Components of Docker Swarm
Swarm: The equivalent of a server cluster: a set composed of manager and worker servers.
Service: A set of tasks, defined by the swarm administrator, that the worker servers execute. The service definition specifies which container image to use and which commands to run in each container.
Manager Server: Manages the swarm and dispatches commands to the worker servers.
Worker Server: The servers that carry out the commands sent by the manager server and run your workloads. Workers continuously report the current state of the swarm back to the manager.
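A minimal Swarm service definition, as a sketch in the Compose stack-file format (the service name and port mapping below are hypothetical): it names the image, the replica count and a placement constraint, which the manager then enforces across the workers.

```yaml
# Hypothetical stack file; deployed with: docker stack deploy -c stack.yml mystack
version: "3.8"
services:
  web:
    image: nginx:1.25
    deploy:
      replicas: 3                 # the swarm keeps three tasks of this service running
      placement:
        constraints:
          - node.role == worker   # only schedule onto worker nodes
    ports:
      - "8080:80"                 # publish the service on port 8080
```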
Apache Mesos and Marathon
Apache Mesos is open-source software originally developed at the University of California, Berkeley, and is used by large organizations such as Uber, Twitter and PayPal.
Mesos offers a deliberately simple interface and scales to as many as 10,000 servers. It can be extended with frameworks written in programming languages such as Java, C++ or Python.
However, Mesos itself focuses more on server management. Various Mesos-based container orchestration tools, such as Marathon, have been developed and are ready for use in production systems.
Main Components of Apache Mesos and Marathon
Master Daemon: Runs on the master servers and manages the agent daemons running on the slave servers.
Agent Daemon: Runs on the slave servers and executes the commands sent by the framework (Marathon).
Framework: Mesos itself is not a container orchestration tool. Marathon receives resource offers from the Mesos master daemon and launches tasks through it; in Mesos terminology, Marathon is a 'framework'.
Offer: The Mesos master daemon collects information such as the free memory and CPU on the managed servers and forwards it to Marathon. This message format is called an offer. Through offers, Marathon learns the resource state of the servers and distributes containers accordingly.
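In Marathon, an application is described in a JSON definition; the framework then accepts Mesos offers that can satisfy the declared resources. A sketch (values and image below are hypothetical):

```json
{
  "id": "/web-app",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:1.25" }
  }
}
```

Here Marathon will only launch a task on a slave whose offer includes at least 0.5 CPU and 256 MB of memory.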
Which Container Orchestration Tool Should You Choose?
Each of the container orchestration tools above has advantages and disadvantages. In short, Docker Swarm is a good fit if you have a small organization and scalability is not a priority. If you have tens of containers in your portfolio, however, consider Kubernetes, whose popularity is growing rapidly. In terms of learning curve, Docker Swarm is the easiest and most practical option, while Kubernetes demands the most expertise and technical know-how.
As Consulta, we are always ready to assist you with our solution architects and technical team on container orchestration approaches, Kubernetes, CI/CD practices and container development processes.