Managing Docker containers with Kubernetes
Container Keeper
After you jump onto the container bandwagon, you will find yourself looking for high-performance solutions for managing the Docker landscape. Several vendors offer special operating system images with built-in container management tools. Red Hat uses Atomic with Google's Kubernetes management tool.
Running applications in containers is becoming increasingly popular. Containers offer many benefits compared with conventional virtual machines. Docker is a popular container system for Linux that needs only a very minimal base system, so using a conventional, multipurpose operating system with its large collection of miscellaneous components adds huge overhead if all you really want to do is host containers.
The container environment lends itself to portability. In large enterprise environments, IT managers do not want to worry about the type of system a container runs on. The focus is on defining an application with the necessary requirements and deploying it on existing resources. Whether the application actually runs in a container on System A or System B is unimportant.
The need for portability and efficiency has led to the development of some special Linux distributions tailored for the container environment. These special distros offer a uniform operating environment, include container management tools, and, perhaps most importantly, are optimized for containers – without the feature bloat associated with multipurpose systems.
In addition to today's popular init system, systemd, and some basic kernel facilities such as SELinux and cgroups, these container systems use only a very small software stack. The toolkit obviously includes Docker as the container engine and, often, Kubernetes [1], a container management and orchestration tool by Google.
The operating system image can operate in many different environments. For example, the image will run on classical bare-metal systems, but it also works in public and private cloud environments, such as Google Compute Engine (GCE), Red Hat OpenShift, OpenStack, or Amazon Web Services (AWS).
Red Hat's Project Atomic [2] is one of these container-based operating systems. Atomic was specially designed for running containers on the basis of Docker and Kubernetes. Project Atomic is the upstream project for several other images. For example, Red Hat [3], Fedora [4], and CentOS [5] rely on the Atomic project to create their own cloud images for use with Docker containers.
These Atomic images are very different from a garden-variety Linux distribution. For example, Atomic does not use a package manager and instead relies on the rpm-ostree tool. Atomic updates are possible because the complete operating system instance resides in a single filesystem path below the /ostree/deploy folder. At boot time, the latest version of the operating system is mounted below the root filesystem, where only the /etc and /var directories are writable. All other folders under /ostree are read-only.
An update now simply means copying a complete new operating system instance from the update server to /ostree/deploy and applying the changes to the configuration files below /etc to the new operating system instance. The /var directory is shared between all instances, because shared components such as the user's home directory live there. (The /home folder is only a symbolic link to /var/home.) To start the new instance of the operating system, reboot the host system.
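If you want to confirm this layout for yourself, a few quick checks on a running Atomic host are enough. The commands below only inspect the paths described above; the deployment names you see will differ on your system:

ls -ld /home           # symbolic link pointing to var/home
ls /ostree/deploy      # one subdirectory per deployed operating system instance
rpm-ostree status      # lists the deployments known to the system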
You cannot install additional software on an Atomic host as you would with RPM or Yum. Instead, you should either run the software in a separate container on the Atomic host or copy it to a folder below /var. In this case, make sure you have statically compiled the programs, and make all changes in each operating system instance in this folder using the rpm-ostree tool – not manually. Calling rpm-ostree upgrade lets you perform an update of the system. If this update does not work the way you imagined it would, you can restore the system to its original state using rpm-ostree rollback. Instead of rpm-ostree, you can also simply call the atomic tool, which points to rpm-ostree through a soft link.
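Put together, a typical maintenance cycle on an Atomic host looks something like the following sketch; the command output is omitted here, and rpm-ostree subcommand names can vary slightly between versions:

rpm-ostree status      # show the currently booted and any pending deployments
rpm-ostree upgrade     # fetch a new operating system instance into /ostree/deploy
systemctl reboot       # boot into the new instance
rpm-ostree rollback    # if something breaks, mark the previous instance for the next boot
systemctl reboot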
Working with Docker Containers
If you are familiar with Docker containers, you know they are also based on images. These images come either from a central or local Docker registry server or are generated with the help of a Dockerfile. You can call docker run to start the desired application within a container. The following example shows the famous Hello World in a Fedora container:
docker run fedora /bin/echo "hello world"
The image with the name fedora is not present locally at this time; instead, Docker downloads it independently from the predefined registry server and then runs the /bin/echo "hello world" command within the Fedora instance. Then the container terminates. A call to docker ps -a displays all containers on the host, including those that have already exited.
Instead of using the echo command, you could, of course, call a script at this point to launch a preconfigured web server. If the server uses a database back end, create an additional container with just this database and link the two. In small environments, this approach is certainly perfectly okay, but beyond a certain size, you need a solution that scales better. For example, you would want the ability to start a container or a set of containers on remote hosts. It is also useful to define a status for the applications. If you use Docker to start a container on a host, there are no guarantees that the container will restart on another system in case of a host failure.
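For the small-environment case, linking a web server container to a database container with plain Docker might look like the following sketch; the database image name and the link alias are placeholders, not part of this article's setup:

docker run -d --name db fedora/postgres          # hypothetical database image
docker run -d --name web --link db:db -p 80:80 fedora/apache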
Container Orchestration with Kubernetes
Kubernetes offers management and orchestration for container environments. The tool consists of a wide range of services, some of which run on a central control system – the master host – and others on each Docker host (aka, "the minions"). The API service on the master provides a REST API as a communication channel through which it receives service instructions from clients. An instruction might, for example, generate a specific container – a pod in Kubernetes-speak – on an arbitrary minion host.
Usually a pod houses containers for services you would like to install together on conventional systems. A file in JSON format contains all the necessary information – for example, which image should be used for the pod's container and the port on which the service within the container listens. The minion hosts run an agent service, the "kubelet," and receive their instructions from the master.
The etcd service is used as the communication bus. Etcd is a distributed key/value database [6] that relies on simple HTTP GET and PUT statements. The etcd database stores the configuration and status information for the Kubernetes cluster and returns the data in JSON format when needed. The kubelet service on a minion host constantly queries the database for changes and, if necessary, implements them. For example, the database can contain a list of all minion hosts in the cluster. This information is then used by the API service to find hosts on which to generate new containers.
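To get a feel for how simple the etcd interface is, you can talk to it directly with curl. The key name, hostname, and port below are assumptions for illustration only (older etcd releases listened on port 4001, newer ones use 2379):

curl -L -X PUT http://atomic.example.com:4001/v2/keys/message -d value="hello etcd"
curl -L http://atomic.example.com:4001/v2/keys/message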
Installing the Atomic Host
To dive into the world of Kubernetes, download one of the Atomic host images available for Red Hat Enterprise Linux [3], Fedora [4], or CentOS [5] and install it in your virtualization environment. For this article, I used a local KVM-based installation on Fedora 21 with a CentOS 7 Atomic image. The image is easily installed using the virt-manager tool or virt-install. For setup instructions for different virtualization environments, see the Red Hat documentation [7]. Note that newer versions of Fedora and updates for CentOS 7 have appeared since the versions used in this article. Container technologies are in rapid development, so you might find some differences from this configuration in your own environment, but the concepts and basic procedures are similar.
The first time you create a virtual machine, you will need to provide a CD in the form of an ISO file. The file contains basic information about the virtual Atomic system, such as the machine name and the password for the default user. You can also pass in an SSH key for logging on to the system, or the desired network configuration. Create the meta-data and user-data files for this purpose and use them to generate the ISO file (Listing 1); then, provide the file to the Atomic host as a virtual CD drive. When you first start the system, the cloud-init service [8] parses the information you provided and configures the system.
Listing 1
Meta Configuration Files
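The listing itself is not reproduced here, so the following is only a minimal sketch of what the two files and the ISO-building step typically look like with cloud-init's NoCloud data source; the hostname, password, and SSH key are placeholders you must replace:

# meta-data: instance identity for cloud-init (values are examples)
cat > meta-data << EOF
instance-id: atomic-master-001
local-hostname: atomic.example.com
EOF

# user-data: password and optional SSH key for the default user
cat > user-data << EOF
#cloud-config
password: secret
ssh_pwauth: True
chpasswd: { expire: False }
ssh_authorized_keys:
  - ssh-rsa AAAA...your-public-key... user@example.com
EOF

# pack both files into an ISO image labeled "cidata"
genisoimage -output atomic-init.iso -volid cidata -joliet -rock meta-data user-data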
If the installation and configuration work, you can then log on to the virtual system to perform an update. In this case, a new instance of the system is downloaded and then activated at the next system start-up time:
ssh centos@atomic.example.com
rpm-ostree upgrade
systemctl reboot
This system will act as the master host; directly after configuring it, you can install a second host with the same image. The second host will act as the minion host running the container pods. Of course, at this point, you can install as many minions as you like. A single minion host, however, is enough to understand the basic functionality of Kubernetes. To set up the minion host, generate a second virtual machine and, as described in Listing 1, an additional ISO file, which is then available for the minion host installation. After the installation, update this system and restart.
Once the master and minion are up to date, add the two computers to the /etc/hosts file and modify the Kubernetes configuration file /etc/kubernetes/config. Enter the master server on both systems via the KUBE_ETCD_SERVER variable. The current version of Kubernetes only supports a single master, but this will change in future releases. On the master, modify two more files: the /etc/kubernetes/apiserver and /etc/kubernetes/controller-manager files. Define the hostname and the port for the API service, as well as the minion server hostname. Following this, start all the necessary services on the master and then make sure everything is working correctly:
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler
systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler
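For orientation, the entries in the configuration files mentioned above might look roughly like the following. The variable names and option syntax changed between early Kubernetes package versions, so treat every name and value here as an assumption to be checked against the comments in your own files:

# /etc/kubernetes/config (master and minion) -- where to find etcd;
# some packages call this variable KUBE_ETCD_SERVERS instead
KUBE_ETCD_SERVER="--etcd_servers=http://atomic.example.com:4001"

# /etc/kubernetes/apiserver (master only) -- listen address and port of the API service
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"

# /etc/kubernetes/controller-manager (master only) -- the known minion hosts
KUBELET_ADDRESSES="--machines=atomic-host-001"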
On the minion host, in addition to the /etc/kubernetes/config file, you need to customize the minion agent's configuration. Open the /etc/kubernetes/kubelet file and add the hostname, port, and IP address on which you want the service to listen. Then start the necessary services:
systemctl start kube-proxy kubelet docker
systemctl enable kube-proxy kubelet docker
systemctl status kube-proxy kubelet docker
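Again as a rough sketch only – the exact variable names depend on the Kubernetes package version – the kubelet settings edited in the previous step could look something like this:

# /etc/kubernetes/kubelet (minion only) -- address, port, and name the agent announces
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=atomic-host-001"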
At this point, you should see the minion host on the master. The kubectl tool is used to communicate with the API server:
kubectl get minion
NAME
atomic-host-001
In the next step, you can create your first pod. As a reminder, this means one or more containers that are provided on one of the available minion hosts. The pod is defined in a JSON-format file that contains all the information for the pod: the Docker image to use, the service ports, and optional port mappings between the container and the host. You can also decide which host filesystems you want to bind to the container. This step is especially important because, if you don't bind the container to a host filesystem, any data written within the container is lost when the container terminates.
Each pod can be equipped with one or more labels. For example, you could assign the labels name=apache and stage=prod in the JSON file for all Apache servers in production. With a corresponding query via kubectl, you can then very easily identify your production Apache servers and discover which minions they are currently running on. But first you need to create your first pod with the file in Listing 2. Call the kubectl tool as follows:
Listing 2
Definition of a Pod
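The listing itself is not reproduced here; as a rough sketch, a pod definition in the v1beta1 API used at the time might look like the file written below. Field names changed considerably between early Kubernetes releases, so treat the structure purely as an illustration:

# /tmp/apache-pod.json -- illustrative only; check the API version of your cluster
cat > /tmp/apache-pod.json << 'EOF'
{
  "id": "apache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "labels": { "name": "apache", "stage": "prod" },
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "apache",
      "restartPolicy": { "always": {} },
      "containers": [
        {
          "name": "apache",
          "image": "fedora/apache",
          "ports": [ { "containerPort": 80, "hostPort": 80 } ]
        }
      ]
    }
  }
}
EOF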
kubectl create -f /tmp/apache-pod.json
In the background, the Docker process starts its work on the minion and begins to download the fedora/apache image if it does not already exist. This download can take quite a while. When you call kubectl again, you should see that the container is active (Listing 3).
Listing 3
Kubectl Shows Active Pods
To see if the Apache service in the container is working as usual, make a simple call to curl:
curl http://atomic-host-001
Apache
If you have multiple Apache containers running in your environment, you can restrict the output of kubectl get pods based on the previously defined labels. The command

kubectl get pods -l name=apache -l stage=prod

tells Kubernetes to show you only the containers that carry both labels: name=apache and stage=prod.
As you can see in Listing 3, the definition of the pod also contains a note to the effect that a container needs to be restarted immediately in the event of an error (restartPolicy: always). Finding out whether this works is easy: Log on to the minion host via SSH and tell Docker to display the currently active containers (Listing 4).
Listing 4
Docker Listing Active Containers
Now terminate the container manually by entering:
docker stop a9548bd9ecb1
After a short time, you will notice that Kubernetes automatically launches the container again. Watch the value in the CREATED column in the output from docker ps before and after manually stopping the container.
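A quick way to observe this behavior is the following sequence; the container ID is the one from Listing 4, and the waiting time is only a rough guess:

docker ps                  # note the CREATED column
docker stop a9548bd9ecb1   # stop the Apache container manually
sleep 60                   # give the kubelet a moment to react
docker ps                  # the container is back with a fresh CREATED timestamp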