“Hello World” in Kubernetes

Posted: 11th November 2018 by knoppi in Computer Stuff, Linux

A “Hello World” in Kubernetes should be very similar to one in Docker. Oh no, wait, actually, it should not.

Using Kubernetes does not necessarily require knowledge about Docker. Of course, it is essential to know what containers are, but understanding every option of Docker that you can use on the command line is not important.

So, while in Docker you would start with something like

docker run hello-world

there is nothing similar for Kubernetes.

But the docker command runs (and, if necessary, fetches) an image called hello-world and creates a container from it. Kubernetes would use this image, too. This is the overlap between the two approaches from a user’s perspective. (Underneath, of course, Kubernetes talks to a container runtime such as Docker, but the human being controlling it does not need to bother with that.)

So, how would Kubernetes create a container out of the hello-world image? First thing: Kubernetes users don’t deal with containers directly, they look at pods (or more complex entities). In the simplest case, a pod is just a container plus some information about its lifecycle. Running the hello-world image requires a file with something like this in it:

apiVersion: v1
kind: Pod
metadata:
    name: hello-world
spec:
    containers:
        - name: hello-world
          image: hello-world

Assuming the file is called hw.yaml, you would create a pod with the command kubectl create -f hw.yaml. For the sake of simplicity, I assume that a Kubernetes cluster already exists and that kubectl is correctly set up. The reply would then be

pod/hello-world created

Actually, no “Hello World” message is visible at this point. To make it appear, type

kubectl logs hello-world

and the screen should show

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

just as with the plain docker command before. And at this point, we’re finished with the “Hello World”.

Advantages?

In this scenario, there is apparently no benefit in using Kubernetes. Additionally, on a real cluster, the pod status might be irritating: kubectl get pod would most likely show CrashLoopBackOff for hello-world. This happens because Kubernetes, by default, restarts containers when they exit. The container used here just executes one command and exits, which leads to the described behaviour.
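If that status is bothersome, the restart behaviour can be set explicitly. The restartPolicy field is part of the standard pod spec; a variant of the manifest above that lets the container exit without being restarted might look like this:

apiVersion: v1
kind: Pod
metadata:
    name: hello-world
spec:
    restartPolicy: Never    # do not restart the container after it exits
    containers:
        - name: hello-world
          image: hello-world

With this, kubectl get pod should eventually report the status Completed instead of CrashLoopBackOff.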

But this example also tells a bit about the purpose of Kubernetes. First, it is meant for long-running jobs that can sustain a short interruption. Second, it is meant for non-interactive jobs — or only for indirect interaction.

Kubernetes is a resource manager that can handle two types of resources: compute power and storage. Users deliver requests for one or both of them. Unfortunately, this is done within a kind of hierarchy of objects which, in Kubernetes terminology, are themselves called resources. A pod, for instance, is a Kubernetes resource. A “service” is also a Kubernetes resource. You can create, modify or delete them using commands of the form

kubectl [create|edit|delete] [pod|service|...]

Users interact with the Kubernetes API, usually via the kubectl tool, which is installed on a master of the cluster. The actual work is done by the nodes, but the user never interacts with them directly. In this sense, Kubernetes is very similar to conventional resource managers. Think of compute clusters that do batch processing for data science, engineering or similar tasks. They most often run something like OpenPBS / TORQUE / HTCondor, or, by now more often, Nomad or Mesos. That is probably the reason why Kubernetes is so often compared to those, and in a general sense this is a valid comparison. Indeed, I considered all of those tools when setting up a scientific cluster.

Kubernetes is limited to containerized software — in my case, this was a reason not to choose it. Moreover, it measures CPU in millicores (thousandths of a CPU core) and allows jobs to “burst” above what they requested. In classical resource managers, users request a whole CPU, which is guaranteed for the requested time. In Kubernetes, on the other hand, the assigned resources are more volatile. Think of a cluster of servers for web applications. They might house the containers of companies from all over the world.
Naturally, the visits depend on the different timezones. Also, special events might trigger higher usage of a certain service. That’s where Kubernetes can jump in very easily and redistribute the cluster resources between the users.
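The request/limit mechanism behind this bursting can be sketched in a pod manifest. The resources field below is part of the standard container spec; the pod name, image and concrete values are just illustrative:

apiVersion: v1
kind: Pod
metadata:
    name: webapp
spec:
    containers:
        - name: webapp
          image: nginx              # illustrative image choice
          resources:
              requests:
                  cpu: 250m         # guaranteed: a quarter of a CPU core
                  memory: 128Mi
              limits:
                  cpu: "1"          # may burst up to one full core
                  memory: 256Mi

The scheduler places the pod based on its requests; between request and limit, the container may consume spare capacity that other tenants are not currently using.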

Suggested readings: