{"id":104,"date":"2020-04-07T15:18:31","date_gmt":"2020-04-07T13:18:31","guid":{"rendered":"http:\/\/blog.nikster.de\/wordpress\/?p=104"},"modified":"2020-04-07T15:18:31","modified_gmt":"2020-04-07T13:18:31","slug":"glossary-on-linux-container-technology-runtimes-and-orchestrators","status":"publish","type":"post","link":"https:\/\/blog.nikster.de\/wordpress\/index.php\/2020\/04\/07\/glossary-on-linux-container-technology-runtimes-and-orchestrators\/","title":{"rendered":"Glossary on Linux container technology, runtimes and orchestrators"},"content":{"rendered":"\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Microservices<\/td><td>Lots of \u201eslim\u201c and &#8220;autonomous&#8221; Processes, scalable seperately and, in the best case, replicable.<br><br><strong>Pro:<\/strong> scalability and resilience<br><strong>Con:<\/strong> complexity and traceability\/transparency (Zipkin \u2192 Tracetool)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Container \/ Vms<\/td><td>Container technologies (here: Docker, rkt, etc.) use Linux Namespaces to enable runtime isolation for processes on the underlying OS (read: underlying Kernel, LXC, runc, etc.).<br><em>Example Namespaces are:<\/em><br>&#8211; mnt<br>&#8211; pid<br>&#8211; network<br>&#8211; user<br>&#8211; etc. (uts, ipc)<br><br><em>a little bit<\/em> like chroot (-&gt;mnt) but much more granular.<br><em>different from<\/em> (e.g.) <strong>Vms<\/strong> \u2192 Hypervisor on Ring X + dedicated OS, sep. Kernel) <em>or<\/em> <strong>Jboss<\/strong> as Containers (Runtime environments are located somewhere in between on the OS, Isolation through standard OS utils).<br><br> <strong>Pro<\/strong> <strong>\u201eContainer\u201c:<\/strong> saves resources, replicable (e.g. Images w. 
Pipeline)<strong><br> Pro VMs:<\/strong> security<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Namespaces + cgroups<\/td><td><strong>Linux Namespaces<\/strong>: <br>&#8220;quasi pseudo virtual machines&#8221; (overlays for the filesystem (mnt), network, PIDs, etc.).<br><br><em>build them yourself:<\/em><br>&#8220;unshare --fork --pid --mount-proc bash&#8221; <br>(forks into a new PID namespace through the syscall unshare and starts a bash)<br><br><em>there you go:<\/em><br>PID 1 in a new namespace (open a second shell to see the first one with ps).<br>&#8220;ps aux&#8221; \u2192 note the pts and call &#8220;ps aux&#8221; from <em>outside<\/em> the new namespace to see its &#8220;<em>real<\/em>&#8221; PID.<br><br><em>nsenter:<\/em><br>runs a program in the respective namespace (from the outside, read: the OS; just look up the PID and go ahead).<em><br>Example:<\/em><br>&#8220;sudo nsenter -t 13582 --pid htop&#8221; (-t = PID of the &#8220;namespace bash&#8221;, --pid = type of namespace, followed by the program to run)<br><br><em>Cgroups:<\/em> <br>they limit and\/or prioritize the resources of the respective namespace <br>(similar to nice or quota).<br><em>Example:<\/em><br>&#8220;cgcreate -a nkalle -g memory:memgrp&#8221; <br>(create a cgroup; -a = allowed user, -g = controller (in this case: memory):path (specify your own))<br><br><em>Seccomp-bpf:<\/em><br>Use it to limit the syscalls available in your namespace (read, write, kill, etc.).<br>In Docker it is configured via &#8220;<strong>--security-opt<\/strong>&#8221; and in kvm through &#8220;<strong>--sandbox<\/strong>&#8221;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Container Runtime<\/td><td>There are several types of &#8220;container runtime&#8221;.<br>In principle, anything that runs containers can be called a &#8220;Container Runtime&#8221;. 
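The cgroup and unshare building blocks above can be strung together into a minimal hand-rolled container. A sketch, assuming root privileges and the libcgroup tools; the 256M limit is illustrative, nkalle and memgrp come from the examples above:

```shell
# minimal hand-rolled container from standard Linux tools (run as root)
cgcreate -a nkalle -g memory:memgrp          # create a memory cgroup
cgset -r memory.limit_in_bytes=256M memgrp   # cap the memory of the group (limit is an assumption)
cgexec -g memory:memgrp \
  unshare --fork --pid --mount-proc bash     # start bash as PID 1 in a new PID namespace
# from another shell: find the real PID with ps aux, then enter the namespace:
#   nsenter -t $REAL_PID --pid htop
```

Inside the new bash, ps aux only shows processes of the new PID namespace; the cgroup limit applies to everything started from it.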
<br><br>But:<br>There are some standards, though, created by the <strong>OCI<\/strong> (Open Container Initiative).<br>For example: the <strong>runc<\/strong> library (the reference implementation of the OCI runtime specification).<br> <br>One could say Docker&#8217;s contribution to containerization is &#8220;only&#8221; the ease of running containers through more or less standardization.<br>Anyway: <br><br>Runtimes can be divided into <strong>high- and low-level runtimes.<\/strong><br><br><strong>Low<\/strong> (the &#8220;real&#8221; runtime): lxc, runc (they &#8220;run&#8221; containers based on namespaces and cgroups) <br><strong>High<\/strong>: CRI-O (Container Runtime Interface), containerd (Docker) \u2192 they implement additional features like APIs (Docker uses containerd for features like downloading and unpacking images, etc.; the high-level runtimes themselves use runc, though.) <br> <em>rkt<\/em> could be considered high-level (it uses runc, implements lots of features, and is said to be more secure than Docker)<br> <br>build your own &#8220;container runtime&#8221; with standard Linux tools:<br>create a cgroup (cgcreate)<br>set attributes for the cgroup (cgset -r) <br>execute commands in it (sudo cgexec -g memory:memgrp bash)<br>move it into its own namespace (unshare)  <\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Docker<\/td><td>Docker consists of:<br><strong>dockerd<\/strong> (image building, e.g.: &#8220;docker build&#8221; with a Dockerfile)<br><strong>containerd<\/strong> (image management, e.g.: &#8220;docker image list&#8221;, etc.) (high-level runtime), https:\/\/containerd.io\/img\/architecture.png<br><strong>runc<\/strong> (the library containerd uses to spawn containers, etc.)<br><strong>Docker images<\/strong> consist of layers (e.g.: a Debian base image). 
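The layer and command descriptions in this section can be combined into a typical image workflow. A sketch; rootfs.tar, the tag myrepo/stretchbase:v1 and the Dockerfile directory are hypothetical placeholders, stretchbase and apacheperl come from the commands listed here:

```shell
# typical Docker image workflow using the commands from this section
docker import rootfs.tar stretchbase          # tarball -> image (path is hypothetical)
docker image ls                               # list local images
docker tag stretchbase myrepo/stretchbase:v1  # name and tag it (tag is hypothetical)
docker run -i -t stretchbase bash             # interactive bash in a container
docker build -t apacheperl .                  # build from a Dockerfile in the current dir
docker run -d --name apache-perl-test -p 8082:80 apacheperl
docker logs apache-perl-test                  # inspect the logs of the running container
```

Each build step adds a layer on top of the imported base image; only the topmost layer of a running container is writable.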
<br>So the base image may be the same for all containers, but the applications may be different.<br>For example: all applications may use the same libs (write-protected layer) but also bring their own libs (&#8220;open&#8221; layer).<br>If something must be written to one of the underlying layers, it is done in a copy of that layer, which resides in memory.<br><br>important commands:<br><strong>docker import<\/strong> (imports an image, from a tarball for example)<br><strong>docker image ls<\/strong> (shows all images)<br><strong>docker tag<\/strong> (names the image)<br><strong>docker run -i -t stretchbase bash<\/strong> (executes a command (bash) in a container based on the image stretchbase and attaches you to that bash)<br><strong>docker run -d --name apache-perl-test --network test-net -p 8082:80 apacheperl<\/strong><br><strong>docker login<\/strong> (logs you into the respective Docker registry, default is Docker Hub)<br><strong>docker push<\/strong> (pushes the image to the respective registry)<br><strong>docker build<\/strong> (builds a new image from a base image and a Dockerfile)<br><strong>docker exec<\/strong> (executes a command in an existing container, good for debugging)<br><strong>docker info<\/strong> (status information)<br><strong>docker logs<\/strong> (shows the logs of a specific container)<br><strong>docker inspect<\/strong> (shows the properties of a specific container)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Kubernetes<\/td><td><em>could be defined as: <\/em><br><em>a super-high-level container runtime<\/em> with CRI (Container Runtime Interface) as the bridge between the kubelet (&#8220;the heart&#8221; of Kubernetes: the agent on masters and nodes that manages everything) and the runtime. 
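Since the kubelet talks to the runtime over CRI, the same RPCs can be issued by hand with crictl. A sketch, assuming a containerd setup; the socket path and the nginx image are assumptions, and crictl usually needs root:

```shell
# talk to a CRI runtime directly, like the kubelet does
# (socket path is an assumption for containerd setups)
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl pull nginx:latest   # RPC: pull an image, as the kubelet would
crictl images              # list images known to the runtime
crictl ps -a               # list all containers
crictl pods                # list pod sandboxes
```

This is handy for debugging a node when kubectl is unavailable, because it bypasses the Kubernetes API server entirely.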
<br><br>A container runtime that wants to work with Kubernetes must support CRI.<br>https:\/\/storage.googleapis.com\/static.ianlewis.org\/prod\/img\/772\/CRI.png<br>supported (CRI) runtimes:<br><em>Docker<\/em><br><em>containerd<\/em> (more or less what Docker uses)<br><em>CRI-O<\/em><br><br>CRI specs:<br>CRI is a gRPC API (modern RPC) with a Google-specific protocol.<br>(Protocol Buffers: https:\/\/developers.google.com\/protocol-buffers\/) \u2192 a serialization interface a la XML<br><br>The kubelet uses CRI to communicate with the runtime (RPCs, e.g.: pull image, start\/stop image\/pod, etc.)<br>This can also be done manually via <em>crictl<\/em>.<br><br>Kubernetes consists of masters and workers.<br><br>One (or more) <strong>Master<\/strong>s contain the management tools:<br><strong><em>API server<\/em> <\/strong>(connects the components of the cluster)<br><strong><em>Scheduler<\/em> <\/strong>(assigns the components (e.g. applications, pods, etc.) to the workers)<br><strong><em>Controller manager<\/em><\/strong> (manages cluster outages, replication, worker status, etc.)<br><strong><em>etcd<\/em><\/strong> (distributed data storage, which always contains the status of the cluster as a whole)<br><br>One (or more) <strong>Worker<\/strong>s contain the <strong>Pods<\/strong> and\/or applications:<br><em><strong>Docker<\/strong><\/em> (or another runtime) runs there (and may also run on the masters)<br><strong><em>Kubelet<\/em><\/strong> (the agent responsible for communication between master and worker, which manages the containers, etc.)<br><strong><em>Kube-Proxy<\/em><\/strong> (network and application proxy)<br><br><strong>How does it work?<\/strong><br>To run an application (maybe consisting of several microservices), it must be described (in a yaml manifest) and published to the API.<br><br>The<strong> <em>Manifest<\/em><\/strong> contains all information about the components\/applications: how they relate to each other (e.g. 
Pods), the workers they should be running on (optional), how many replicas there should be (optional) and much more (more on the topic and the formats: <a href=\"https:\/\/blog.nikster.de\/wordpress\/index.php\/2020\/01\/26\/how-to-set-up-a-devops-pipeline-with-gitlab-and-kubernetes\/\">here<\/a>).<br><br>The <em><strong>Scheduler<\/strong><\/em> decides which container group (pod) should run on which node (monitoring the cluster resources and doing some magic).<br><br>The <strong><em>Kubelet<\/em><\/strong>, for example, tells the runtimes to download images, execute the pods and much more.<br><br><strong><em>etcd<\/em><\/strong> continually tracks and monitors the status of the cluster.<br>In case of a malfunctioning component (e.g.: the death of a node), the pod would be restarted on another node by the controller manager and the kubelet. <br><br><strong><em>important commands<\/em><\/strong>:<br><em>kubectl<\/em> (the main command for management, settings, deployments, etc.)<br><br>kubectl cluster-info (status info)<br>kubectl cluster-info dump (dumps the etcd content to the console)<br>kubectl get nodes (lists all nodes)<br>kubectl describe node $nodename (returns the properties of the node: CPU, memory, storage, OS, etc.)<br>kubectl get pods (lists all pods) (tip: -o wide)<br>(tip: -o yaml returns the pod description as yaml, useful for defining new pods)<br>kubectl get services (lists exposed pods\/replicasets = services)<br>kubectl delete (service|pod|etc.) (deletes)<br>kubectl port-forward $mypod 8888(localhost):8080(pod) (creates a port forwarding on localhost, useful for debugging)<br>kubectl get po --show-labels or 
-L $labelname (groups pods by labels)<br><br><em><strong>how to create pods manually<\/strong><\/em> (not recommended, except for testing)<br><br><em>kubectl run<\/em> blabb --image=gltest01.server.lan:4567\/nkalle\/jacsd-test\/jacsd:master<br>--port=8081 --generator=run-pod\/v1 --replicas=1 (creates a replication controller with 1 replica; without --replicas you get a normal pod)<br><br><em>kubectl scale<\/em> rc blabb --replicas=3 (scales the replicas manually)<br><br><em>kubectl expose<\/em> (rc|pod) --type=(ClusterIP|LoadBalancer|etc.)\\<br>--name blabb-http --(external|load-balancer)-ip=10.88.6.90 --target-port=8080 --port=6887 (target-port is inside the container, port is on the external IP)<br>(exposes a pod\/service outside the cluster\/node)<br><br>API docs: https:\/\/kubernetes.io\/docs\/reference\/<br><br><strong><em>Pods<\/em><\/strong><br>Pods are groups of containers which exist in the same namespace and on one worker. <br>If a pod contains several containers, they run on the same worker (a pod may never span multiple workers).<br>A process should run inside one container, and a container should run inside a pod.<br>Everything that should be able to scale separately should also get its own pod.<br>(Example: Apache frontend\/MySQL backend \u2192 2 pods).<br><br><em><strong>create pods with yaml<\/strong><\/em><br><br>Usually pods aren&#8217;t created manually with the above commands, but through yaml files which are published to the Kubernetes API.<br>Read more about it <a href=\"https:\/\/blog.nikster.de\/wordpress\/index.php\/2020\/01\/26\/how-to-set-up-a-devops-pipeline-with-gitlab-and-kubernetes\/\">here<\/a>.<br><br>&#8220;kubectl get pods $podname -o yaml&#8221; is a good start for creating a new pod based on an already existing one.<br><br><em><strong>important parts of the yaml description:<\/strong><\/em><br><em>apiVersion<\/em>: which API version to use (v1 = stable, v1beta = beta; check the link 
above).<br><em>metadata<\/em> (name (name:), namespace, labels, infos)<br><em>spec<\/em> (content\/containers (containers:), volumes, data)<br><em>status<\/em> (status, internal IP, basics; not writable, maintained by Kubernetes!)<br><br>For orientation check the API reference above, but also <em>explain<\/em>:<br><em>kubectl explain<\/em> pods (describes all attributes of the object)<br><em>kubectl explain<\/em> pods.spec (describes all fields of the attribute spec)<br>you get the idea.<br><br>kubectl create -f mypoddefinition.yaml (the API is used through kubectl)<br><br><strong><em>Volumes:<\/em><\/strong><br>A volume may be used by the entire pod, but has to be mounted first.<br>popular storage solutions out there: nfs, cinder, cephfs, glusterfs, gitRepo, and many more&#8230;<br><br>Without a volume for shared use and persistence of data, most pods are useless.<br>For testing, one may use emptyDir, which adds volatile storage to the pod.<br><br>For real persistence though, one will need one of the storage solutions mentioned above (nfs, for example, as it doesn&#8217;t take much effort to set up).<br>This is called a Persistent Volume (or PV).<br>A PV can be configured for the whole cluster via the API:<br>&#8211; PV: kind: PersistentVolume<br><br>Next, a &#8220;Persistent Volume Claim&#8221; has to be made:<br>&#8211; PVC: kind: PersistentVolumeClaim<br><br>Now one may create a pod that references the PVC.<br><br>As with everything in Kubernetes, this is highly customizable, and volumes may be claimed and provided dynamically or statically.<br><br>Read this:<br>https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/<br>https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-persistent-volume-storage\/<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p> <\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microservices Lots of \u201eslim\u201c and &#8220;autonomous&#8221; Processes, scalable separately and, in the best case, replicable. 
Pro: scalability and resilienceCon: complexity and traceability\/transparency (Zipkin \u2192 Tracetool) Container \/ Vms Container technologies (here: Docker, rkt, etc.) use Linux Namespaces to enable runtime isolation for processes on the underlying OS (read: underlying Kernel, LXC, runc, etc.).Example Namespaces are:&#8211; &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/blog.nikster.de\/wordpress\/index.php\/2020\/04\/07\/glossary-on-linux-container-technology-runtimes-and-orchestrators\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Glossary on Linux container technology, runtimes and orchestrators&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[44,40,3,31,43,39,42,41],"class_list":["post-104","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-cgroup","tag-chroot","tag-docker","tag-kubernetes","tag-namespace","tag-runc","tag-runtime","tag-virtual-machines","entry"],"_links":{"self":[{"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/posts\/104","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/comments?post=104"}],"version-history":[{"count":9,"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/posts\/104\/revisions"}],"predecessor-version":[{"id":114,"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/posts\/104\/revisions\/114"}],"wp:attachment":
[{"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/media?parent=104"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/categories?post=104"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.nikster.de\/wordpress\/index.php\/wp-json\/wp\/v2\/tags?post=104"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}