[Short Tip] Plot live-data in Linux terminal

Recently I realized that one of the disks in my server had died. After the replacement, the RAID sync started – and I quickly learned that this was going to take days (!). I also learned that the estimated remaining time jumped up and down massively.

Thus I thought it would be fun to monitor the progress of this. First, I just created a command to watch the estimated remaining minutes (converted into days) every few seconds with watch:

watch 'cat /proc/mdstat |grep recovery|cut -d " " -f 13|cut -d "=" -f 2|cut -d "." -f 1|xargs -n 1 -I {} echo "{}/60/24"|bc'
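
Relying on field 13 is a bit fragile since the exact column position in /proc/mdstat can shift. As a sketch, the finish= field could also be picked by name with awk (assuming the usual finish=XXXmin notation; this prints fractional days):

awk '/recovery/ { for (i = 1; i <= NF; i++) if ($i ~ /^finish=/) { gsub(/finish=|min/, "", $i); printf "%.2f\n", $i / 60 / 24 } }' /proc/mdstat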

But since it was jumping so much I was wondering if I could live-plot the data in the terminal (remote server, after all). There are many ways to do that, even gnuplot seems to have an option for that, but I wanted something simpler. Enter: pipeplot
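
pipeplot is a small Python tool; if it is not packaged for your distribution, it should be installable via pip (assuming the usual PyPI package name):

pip install --user pipeplot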

First I tried to use watch together with pipeplot, but it was easier to just write a short while loop around it:

while true;
do
  cat /proc/mdstat |grep recovery|cut -d " " -f 13|cut -d "=" -f 2|cut -d "." -f 1|xargs -n 1 -I {} echo "{}/60/24"|bc;
  sleep 5;
done \
| pipeplot

And the result is rather nice (it is also shown in the header image of this post).

[Short Tip] Accessing tabular nushell output for non-nushell commands

After I learned how subshells can be executed within nushell I was confident that I could handle that part. But a few minutes ago I ran into an error I didn’t really understand:

❯ rpm -qf (which dwebp)
error: Type Error
   ┌─ shell:24:16
   │
24 │ rpm -qf (which dwebp)
   │                ^^^^^ Expected string, found row

I thought the parameter was somehow provided in the wrong way and put it into quotes: "dwebp". But that didn’t help. I experimented more with sub-shells; some of them worked while others didn’t. The error message was misleading to me, making me think that there was a difference in how the argument is interpreted:

❯ rpm -qi (echo rpm)
Name        : rpm
Version     : 4.16.1.3
Release     : 1.fc34
[...]

❯ echo (which dwebp)
───┬───────┬────────────────┬─────────
 # │  arg  │      path      │ builtin 
───┼───────┼────────────────┼─────────
 0 │ dwebp │ /usr/bin/dwebp │ false   
───┴───────┴────────────────┴─────────

It took me a while to understand what I was looking at – and for the error message to make sense: the builtin nushell command which can return multiple results, and thus returns a table. The builtin nushell command echo, on the other hand, returns a string!

Thus the right way to execute my query is to get the content of the table cell I am interested in via get:

❯ rpm -qf (which dwebp|get path|nth 0)
libwebp-tools-1.2.1-1.fc34.x86_64

Note that nth 0 is not strictly necessary here since there is only one item in the table anyway. But it might help as a reference for future examples.

You don’t have to use a pipe, by the way – there is an even shorter way available:

❯ rpm -qf (which dwebp|nth 0).path
libwebp-tools-1.2.1-1.fc34.x86_64
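
The same pattern works for any external command that expects a plain string argument – for example, passing the path to file (just another external tool, picked here for illustration):

❯ file (which dwebp|get path|nth 0)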

Figuring out the container runtime you are in

Containers are meant to keep processes contained. But there are ways to gather information about the host – like the actual execution environment you are running in.

Containers are pretty good at keeping everyone and everything inside their boundaries – thanks to SELinux, namespaces and so on. But they are not perfect. Thanks to a recent Azure security flaw I became aware of a nice trick using the /proc/ file system to figure out which container runtime a container is running in.

The idea is that the started container inherits some /proc/ entries – among them the entry for /proc/self/. If we are able to run a container of our choice, we can use this knowledge to execute the host's container runtime binary and thereby get information about the runtime.

As an example, let’s take a Fedora container image with podman (and all its library dependencies) installed. We can run it and check the version of crun inside:

❯ podman run --rm podman:v3.2.0 crun --version
crun version 0.20.1
commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

Note that I use an older version here to better highlight the difference between the host and the container binary!

If we now change the execution to /proc/self/exe, which points to the host binary, the result shows a different version:

❯ podman run --rm podman:v3.2.0 /proc/self/exe --version
crun version 1.0
commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

This is actually the version string of the binary on the host system:

❯ /usr/bin/crun --version
crun version 1.0
commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

With this nice little trick we now have insight into the runtime version used on the host – and as shown in the detailed write-up of the Azure security problem mentioned above, this insight can be crucial to identify and exploit the right CVEs to start lateral movement.

Of course this requires us to be able to run containers of our choice, and to have the right libraries inside the container. This example is simplified because I knew the target system and there was a container image available with everything I needed. For a more realistic attack container image, check out whoc.

[Howto] Installing Cilium with Minikube on Fedora

Cilium is a networking plugin for Kubernetes based on eBPF. If you want to give it a try, Minikube is a good option to get started.

Background

I just started with Isovalent – and since I am very much a beginner regarding everything related to Kubernetes I decided to get some hands-on experience with the technology I am going to work with for the foreseeable future.

Isovalent’s offering is an Enterprise version of Cilium which basically manages and secures connections between containers and adds observability on top. It all runs on eBPF and thus is pretty performant. eBPF can run sandboxed programs in Linux kernel space without the need to recompile the kernel; a tiny bit like a “kernel VM”. I always wanted to get my hands dirty with eBPF anyway, and Cilium is a very good way to approach it. But where to start? The answer is: with a small Kubernetes setup based on Minikube, a tiny Kubernetes distribution for testing and fooling around which leaves your main system almost unchanged.

Preparing the environment

Minikube itself runs in a tightly confined environment so as not to disturb your other systems. This abstraction is done via containers or VMs realized via so-called “drivers”. Drivers are available for Docker, VMware, KVM, Podman and others. I decided to go with the KVM driver, so the virtualization bits need to be installed:

❯ sudo dnf install @virtualization
[...]
❯ sudo systemctl start libvirtd
❯ sudo systemctl enable libvirtd
❯ sudo usermod --append --groups libvirt ( whoami )

Note that the last of the above commands uses Nushell syntax and has to be slightly adjusted for Bash or Zsh, as shown below.
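
For reference, the Bash/Zsh equivalent uses command substitution:

sudo usermod --append --groups libvirt "$(whoami)"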

Next we have to install Minikube itself:

❯ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.1M  100 15.1M    0     0  8304k      0  0:00:01  0:00:01 --:--:-- 8300k

❯ sudo rpm -Uvh minikube-latest.x86_64.rpm
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:minikube-1.22.0-0                ################################# [100%]

Also, to install and manage Cilium easily it makes sense to use the Cilium CLI. Unfortunately the CLI is currently not available as an RPM package for Fedora, so we have to download the binary and move it to /usr/local/bin:

❯ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
[...]
❯ sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
cilium-linux-amd64.tar.gz: OK
❯ sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
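
To double-check that the binary ended up in the PATH before moving on, a quick which (a builtin in Nushell, and available in Bash as well) is enough:

❯ which cilium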

Starting Minikube with CNI

We now need to start up our Kubernetes cluster, and it needs to be started in a way that lets us install and use Cilium in it. So we set the network plugin to CNI:

❯ minikube start --network-plugin=cni
😄  minikube v1.22.0 on Fedora 34
✨  Automatically selected the kvm2 driver. Other choices: podman, ssh
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2: 11.47 MiB / 11.47 MiB  100.00% 12.50 MiB p/s 
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
💿  Downloading VM boot image ...
    > minikube-v1.22.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.22.0.iso: 242.95 MiB / 242.95 MiB [ 100.00% 20.05 MiB p/s 12s
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.6 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
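
Before installing anything into the cluster it does not hurt to confirm that kubectl really talks to the new Minikube node – here via the kubectl bundled with Minikube, which is also used throughout the rest of this post:

❯ minikube kubectl -- get nodes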

Installing Cilium into Kubernetes

Since the Cilium CLI is already installed, it is fairly easy to install Cilium into the cluster itself. The installation targets the current kubectl context, so make sure you are operating on the right one, for example with kubectl get nodes. Afterwards, fire up the installation:

❯ cilium install
🔮 Auto-detected Kubernetes kind: minikube
✨ Running "minikube" validation checks
✅ Detected minikube version "1.22.0"
ℹ️  using Cilium version "v1.10.2"
🔮 Auto-detected cluster name: minikube
🔮 Auto-detected IPAM mode: cluster-pool
🔮 Auto-detected datapath mode: tunnel
🔮 Custom datapath mode: tunnel
🔑 Generating CA...
2021/07/13 14:09:33 [INFO] generate received request
2021/07/13 14:09:33 [INFO] received CSR
2021/07/13 14:09:33 [INFO] generating key: ecdsa-256
2021/07/13 14:09:33 [INFO] encoded CSR
2021/07/13 14:09:33 [INFO] signed certificate with serial number 122640105911298337607907666763746132599853501126
🔑 Generating certificates for Hubble...
2021/07/13 14:09:33 [INFO] generate received request
2021/07/13 14:09:33 [INFO] received CSR
2021/07/13 14:09:33 [INFO] generating key: ecdsa-256
2021/07/13 14:09:33 [INFO] encoded CSR
2021/07/13 14:09:33 [INFO] signed certificate with serial number 459020519400202498147292503280351877404424824247
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed...
⌛ Waiting for Cilium to become ready before restarting unmanaged pods...
♻️  Restarting unmanaged pods...
♻️  Restarted unmanaged pod kube-system/coredns-558bd4d5db-8s4f6
♻️  Restarted unmanaged pod kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4-ctq4p
♻️  Restarted unmanaged pod kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-5wkbx
✅ Cilium was successfully installed! Run 'cilium status' to view installation health

The installation went through flawlessly. But does it really work? As mentioned in the last line of the above listing, we can check the status of Cilium easily:

❯ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
Image versions    cilium-operator    quay.io/cilium/operator-generic:v1.10.2: 1
                  cilium             quay.io/cilium/cilium:v1.10.2: 1

We can even go one step further and check the connectivity of the cluster – after all, Cilium is all about proper networking:

❯ cilium connectivity test
ℹ️  Single-node environment detected, enabling single-node connectivity test
ℹ️  Monitor aggregation detected, will skip some flow validation steps
[...]
..
✅ All 11 tests (76 actions) successful, 0 tests skipped, 0 scenarios skipped.

As you can see, Cilium creates a set of pods and a service in a dedicated namespace and runs tests on them afterwards.
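
If you are curious what the connectivity test actually spins up, the dedicated namespace – cilium-test – can be inspected while the test is running:

❯ minikube kubectl -- -n cilium-test get pods,services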

Interacting with the Cilium agent

Let’s have a first look at our installed Cilium environment by running a few commands on the local Cilium agent. First we have to figure out the name of the actual Cilium pod:

❯ minikube kubectl -- -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-8hx2v   1/1     Running   0          35m

With the name of the pod we can now reach into it and execute Cilium commands right inside, for example to query the list of endpoints:

❯ minikube kubectl -- -n kube-system exec cilium-8hx2v -- cilium endpoint list
Defaulted container "cilium-agent" out of: cilium-agent, ebpf-mount (init), clean-cilium-state (init)
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                           IPv6   IPv4         STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                            
208        Disabled           Disabled          7182       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                   10.0.0.70    ready   
                                                           k8s:io.cilium.k8s.policy.cluster=minikube                                                                         
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                   
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                       
                                                           k8s:k8s-app=kube-dns                                                                                              
452        Disabled           Disabled          4506       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-test                   10.0.0.254   ready   
                                                           k8s:io.cilium.k8s.policy.cluster=minikube                                                                         
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                                   
                                                           k8s:io.kubernetes.pod.namespace=cilium-test                                                                       
                                                           k8s:kind=client                                                                                                   
                                                           k8s:name=client2                                                                                                  
                                                           k8s:other=client                                             
[...]  

This list is long, detailed and only really makes sense on a wide monitor. But it already tells us a lot about the current enforcement of ingress and egress policies (here they are not enforced yet).
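
To drill into a single entry, the endpoint ID from the first column can be handed to cilium endpoint get, which prints the full endpoint model as JSON (shown here with the pod name determined above; the output is quite verbose and omitted):

❯ minikube kubectl -- -n kube-system exec cilium-8hx2v -- cilium endpoint get 208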

But there is more: since Cilium is eBPF based, we can go one layer deeper and for example look at the policy-related eBPF maps:

❯ minikube kubectl -- -n kube-system exec cilium-8hx2v -- cilium bpf policy get --all
Defaulted container "cilium-agent" out of: cilium-agent, ebpf-mount (init), clean-cilium-state (init)
/sys/fs/bpf/tc/globals/cilium_policy_00208:

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   BYTES     PACKETS   
Allow    Ingress     reserved:unknown              ANY          NONE         16959     183       
Allow    Ingress     reserved:host                 ANY          NONE         1098509   4452      
Allow    Egress      reserved:unknown              ANY          NONE         393706    4204  
[...]

Note that the number in the policy map path (cilium_policy_00208) corresponds to the endpoint ID (208) in the Cilium endpoint list above.

We now have a running Cilium setup which can be used to run tests and examples!

Next: write and enforce policies, add observability

Doing a policy enforcement test goes beyond the scope of this blog post – but it is certainly worth a look in the future. Also, with all the data already shown above, it makes sense to do a deep dive into the topic of observability at some point.

If you already want to check out policy enforcement on your own, the Cilium documentation has a beautiful example prepared which walks through some policy challenges and how they can be solved with Cilium.

The same is true for observability: if you wonder how deep the rabbit hole really is, there is Hubble, which provides serious observability into the Kubernetes network, services and security, comes with a UI and can be quickly installed since it is tightly integrated with Cilium.

And if you have stories to share around eBPF, Cilium and similar topics I am finally getting an idea of what you are talking about. 😉

Image by stux from Pixabay

Hello Isovalent!

As mentioned in my last post I left Red Hat – and today is my first day at Isovalent!

In my new position I will be a technical marketing manager, working on technical content, messaging and enablement. With Cilium Enterprise, Isovalent offers an eBPF-based solution for Kubernetes networking, observability, and security – and since I am rather new to Kubernetes, I expect a steep learning curve.

I am looking forward to the challenges ahead of me, and will drop a blog post about it once in a while =)