Runecast’s latest version, 6.2, brings deeper protection and integration with Kubernetes, enabling scanning of not just clusters but nodes as well. If you’d like to read more about our KSPM capabilities, you can read this article on the improvements brought by version 6.2.
In this deep-dive article we’ll be talking specifically about using Runecast as an admission controller. We leverage the admission controller functionality of Kubernetes and let you use Runecast as a validation admission webhook. This means that your workloads are scanned before they reach the cluster, ensuring they are free from critical vulnerabilities or unpatched security risks.
Our Systems Engineer, Tomas Odehnal, walks you through all the steps below, including how to:
deploy Runecast to your cluster,
connect the cluster and generate a token,
configure the K8s API server,
create the webhook configuration,
test your setup and ensure it’s all working correctly.
Initial setup
To demonstrate the functionality we have prepared a cluster with one node hosting the control-plane and one node to run the workload.
$ kubectl get nodes
NAME             STATUS   ROLES           AGE    VERSION
k8s-mst01-test   Ready    control-plane   110m   v1.24.3
k8s-wrk01-test   Ready    <none>          109m   v1.24.3
Deploy Runecast into the K8s cluster
We will deploy an instance of Runecast to the cluster using helm. First, we add the Runecast helm repository:
$ helm repo add runecast https://helm.runecast.com/charts
"runecast" has been added to your repositories
After that we can deploy Runecast to the new namespace:
$ helm install runecast-analyzer runecast/runecast-analyzer --namespace runecast --create-namespace --set nginx.service.tls.enabled=true
NAME: runecast-analyzer
LAST DEPLOYED: Mon Aug 1 15:23:13 2022
NAMESPACE: runecast
STATUS: deployed
REVISION: 1
NOTES:
To access the Runecast Analyzer application, follow the instructions:
Run the following command and visit https://127.0.0.1:9080/ to use the application:
kubectl --namespace runecast port-forward service/runecast-analyzer-nginx 9080
We enabled the secure connection on the service, as the Kubernetes API requires a secure connection to the admission webhook.
Once all the pods are ready, we can move onto the next step.
$ kubectl --namespace runecast get pods
NAME READY STATUS RESTARTS AGE
runecast-analyzer-6fb4b5d7f-w7g6l 1/1 Running 0 2m20s
runecast-analyzer-imagescanning-5c946df9b7-4lgtf 1/1 Running 0 2m20s
runecast-analyzer-nginx-dcdd9bbfd-r5md8 1/1 Running 0 2m20s
runecast-analyzer-postgresql-6cfbdd49d8-znm67 1/1 Running 0 2m20s
Connect the cluster and generate a token
To access the Runecast UI, we will start the port-forwarding as stated in the helm command output. In production, you would probably use an Ingress to reach the service from outside of the cluster, but for this example let’s settle for localhost access.
$ kubectl --namespace runecast port-forward service/runecast-analyzer-nginx 9080
Forwarding from 127.0.0.1:9080 -> 9443
Forwarding from [::1]:9080 -> 9443
We can now log in to the Runecast UI at https://localhost:9080 with the default credentials and complete the initial setup wizard to connect the cluster. Finally, the last step of the wizard is to confirm all the settings and run the analysis.
In the UI, navigate to Settings > API Access Tokens and click on Generate API access token.
Copy the token and save it for later.
We can now move on to the next step and configure the webhook authentication on the Kubernetes API server.
Configure the webhook authentication on the Kubernetes API server
To set up the control plane to authenticate to the Runecast API we need to:
create a kubeconfig file with the credentials of the Runecast API
create an admission configuration file pointing to the kubeconfig file
add the --admission-control-config-file parameter to the kube-apiserver manifest, pointing to the admission configuration file, and make sure the kube-apiserver pod can see both configuration files
On the master node, we will create a new directory and later map it as a volume into the kube-apiserver pod. While you could store the files in one of the directories already accessible to kube-apiserver in the default configuration, it is better to create a separate directory, which can also hold additional files such as an auditing configuration.
sudo mkdir /etc/kubernetes/config
Create the kubeconfig file that contains the authentication token. We will use the token we generated in the previous chapter and the name of the Runecast nginx frontend service:
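A minimal sketch of such a kubeconfig, saved here as /etc/kubernetes/config/runecast-webhook-kubeconfig.yaml (the file name is our choice; replace the token placeholder with the token generated earlier). Kubernetes looks up the webhook credentials by the DNS name of the webhook service, so the user name must be the FQDN of the Runecast nginx service:

apiVersion: v1
kind: Config
users:
# the name must match the webhook service DNS name: <service>.<namespace>.svc
- name: runecast-analyzer-nginx.runecast.svc
  user:
    # API access token generated in the Runecast UI
    token: <your-api-access-token>

Next, create the admission configuration file in the same directory, here named /etc/kubernetes/config/admission-configuration.yaml (again an assumption), pointing the ValidatingAdmissionWebhook plugin to the kubeconfig:

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
# tell the ValidatingAdmissionWebhook plugin where to find the webhook credentials
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /etc/kubernetes/config/runecast-webhook-kubeconfig.yaml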
Now modify the kube-apiserver manifest file to use the admission configuration file. Under the - kube-apiserver line in .spec.containers[0].command, we will add the --admission-control-config-file parameter:
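A sketch of the relevant part of the manifest after the edit, assuming the default kubeadm location /etc/kubernetes/manifests/kube-apiserver.yaml and the admission configuration file created above:

spec:
  containers:
  - command:
    - kube-apiserver
    # new parameter pointing to the admission configuration file
    - --admission-control-config-file=/etc/kubernetes/config/admission-configuration.yaml
    # ...the existing kube-apiserver parameters remain unchanged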
We will also map the new directory to the kube-apiserver pod. In the manifest file, find the .spec.volumes section and add a new mapping of the host directory to the pod:
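A sketch of the additions, with the volume name k8s-config being our choice; the matching volumeMount goes into the kube-apiserver container spec:

  volumes:
  # expose the host directory with the webhook configuration files
  - hostPath:
      path: /etc/kubernetes/config
      type: DirectoryOrCreate
    name: k8s-config

and under .spec.containers[0].volumeMounts:

    volumeMounts:
    # mount the configuration directory read-only into the pod
    - mountPath: /etc/kubernetes/config
      name: k8s-config
      readOnly: true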
Lastly, we save the kube-apiserver manifest file and wait for the pod to restart. To verify the new settings are in place, we can grep the running kube-apiserver pod manifest, as shown below. Once we find the admission-control-config-file parameter in the output, we can continue.
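One way to verify, assuming the default static pod naming convention kube-apiserver-<node-name>:

$ kubectl --namespace kube-system get pod kube-apiserver-k8s-mst01-test -o yaml | grep admission-control-config-file
    - --admission-control-config-file=/etc/kubernetes/config/admission-configuration.yaml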
For more information about the webhook authentication settings, please see the official documentation.
Create the validating webhook configuration
Once the authentication is configured on the control plane, you can create the admission webhook configuration.
The configuration tells the API server for which objects and operations to call a specific validating webhook. The objects can be selected using multiple options. To see the complete list, please check the official documentation.
In this example, we will create two webhook configurations:
prevent running any workload created or modified in the namespaces labeled runecast-admission-policy=1 with critical severity vulnerabilities (policy id 1)
prevent running any workload created or modified in the namespaces labeled runecast-admission-policy=2 with critical or high severity vulnerabilities that have a fix available (policy id 2)
First, we will define how the objects and operations are selected. Each webhook gets a namespaceSelector that matches the policy label:
namespaceSelector:
  matchExpressions:
  - key: runecast-admission-policy
    operator: In
    values:
    - '1'
Now any namespace labeled runecast-admission-policy=1 will match the first webhook, and the same selector with the value '2' matches the second one.
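Each webhook also needs a rules section telling the API server which operations and resources to send for validation. Since the policies target workloads that are created or modified, a sketch along these lines would cover the common workload objects (the exact resource list here is an assumption; the Runecast documentation contains the recommended configuration):

rules:
# validate pods and the common workload controllers on create and update
- apiGroups:
  - ''
  - apps
  apiVersions:
  - v1
  operations:
  - CREATE
  - UPDATE
  resources:
  - pods
  - deployments
  - daemonsets
  - statefulsets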
Lastly, we need to define the webhook endpoint in the clientConfig field. We are running Runecast in the same K8s cluster and will refer to it using the service definition. If Runecast were running outside of the configured cluster, we would use the url definition to link to the webhook, but that is out of the scope of this article. Please check the integration examples in the Runecast documentation.
Additionally, the Kubernetes API server verifies the trust of the webhook certificate. In production, your K8s cluster (and thus the kube-apiserver pod) might already trust the Runecast certificate, either because it was issued by a public CA or by your internal enterprise CA for which trust has been established. In our case we issued a self-signed certificate for Runecast and need to let the kube-apiserver trust it. The issuer certificate (which, for a self-signed certificate, is the certificate itself) needs to be set as a base64-encoded string in the caBundle field under clientConfig:
...
clientConfig:
caBundle: LS0tLS1...
...
You can obtain the string from the secret by running the following command:
$ kubectl --namespace runecast get secret runecast-analyzer-nginx-cert -o jsonpath='{.data.tls\.crt}'
LS0tLS1...
Finally, we have the complete validating webhook configuration and we can apply it to the cluster:
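Below is a sketch of that configuration combining the pieces above, shown for the first webhook only; the second webhook (deny-fixed-critical-and-medium.runecast.com) is defined in the same list with the selector value '2' and the path for policy id 2. The service port and the path values are assumptions, so take the exact clientConfig settings from the Runecast integration documentation:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: runecast-analyzer-admission
webhooks:
# policy 1: reject workloads with critical severity vulnerabilities
- name: deny-critical.runecast.com
  admissionReviewVersions:
  - v1
  sideEffects: None
  # Fail blocks admissions when Runecast is unreachable; use Ignore for a softer policy
  failurePolicy: Fail
  namespaceSelector:
    matchExpressions:
    - key: runecast-admission-policy
      operator: In
      values:
      - '1'
  rules:
  - apiGroups:
    - ''
    - apps
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - pods
    - deployments
    - daemonsets
    - statefulsets
  clientConfig:
    # base64-encoded issuer certificate obtained from the secret above
    caBundle: LS0tLS1...
    service:
      name: runecast-analyzer-nginx
      namespace: runecast
      # service port and webhook path are placeholders - check the Runecast documentation
      port: 9080
      path: '<runecast-webhook-path-for-policy-1>'

Save the configuration, for example as runecast-webhook.yaml, and apply it:

$ kubectl apply -f runecast-webhook.yaml

Before testing, the two namespaces used below need to exist and carry the matching policy label, for example:

$ kubectl create namespace runecast-policy1
$ kubectl label namespace runecast-policy1 runecast-admission-policy=1
$ kubectl create namespace runecast-policy2
$ kubectl label namespace runecast-policy2 runecast-admission-policy=2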
Now, in the first namespace, we try to create a pod running the latest nginx image. The image has a number of critical severity vulnerabilities for which no fix is available.
$ kubectl -n runecast-policy1 run nginx-latest --image=nginx:latest
Error from server: admission webhook "deny-critical.runecast.com" denied the request: Image scan in Runecast Analyzer found policy violations: (Rejected by policy 1: 'No critical vulnerabilities').
We were prevented from running the pod. Now we will try to create the same pod in the other namespace, where critical vulnerabilities without an available fix are allowed:
$ kubectl -n runecast-policy2 run nginx-latest --image=nginx:latest
pod/nginx-latest created
The pod was created. We can now try to create an nginx pod with an older image, which violates policy 2:
$ kubectl -n runecast-policy2 run nginx-1-19 --image=nginx:1.19
Error from server: admission webhook "deny-fixed-critical-and-medium.runecast.com" denied the request: Image scan in Runecast Analyzer found policy violations: (Rejected by policy 2: 'No critical or high severity vulnerabilities with available fix').
Again, we were prevented from running the pod.
To find out more about the image scan results, we go back to the Runecast UI and navigate to Image Scanning. The result of each admission is presented as a row in the list, indicated by the Trigger type of K8s admission controller.
When we select a specific scan, a pop-up window will open with the scan details - evaluation result, policy ID, number of vulnerabilities, list of the vulnerabilities and other useful info.
After clicking on a specific vulnerability, details are revealed.
Summary
Using Runecast as an admission controller gives you confidence that your workloads are secure from the very start. This, in turn, lowers the operational overhead for your Security and Operations teams.
Runecast is constantly innovating and adding new features to our platform. To keep up to date with what’s possible, follow us on Twitter, or schedule a demo today.
Tomas is a Systems Engineer in the R&D team at Runecast Solutions, focusing on Kubernetes, automation and content. In the past, Tomas held the roles of Infrastructure Architect for a major European retailer and Systems Administrator designing and managing infrastructure for solutions delivered by a leading Czech traffic control and information centre provider.