Tutorial

Monitor Audit Logs to Safeguard Your Kubernetes Infrastructure

The Kubernetes audit log datasource, introduced in CrowdSec Security Engine 1.5, provides a webhook to receive events from the Kubernetes API server, helping you analyze audit logs in real time and take action against potential threats.

The CrowdSec collection k8s-audit detects a range of security issues within your Kubernetes environment. With it, you can:

  • Identify anonymous access attempts to the Kubernetes API
  • Detect brute force attacks against the API server
  • Monitor for pod creations with host networking enabled
  • Identify unauthorized executions within pods (using kubectl)
  • Track pods that mount sensitive host folders
  • Flag the creation of privileged pods
  • Detect unauthorized requests originating from service accounts

By harnessing the power of CrowdSec and its new Kubernetes audit log datasource, you can proactively safeguard your Kubernetes infrastructure against these potential threats. Let's dive in and explore the steps to leverage this powerful combination of technologies.

What are Kubernetes audit logs?

Kubernetes audit logs are a record of events that occur within a Kubernetes cluster. These events include actions performed by users and applications, such as creating or deleting resources and changing the configuration of the cluster itself. You can use audit logs to track activity within the cluster and identify potential security threats, as well as for compliance purposes. By enabling audit logs in Kubernetes, administrators can gain visibility into what's happening in their cluster and take action to prevent or mitigate security incidents.
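To make this concrete, here is a trimmed, hand-written illustrative example of a single audit event as recorded by the API server (the `auditID` and IP are placeholders):

```json
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "8b3c6f5e-0000-0000-0000-000000000000",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/default/pods",
  "verb": "create",
  "user": { "username": "kubernetes-admin", "groups": ["system:masters"] },
  "sourceIPs": ["172.18.0.1"],
  "objectRef": { "resource": "pods", "namespace": "default", "name": "debug" },
  "responseStatus": { "code": 201 }
}
```

The `level` field reflects the audit policy rule that matched the request, and `stage` indicates at which point of request processing the event was emitted.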

Architecture

The diagram below shows the architecture we will be setting up in this tutorial. 

Prerequisites

To follow along with this tutorial, you will need the following tools installed:

Note: As the audit log feature differs from one Kubernetes distribution to another (for example, GKE, EKS, and AKS expose audit logs through provider-specific bridges such as CloudWatch), we decided that for the purposes of this tutorial, we will install CrowdSec externally to the Kubernetes cluster. However, it's also possible to configure the Kubernetes cluster to export logs to a file, then install the CrowdSec Security Engine inside the cluster using the Helm chart, and use the file datasource to ingest the logs.

Setting up the environment

Install and configure the Kubernetes cluster

Before creating the cluster using kind, we need to create the configuration files that will enable the audit log.

We can start by creating the first file audit-policy.yaml that will define the audit policies:


apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status", and "pods/exec" at the Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status", "pods/exec"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
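To see why rule order matters, here is a toy sketch (purely illustrative; this is not how kube-apiserver is implemented) of the first-match-wins evaluation the API server applies to audit policy rules, which is why the broad catch-all rule has to come last:

```python
# Toy re-implementation of audit policy rule matching: rules are evaluated
# top to bottom and the FIRST matching rule decides the audit level.
def audit_level(rules, user="", verb="", resource=""):
    for rule in rules:
        if rule.get("users") and user not in rule["users"]:
            continue
        if rule.get("verbs") and verb not in rule["verbs"]:
            continue
        if rule.get("resources") and resource not in rule["resources"]:
            continue
        return rule["level"]  # first match wins
    return "None"  # nothing matched: don't audit

# Simplified subset of the policy above
rules = [
    {"level": "RequestResponse", "resources": ["pods"]},
    {"level": "Metadata", "resources": ["pods/log", "pods/status", "pods/exec"]},
    {"level": "None", "users": ["system:kube-proxy"], "verbs": ["watch"],
     "resources": ["endpoints", "services"]},
    {"level": "Metadata"},  # catch-all
]

print(audit_level(rules, verb="create", resource="pods"))    # RequestResponse
print(audit_level(rules, user="system:kube-proxy",
                  verb="watch", resource="endpoints"))       # None
print(audit_level(rules, verb="get", resource="configmaps")) # Metadata
```

Moving the catch-all to the top would shadow every rule below it, so every request would be logged at the Metadata level.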
      

Next, we need to create the second file audit-webhook.yaml that will define the webhook configuration:


apiVersion: v1
kind: Config
clusters:
- name: kube-auditing
  cluster:
    # 172.17.0.1 is the default Docker bridge gateway, so the API server
    # running inside the kind node can reach CrowdSec on the host
    server: http://172.17.0.1:9876/audit/webhook/event
contexts:
- context:
    cluster: kube-auditing
    user: ""
  name: default-context
current-context: default-context
preferences: {}
users: []

Now let’s create the kind config file kind.yaml that defines the cluster configuration:


kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  - |
    kind: ClusterConfiguration
    apiServer:
        # enable auditing flags on the API server
        extraArgs:
          audit-webhook-config-file: /etc/kubernetes/policies/audit-webhook.yaml
          audit-policy-file: /etc/kubernetes/policies/audit-policy.yaml
        # mount new files / directories on the control plane
        extraVolumes:
          - name: audit-policies
            hostPath: /etc/kubernetes/policies
            mountPath: /etc/kubernetes/policies
            readOnly: true
            pathType: DirectoryOrCreate
  extraPortMappings:
  - containerPort: 30000
    hostPort: 80
    protocol: TCP
  - containerPort: 30001
    hostPort: 443
    protocol: TCP
  extraMounts:
    - hostPath: ./audit-policy.yaml
      containerPath: /etc/kubernetes/policies/audit-policy.yaml
    - hostPath: ./audit-webhook.yaml
      containerPath: /etc/kubernetes/policies/audit-webhook.yaml
      

Finally, we can create the cluster using kind:


$ kind create cluster --config kind.yaml

Configuring the CrowdSec Security Engine

Once the cluster is created, we need to configure the CrowdSec Security Engine to receive logs from the Kubernetes cluster.

We start with the acquisition configuration file /etc/crowdsec/acquis.yaml:


---
source: k8s-audit
listen_addr: 0.0.0.0
listen_port: 9876
webhook_path: /audit/webhook/event
labels:
  type: k8s-audit
  

Here, we configure the new k8s-audit datasource to listen on all interfaces on port 9876 and accept audit events on the /audit/webhook/event path, matching the webhook configuration we gave the API server.
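Conceptually, the datasource behaves like a small HTTP server that accepts the EventList objects the API server POSTs to the webhook. The following Python sketch is purely illustrative (it is not the actual CrowdSec implementation) but shows the shape of the traffic the webhook receives:

```python
# Illustrative stand-in for the k8s-audit datasource: an HTTP server that
# accepts audit EventList payloads POSTed by the Kubernetes API server.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AuditWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/audit/webhook/event":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # The API server batches events into an audit.k8s.io EventList
        for event in payload.get("items", []):
            user = event.get("user", {}).get("username", "unknown")
            print(f'audit: verb={event.get("verb")} user={user}')
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# To actually listen, mirroring the acquis.yaml settings above:
#   HTTPServer(("0.0.0.0", 9876), AuditWebhook).serve_forever()
```

In CrowdSec itself, each received event is then handed to the k8s-audit parser and matched against the collection's scenarios.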

Warning: In our example, we exposed the webhook on HTTP, but in a production environment, we suggest installing a reverse proxy in front of the webhook to use HTTPS and to add authentication.
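For example, a minimal nginx reverse-proxy configuration terminating TLS and adding basic authentication in front of the webhook could look like the following sketch. The hostname, certificate paths, and htpasswd file are placeholders you would replace with your own:

```nginx
server {
    listen 443 ssl;
    server_name crowdsec.example.com;  # placeholder hostname

    ssl_certificate     /etc/nginx/certs/crowdsec.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/crowdsec.key;

    location /audit/webhook/event {
        auth_basic           "k8s audit webhook";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9876;
    }
}
```

With this in place, the audit-webhook.yaml server URL would point at https://crowdsec.example.com/audit/webhook/event instead of the plain HTTP endpoint.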

Now we can install the k8s-audit collection:


$ cscli collections install k8s-audit

This collection contains the audit log parser and scenarios that alert on suspicious events and take decisions on others (such as brute-force attacks against the API server).

For this example, we want to be alerted to suspicious events, so we will configure Slack notifications. To do so, we need to update the /etc/crowdsec/profiles.yaml file and add the following lines at the beginning:


name: k8s_audit_notification
filters:
  - Alert.Remediation == false && Alert.GetScenario() startsWith "crowdsecurity/k8s-audit"
notifications:
  - slack_k8s_audit
on_success: break

And set up the Slack notification in the /etc/crowdsec/notifications/slack.yaml file:


---
type: slack
name: slack_k8s_audit
log_level: info
format: |
  Kubernetes Security Alert:
  {{range . -}}
  {{$alert := . -}}
  {{- $resource_name := GetMeta $alert "resource_name" -}}
  {{- $resource_kind := GetMeta $alert "kind" -}}
   - Scenario: {{$alert.Scenario}}
   - Source IP: {{GetMeta $alert "source_ip"}}
   - User: {{GetMeta $alert "user"}}
   - Namespace: {{GetMeta $alert "namespace"}}
   {{- if $resource_name }}
   - Resource Name: {{GetMeta $alert "resource_name"}}
   {{- end -}}
   {{- if $resource_kind }}
   - Resource Kind: {{GetMeta $alert "kind"}}
   {{- end }}
  ----
  {{end -}}
webhook: [SLACK_WEBHOOK_URL]

Note: You need to get the Slack webhook URL from your Slack workspace.

Now that the CrowdSec Security Engine is fully set up, we can restart the CrowdSec service:


$ systemctl restart crowdsec

To verify that the Security Engine is running correctly, check the logs:


$ journalctl -u crowdsec

Here’s what you should see:


INFO[05-04-2023 15:34:34] Starting processing data                     
INFO[05-04-2023 15:34:34] Starting k8s-audit server on 0.0.0.0:9876/audit/webhook/event  type=k8s-audit

You can also check the metrics using cscli:


$ cscli metrics

Detection

Now that we have configured the cluster and the CrowdSec Security Engine, we can test the detection mechanism.

For that, we will create a new pod in the default namespace using this manifest debug-pod.yaml:


apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: default
spec:
  containers:
  - command:
    - sleep
    - "3600" # keep the container running; busybox sleep requires a numeric duration
    image: busybox
    name: digger
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /hostfs
      name: host
  volumes:
  - hostPath:
      path: /
    name: host
    

This pod is created with the privileged flag and mounts the host's root filesystem, both of which are security risks that will appear in the audit logs and be detected by the CrowdSec Security Engine.

We can apply this manifest using kubectl:


$ kubectl apply -f debug-pod.yaml

We will now get a shell in the pod:


$ kubectl exec -it debug -- /bin/sh

The CrowdSec logs should now show the detected suspicious events:


INFO[05-04-2023 16:45:56] Ip 172.18.0.1 performed 'crowdsecurity/k8s-audit-pod-host-path-volume' (1 events over 100ns) at 2023-04-05 14:45:56.57777534 +0000 UTC
INFO[05-04-2023 16:45:56] Ip 172.18.0.1 performed 'crowdsecurity/k8s-audit-privileged-pod-creation' (1 events over 100ns) at 2023-04-05 14:45:56.577789055 +0000 UTC
INFO[05-04-2023 16:45:57] (test) alert : crowdsecurity/k8s-audit-pod-host-path-volume by ip 172.18.0.1 (/0) 
INFO[05-04-2023 16:45:57] (test) alert : crowdsecurity/k8s-audit-privileged-pod-creation by ip 172.18.0.1 (/0)
INFO[05-04-2023 16:47:26] Ip 172.18.0.1 performed 'crowdsecurity/k8s-audit-pod-exec' (1 events over 101ns) at 2023-04-05 14:47:26.577906806 +0000 UTC 
INFO[05-04-2023 16:47:27] (test) alert : crowdsecurity/k8s-audit-pod-exec by ip 172.18.0.1 (/0)

As you can see, the Security Engine detected three of the scenarios included in the k8s-audit collection:

  • Pod creation mounting a sensitive host folder in the cluster
  • Privileged pod creation in the cluster
  • Executing a command in a pod

And here are our Slack notifications:

Wrapping up

In this tutorial, we explored the new Kubernetes audit log datasource in CrowdSec Security Engine 1.5 and how to use it to enhance the security of your Kubernetes workloads. By setting up a local Kubernetes cluster with audit logs enabled, configuring a webhook in CrowdSec, and installing the new datasource, you can monitor audit logs in real-time and take action against a number of potential threats.

In one of our upcoming tutorials, we will go deeper into this feature and explore how to configure the audit log for specific hosted K8s providers (GKE, EKS, AKS). In the meantime, check out the CrowdSec documentation and community resources for more information on the Kubernetes audit log datasource in CrowdSec Security Engine 1.5.
