Tutorial

Integrating CrowdSec with Kubernetes using TLS

In this article, you will find the steps to install and configure: a Kubernetes cluster, an application to protect, a Traefik ingress object, a CrowdSec bouncer in the form of a Traefik plugin, a CrowdSec LAPI for the whole cluster, and an agent for each cluster node.

This article is a follow-up to How to mitigate security threats with CrowdSec in Kubernetes using Traefik.

Since then, there has been a new CrowdSec release and new encryption features on the Helm chart.

Just like the previous article, we are going to install and configure:

  • a Kubernetes cluster
  • an application to protect
  • a Traefik ingress object
  • a CrowdSec bouncer in the form of a Traefik plugin
  • a CrowdSec LAPI for the whole cluster, and an agent for each cluster node

So, what's new?

This time, all internal communication is encrypted. Agents and bouncers are automatically registered at the first connection and don't require API keys or passwords to authenticate. Certificates are automatically re-generated before they expire. We also use a new bouncer plugin that runs in the Traefik process and does not require a separate pod.

If you don't want to do mutual TLS authentication, you can keep using user+password and the traffic will still be encrypted.

Let's begin.

Requirements

For development and testing, there is no need to waste resources: a kind cluster is enough, and we can easily simulate multiple nodes on a single machine.

First, add the Helm repositories we will use throughout this article:

$ helm repo add crowdsec https://crowdsecurity.github.io/helm-charts
$ helm repo add traefik https://helm.traefik.io/traefik
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo add emberstack https://emberstack.github.io/helm-charts
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "crowdsec" chart repository
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "emberstack" chart repository
Update Complete. ⎈Happy Helming!⎈

Install the cluster

We create a cluster with at least two nodes. We deploy the ingress in the control plane and the regular applications in the worker nodes. CrowdSec must be installed on both: it needs to read the logs, which are local to each node by default.

Create and deploy kind.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30000
    hostPort: 80
    protocol: TCP
  - containerPort: 30001
    hostPort: 443
    protocol: TCP
- role: worker

$ kind create cluster --config kind.yaml

The options here (config patches, port mappings) are required to plug in Traefik later. If you use something other than Traefik, please refer to the related Ingress documentation and our previous articles Kubernetes CrowdSec Integration – Part 1: Detection and Part 2: Remediation. New bouncers for other ingress solutions are outside the scope of this document but feel free to contact us to inquire or help.

Test that the cluster is running:

$ kubectl cluster-info                    
Kubernetes control plane is running at https://127.0.0.1:43299
CoreDNS is running at https://127.0.0.1:43299/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

And that we have two nodes:

$ kubectl get nodes     
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   2m3s    v1.25.3
kind-worker          Ready    <none>          1m39s   v1.25.3

Install the test application

We deploy a web server that replies "helloworld !" for each request.

$ helm install \
    helloworld crowdsec/helloworld \
    --namespace default \
    --set ingress.enabled=true

Check that the pod and service are running on the worker node:

$ kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
helloworld-6f4b44ddf9-l9dmc   1/1     Running   0          57s   10.244.1.2   kind-worker   <none>           <none>
$ kubectl get service helloworld      
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
helloworld   ClusterIP   10.96.124.208   <none>        5678/TCP   2m17s

Install cert-manager and reflector

We use cert-manager to create the certificates and re-generate them before they expire. Everything is done with a private PKI, so no external communication is required for DNS or HTTP validation.

Cert-manager creates the secrets in the same namespace as the CrowdSec LAPI and agents, but the bouncer is running in the Traefik pod, so it has no access to them - which is a good thing and the whole point of having namespaces. We delegate this problem to the reflector. It takes care of copying and deleting secrets in the namespaces that require them.
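Under the hood, this works through annotations on the secret: reflector only mirrors secrets that opt in. Here is a sketch of what the bouncer secret ends up looking like, using reflector's standard annotation keys (the exact set the chart applies may differ):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: crowdsec-bouncer-tls
  namespace: crowdsec
  annotations:
    # Allow reflector to copy this secret to other namespaces
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "traefik"
    # Create and keep the mirrored copies up to date automatically
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
    reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "traefik"
```

The allowed-namespaces list matches the `tls.bouncer.reflector.namespaces` value we pass to the CrowdSec chart later on.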

You only need one copy of cert-manager and reflector for the whole cluster.

If you don't have them already, install them with:

$ helm install \
    cert-manager jetstack/cert-manager \
    --create-namespace \
    --namespace cert-manager \
    --set installCRDs=true
$ helm install \
    reflector emberstack/reflector \
    --create-namespace \
    --namespace reflector

You can use the same namespace if you prefer.

Content of the certificate secrets

CrowdSec will ask cert-manager to create three secret objects for LAPI, agent, and bouncer. Each secret will contain three files:

  • tls.crt
  • tls.key
  • ca.crt

The last one, the CA bundle, is not required if you ask cert-manager to issue certificates from Let's Encrypt or another public root authority. We don't explore that configuration here.
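For illustration, here is roughly what the bouncer secret looks like once issued; the name matches the crowdsec-bouncer-tls volume we mount into Traefik later, and the base64 payloads are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: crowdsec-bouncer-tls
  namespace: crowdsec
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded private key>
  ca.crt: <base64-encoded private CA bundle>
```

The bouncer presents tls.crt and tls.key to the LAPI, and uses ca.crt to verify the LAPI's own certificate.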

Installing Traefik Ingress

We are ready to install the Ingress and the bouncer plugin, which run in the same process in the Traefik pod. Since the plugin cannot start without a bouncer certificate, which in turn is created by CrowdSec+cert-manager, the whole Traefik pod will be on hold waiting for CrowdSec to be installed.

Here is traefik-values.yaml:

logs:
  access:
    enabled: true
service:
  type: NodePort
ports:
  traefik:
    expose: true
  web:
    nodePort: 30000
  websecure:
    nodePort: 30001
nodeSelector:
  ingress-ready: "true"
providers:
  kubernetesCRD:
    allowCrossNamespace: true
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Equal
    effect: NoSchedule
experimental:
  plugins:
    enabled: true
volumes:
  - name: crowdsec-bouncer-tls
    mountPath: /etc/traefik/crowdsec-certs/
    type: secret
additionalArguments:
  - "--experimental.plugins.bouncer.moduleName=github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
  - "--experimental.plugins.bouncer.version=v1.1.7"
  - "--entrypoints.web.http.middlewares=traefik-bouncer@kubernetescrd"
  - "--entrypoints.websecure.http.middlewares=traefik-bouncer@kubernetescrd"
  - "--providers.kubernetescrd"

$ helm install \
    traefik traefik/traefik \
    --create-namespace \
    --namespace traefik \
    -f traefik-values.yaml

You can see that the pod is on hold:

$ kubectl -n traefik describe pod
[...]
Warning  FailedMount  56s (x8 over 2m)  kubelet
MountVolume.SetUp failed for volume "crowdsec-bouncer-tls" : secret "crowdsec-bouncer-tls" not found

Installing CrowdSec

We need to deploy CrowdSec agents on all nodes, including the control plane, so we use the same toleration we already put on Traefik.

Here is crowdsec-values.yaml:

container_runtime: containerd
tls:
  enabled: true
  bouncer:
    reflector:
      namespaces: ["traefik"]
agent:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Equal
      effect: NoSchedule
  # Specify each pod whose logs you want to process
  acquisition:
    # The namespace where the pod is located
    - namespace: traefik
      # The pod name
      podName: traefik-*
      # As in the CrowdSec configuration, we need to specify the program name to find a matching parser
      program: traefik
  env:
    - name: PARSERS
      value: "crowdsecurity/cri-logs"
    - name: COLLECTIONS
      value: "crowdsecurity/traefik"
    # When testing, allow bans on private networks
    - name: DISABLE_PARSERS
      value: "crowdsecurity/whitelists"
  persistentVolume:
    config:
      enabled: false
lapi:
  dashboard:
    enabled: false
    ingress:
      host: dashboard.local
      enabled: true
  persistentVolume:
    config:
      enabled: false
  env:
    # For an internal test, disable the Online API
    - name: DISABLE_ONLINE_API
      value: "true"

In a production system, you'll want to keep the Online API and pass your enrollment key in the environment.
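For example, with the ENROLL_KEY variable supported by the CrowdSec Docker image (a sketch; check the image documentation for the exact variables in your version):

```yaml
lapi:
  env:
    # Keep the Online API enabled (do not set DISABLE_ONLINE_API)
    # and enroll this instance in the CrowdSec console
    - name: ENROLL_KEY
      value: "<your enrollment key>"
```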

To authenticate the agents with user+password instead of client certificates, just set tls.agent.tlsClientAuth: false.
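In crowdsec-values.yaml, that corresponds to the following fragment (the traffic remains TLS-encrypted; only the client authentication method changes):

```yaml
tls:
  enabled: true
  agent:
    # Agents log in with user+password instead of presenting a client certificate
    tlsClientAuth: false
```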

Now install CrowdSec:

$ helm install \
    crowdsec crowdsec/crowdsec \
    --create-namespace \
    --namespace crowdsec \
    -f crowdsec-values.yaml

Verify that CrowdSec is running with a LAPI service and one agent for each pod:

$ kubectl -n crowdsec get pods
NAME                           READY   STATUS     RESTARTS   AGE
crowdsec-agent-btmjg           1/1     Running    0          34s
crowdsec-agent-s4752           1/1     Running    0          34s
crowdsec-lapi-599867bf-h6dd8   1/1     Running    0          34s

After a few seconds, the Traefik pod can mount the secret with the bouncer certificate and run.

$ kubectl -n traefik get pods
NAME                       READY   STATUS    RESTARTS   AGE
traefik-64bdd65b84-lqlkp   1/1     Running   0          99s

Traefik dashboard

Let's have a look at the Traefik control panel:

$ kubectl port-forward -n traefik $(kubectl -n traefik get pods -o jsonpath='{.items..metadata.name}') 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
$ open http://localhost:9000/dashboard/#/http/routers
[Screenshot: the Traefik dashboard, HTTP routers view]

Something is not right, because the bouncer plugin has been installed but not configured yet.

Bouncer configuration

Traefik expects a resource of "Middleware" type named "bouncer", which we will create now.

Here is bouncer-middleware.yaml:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: bouncer
  namespace: traefik
spec:
  plugin:
    bouncer:
      enabled: true
      crowdsecMode: none
      crowdsecLapiScheme: https
      crowdsecLapiHost: crowdsec-service.crowdsec:8080
      crowdsecLapiTLSCertificateAuthorityFile: /etc/traefik/crowdsec-certs/ca.crt
      crowdsecLapiTLSCertificateBouncerFile: /etc/traefik/crowdsec-certs/tls.crt
      crowdsecLapiTLSCertificateBouncerKeyFile: /etc/traefik/crowdsec-certs/tls.key

We are using crowdsecMode: none: it works in real time, but it queries the LAPI for each connection. In production, we recommend stream for any substantial amount of traffic. For all the possible modes, see the plugin's documentation.

$ kubectl apply -f bouncer-middleware.yaml

For more information, see Routing Configuration / Kind: Middleware
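For reference, a stream-mode variant of the same middleware could look like this; updateIntervalSeconds controls how often the local decision cache is refreshed, and 60 is an assumed value (check the plugin's documentation for the actual default):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: bouncer
  namespace: traefik
spec:
  plugin:
    bouncer:
      enabled: true
      # Cache decisions locally and refresh them periodically from the LAPI
      crowdsecMode: stream
      updateIntervalSeconds: 60
      crowdsecLapiScheme: https
      crowdsecLapiHost: crowdsec-service.crowdsec:8080
      crowdsecLapiTLSCertificateAuthorityFile: /etc/traefik/crowdsec-certs/ca.crt
      crowdsecLapiTLSCertificateBouncerFile: /etc/traefik/crowdsec-certs/tls.crt
      crowdsecLapiTLSCertificateBouncerKeyFile: /etc/traefik/crowdsec-certs/tls.key
```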

We can verify that there are no errors in the dashboard:

[Screenshot: the Traefik dashboard, with the bouncer middleware active and no errors]

Final test

To test the helloworld service we need to set a hostname.

On Linux, add this line to /etc/hosts (the IP is different on Windows or macOS):

172.17.0.1  helloworld.local

And test the application with:

$ curl -f http://helloworld.local
helloworld !

Then, fake an attack:

$ nikto -host http://helloworld.local/
- Nikto v2.1.5
---------------------------------------------------------------------------
+ Target IP:          172.17.0.1
+ Target Hostname:    helloworld.local
+ Target Port:        80
+ Start Time:         2022-12-14 16:56:25 (GMT1)
---------------------------------------------------------------------------
[...]
---------------------------------------------------------------------------
+ 1 host(s) tested

We can now connect to any agent and verify that decisions have been taken:


There are 5 decisions because the attacker triggered multiple scenarios. To remove the ban, you would need to delete all of the decisions, one by one.

You can verify that the attacker has been blocked:

$ curl -f http://helloworld.local
curl: (22) The requested URL returned error: 403

Conclusion

The latest version of the Docker image lets you decide:

  • whether to use TLS or not
  • whether to use a private or a public PKI
  • whether to use client TLS authentication or user/password for agents and bouncers

In the CrowdSec helm chart, this comes down to two main options:

  • tls.enabled (use TLS)
  • tls.agent.tlsClientAuth (use client TLS authentication)

By default, a private PKI is created by cert-manager, but you can customize its configuration or provide your own mechanism if you really need a public PKI for the LAPI.

We hope we have provided you with enough flexibility to protect your Kubernetes environment. We are aware that each cluster is different. Ease of deployment is a major design goal for us, so don't hesitate to open an issue in our GitHub repository (https://github.com/crowdsecurity/crowdsec/issues) so we can hear about and learn from your experience.
