
Secure Kubernetes Ingress with CrowdSec and Traefik: WAF, Virtual Patching, and DevSecOps at Scale
Why CrowdSec Is a Perfect Fit for Web-Native Kubernetes Workloads
Please note: In this article, the term “bouncer” refers to what our website and documentation call the Remediation Component.
Modern applications are born on the web: API-driven, containerized, and often served through Kubernetes ingress layers. From microservices and dashboards to customer portals and APIs, almost every workload exposed through Kubernetes speaks HTTP. This provides flexibility and scalability, but it also creates a large, always-on attack surface.
CrowdSec Security Stack and its Web Application Firewall (WAF) align perfectly with this model. Running close to your ingress, they provide in-depth payload inspection of web traffic and context-aware protection without slowing down your delivery pipeline. Kubernetes provides the ideal substrate for this approach: a unified control plane for routing, certificates, and policy enforcement, where security can be automated, versioned, and distributed alongside your code.
For DevSecOps teams, the real challenge is not finding more security tools; it is fitting security into fast, automated delivery. Every new control competes with CI/CD speed, SLOs, and developer productivity. CrowdSec on Kubernetes treats security as code, running at the ingress and deployable through Helm and GitOps, providing consistent protection across services without requiring manual tickets or fragile, one-off configurations.
When deployed as middleware or as a plugin in your ingress controller (for example, Traefik), CrowdSec:
- Inspects every HTTP request at the edge before it reaches your workloads
- Blocks known bad actors in real time using CrowdSec’s collaborative threat intelligence
- Allows virtual patching as soon as new vulnerabilities emerge
- Extends consistent protection across all services, APIs, and frontends routed through your ingress
By integrating directly with Traefik access logs and the request flow, CrowdSec turns the Kubernetes ingress layer into a living, adaptive firewall that scales with your deployments and learns from the global community to stay ahead of attackers.
Operating CrowdSec on Kubernetes: Simple, Resilient, Production-Ready
Security at scale is not only about detection and prevention; it is also about resilience. Your defenses must survive node restarts, rolling updates, and traffic spikes without gaps in coverage. Kubernetes gives CrowdSec this foundation, and CrowdSec Helm charts are now production-grade.
In a Kubernetes deployment, CrowdSec does not live as a fragile add-on. It is part of the fabric. Agents run on each node, automatically discovering new workloads and tailing relevant logs, while the Local API coordinates decisions, reputation data, and bouncer keys across the cluster.
A typical deployment flow looks like this:
1. Application backend (WordPress example)
# Service is internal - Traefik fronts it
service:
  type: ClusterIP

updateStrategy:
  type: Recreate

# Ingress configuration (Traefik + Let's Encrypt)
ingress:
  enabled: true
  ingressClassName: traefik
  hostname: blog.crowdsec.net
  tls: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: wordpress-crowdsec@kubernetescrd

# Persistent storage
persistence:
  storageClass: do-block-storage

# MariaDB bundled with WordPress (for simplicity)
mariadb:
  primary:
    persistence:
      storageClass: do-block-storage

# Admin credentials
wordpressUsername: admin
wordpressPassword: ""
wordpressEmail: you@example.com

# Optional tuning
wordpressBlogName: "My Blog"
wordpressScheme: https
Key points:
ingress.enabled: true
Enables the Ingress resource, allowing external access to the WordPress application.
ingress.ingressClassName: traefik
Tells Kubernetes that Traefik is the ingress controller for this Ingress.
ingress.hostname: blog.crowdsec.net
Sets the hostname for accessing the WordPress blog.
ingress.tls: true
Enables TLS for secure communication.
cert-manager.io/cluster-issuer: letsencrypt-prod
Tells cert-manager to use the letsencrypt-prod ClusterIssuer to provision and manage TLS certificates automatically.
kubernetes.io/ingress.class: traefik
Another way to specify Traefik as the ingress controller.
traefik.ingress.kubernetes.io/router.middlewares: wordpress-crowdsec@kubernetescrd
Critical annotation that applies the CrowdSec middleware to the WordPress Ingress, so CrowdSec security policies are enforced for traffic to this application.
2. TLS lifecycle with cert-manager
cert-manager handles the full TLS lifecycle and maintains your ingress’s trust over time.
To support HTTPS termination on the ingress, you need a ClusterIssuer (for example, letsencrypt-prod as shown above).
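A minimal ClusterIssuer sketch for this setup might look like the following (the email address is a placeholder, and the HTTP-01 solver assumes Traefik handles the challenge ingress):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      # Secret that stores the ACME account key
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: traefik
```

With this in place, the cert-manager.io/cluster-issuer annotation on the WordPress Ingress is enough to have certificates issued and renewed automatically.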
3. Traefik routing and logging for CrowdSec
Traefik routes traffic and produces access logs that CrowdSec can analyze.
deployment:
  kind: DaemonSet
  # mount it into the Traefik container as RW

service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
  spec:
    externalTrafficPolicy: Local

# Make Traefik actually write a file your sidecar can tail
logs:
  access:
    enabled: true
    format: json
    fields:
      defaultMode: keep
      names:
        ServiceName: keep
  general:
    level: INFO
    format: json

# Proxy Protocol needs "enabled", not only trustedIPs
additionalArguments:
  - --accesslog=true
  - --accesslog.format=json
  - --entrypoints.web.proxyProtocol=true
  - --entrypoints.websecure.proxyProtocol=true
  - --entrypoints.web.proxyProtocol.trustedIPs=10.0.0.0/8,192.168.0.0/16
  - --entrypoints.websecure.proxyProtocol.trustedIPs=10.0.0.0/8,192.168.0.0/16

# =======================================================================
# Traefik CRDs (fixes)
# - Middleware lives in "traefik" ns
# - One backend service/port per route (drop the bogus :443 k8s Service)
# =======================================================================
experimental:
  plugins:
    crowdsec-bouncer-traefik-plugin:
      moduleName: "github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
      version: "v1.4.5"

providers:
  kubernetesCRD:
    enabled: true
  kubernetesIngress:
    enabled: true
It is essential to pay attention to externalTrafficPolicy and to the --entrypoints.web{,secure}.proxyProtocol.trustedIPs options. They are required if you want the real client IP address in the logs.
externalTrafficPolicy: Local
Instructs Kubernetes to send external traffic only to nodes that run a pod for the Service, without performing source NAT. This preserves the real client IP, which is essential for accurate logging, rate limiting, and security filtering in ingress controllers like Traefik. Nodes without a local pod will not receive traffic in this mode. To keep every node reachable by the LoadBalancer, Traefik should run as a DaemonSet, with one pod per node, so that every node can accept traffic while still maintaining the original client IP.
--entrypoints.web{,secure}.proxyProtocol.trustedIPs
These options instruct Traefik to trust PROXY protocol headers only when they originate from specific, known sources, such as private network ranges or the public IP addresses of the LoadBalancer. This allows legitimate upstream proxies to forward the real client IP and prevents malicious clients from spoofing these headers.
Keep in mind that the LoadBalancer is often part of an MSP offer. If you need to add these options to an existing setup, the Load Balancer may need to be redeployed. Updating the Traefik configuration alone might not trigger a redeployment, so you need to handle that step explicitly.
The providers are also important.
kubernetesCRD is required to use the Middleware and IngressRoute resources. kubernetesIngress lets Traefik watch standard Ingress changes without having to roll out the deployment.
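As an illustration of what the kubernetesCRD provider enables, here is a sketch of an IngressRoute that attaches the CrowdSec middleware explicitly (the wordpress Service name, port, and TLS secret name are assumptions for this example):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: blog
  namespace: wordpress
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`blog.crowdsec.net`)
      kind: Rule
      middlewares:
        # Apply the CrowdSec middleware defined in the wordpress namespace
        - name: crowdsec
          namespace: wordpress
      services:
        - name: wordpress
          port: 80
  tls:
    secretName: blog-tls
```

This is an alternative to the annotation-based approach shown on the WordPress Ingress; both end up routing traffic through the same middleware.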
The experimental.plugins.crowdsec-bouncer-traefik-plugin section defines where to find the remediation component and which version to use. The plugin name must match the name used under the middleware's spec.plugin. This plugin configuration must be set in the Helm values.yaml file, as the plugin is loaded through the Traefik pod Helm template; command-line arguments alone are not enough to initialize it correctly.
CrowdSec middleware configuration in Traefik:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: crowdsec
  namespace: wordpress
spec:
  plugin:
    crowdsec-bouncer-traefik-plugin:
      enabled: true
      crowdsecMode: stream
      crowdsecLapiScheme: http
      crowdsecLapiHost: crowdsec-service.crowdsec.svc.cluster.local:8080
      crowdsecLapiKey: ""
      httpTimeoutSeconds: 60
      crowdsecAppsecEnabled: true
      crowdsecAppsecHost: crowdsec-appsec-service.crowdsec.svc.cluster.local:7422
      crowdsecAppsecFailureBlock: true
      crowdsecAppsecUnreachableBlock: true
4. CrowdSec decision engine and bouncer
CrowdSec detects anomalies and enforces real-time bans through the bouncer plugin at cluster speed.
Here’s the CrowdSec configuration described through Helm’s values. You will find all references in our documentation here: https://docs.crowdsec.net/u/getting_started/installation/kubernetes
container_runtime: containerd

agent:
  acquisition:
    - namespace: traefik
      podName: "*"
      program: traefik
  enabled: true
  env:
    - name: COLLECTIONS
      value: "crowdsecurity/traefik" # crowdsecurity/appsec-virtual-patching crowdsecurity/appsec-generic-rules

appsec:
  acquisitions:
    - appsec_config: crowdsecurity/appsec-default
      labels:
        type: appsec
      listen_addr: 0.0.0.0:7422
      path: /
      source: appsec
  enabled: true
  env:
    - name: COLLECTIONS
      value: "crowdsecurity/appsec-virtual-patching crowdsecurity/appsec-generic-rules"

config:
  agent_config.local.yaml: ""
  appsec_config.local.yaml: ""
  capi_whitelists.yaml: ""
  config.yaml.local: |
    common:
      log_level: debug
    db_config:
    api:
      server:
        auto_registration: # Activate if not using TLS for authentication
          enabled: true
          token: "${REGISTRATION_TOKEN}" # Do not modify this variable (auto generated and handled by the chart)
          allowed_ranges: # Adapt to the pod IP ranges used by your cluster
            - "127.0.0.1/32"
            - "192.168.0.0/16"
            - "10.0.0.0/8"
            - "172.16.0.0/12"

lapi:
  env:
    - name: CONFIG_FILE
      value: /etc/crowdsec/config.yaml
    - name: BOUNCER_KEY_traefik
      value: ""
    - name: ENROLL_KEY
      value: ""
    - name: ENROLL_INSTANCE_NAME
      value: "do-cluster"
    - name: ENROLL_TAGS
      value: "k8s linux test"
    - name: DEBUG
      value: "true"
Under the hood, Kubernetes orchestrates the reliability layer that keeps CrowdSec running smoothly. Configuration updates propagate instantly to all agents through ConfigMaps, keeping every component in sync. Secrets distribute credentials securely and consistently across the cluster, eliminating the need for manual key handling. When traffic increases, Kubernetes’ horizontal scaling automatically adds capacity, ensuring continuous protection.
No matter how large your environment becomes, CrowdSec remains uniformly deployed, self-healing, and consistently observable, with production-grade resilience and a straightforward deployment model.
Virtual Patching on Kubernetes Ingress with CrowdSec WAF
When a new vulnerability appears, the clock starts ticking. In traditional environments, you wait for vendor patches, rebuild containers, and redeploy applications. That can take hours or days, and during that window, your exposed services remain vulnerable.
Virtual patching removes this gap. Instead of modifying application code, you introduce a security rule at the perimeter, at the ingress, with a WAF layer. Exploit attempts are identified and blocked before they hit your application. You patch the behavior, not the binaries.
In CrowdSec, virtual patching is implemented through the AppSec module and its curated rule collections, such as crowdsecurity/appsec-virtual-patching. These rules detect suspicious payloads and exploit techniques (SQL injections, cross-site scripting, path traversal, remote code execution attempts, and more). As new attack vectors are discovered, the community and the CrowdSec team release new and updated rules that can be applied instantly across all deployments, with no downtime and no redeployment.
Once rules are deployed, you do not have to babysit them. You can focus on the next CVE that might impact your infrastructure. The CrowdSec team tracks CVE publications and publishes a substantial number of virtual patches, ready for deployment. In many cases, the work is already done for you.
Imagine your public WordPress blog is running on Kubernetes, backed by Traefik. A new remote code execution vulnerability has been disclosed in one of your plugins, and a working exploit is now available on public code-sharing sites. In a traditional setup, you would wait for the plugin vendor to ship a fix, rebuild the image, test it, and redeploy all relevant pods. During that period, your exposed instance is a live target.
With CrowdSec AppSec and virtual patching enabled on the ingress, you can react differently. As soon as a virtual patch is available for that vulnerability class, you update the CrowdSec collection through Helm, let Kubernetes roll out the new configuration, and start blocking exploit payloads at the edge. The backend containers continue to run unchanged, but the attack traffic never reaches them. For a DevSecOps team, this fits into existing workflows: one small ruleset change in Git, one Helm release, and the ingress is protected.
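In practice, that "one small ruleset change" can be as simple as extending the COLLECTIONS list in the agent's Helm values; a sketch, assuming you add the hub's WordPress collection alongside the Traefik one:

```yaml
agent:
  env:
    - name: COLLECTIONS
      # Space-separated list of hub collections installed at startup
      value: "crowdsecurity/traefik crowdsecurity/wordpress"
```

Committing this change and running a Helm upgrade rolls the new collection out to every agent in the cluster.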
In a Kubernetes environment, this blend of automation and adaptability fits naturally:
- Cluster-wide coverage: deployed as an ingress middleware or plugin, CrowdSec virtual patching instantly protects every application behind your ingress, across namespaces and microservices.
- Zero downtime updates: security rules reload dynamically, so you stay protected even while rolling out new versions of your applications.
- AppSec rules, scenarios, and collections are auditable and can be versioned like any other part of your code.
- Alignment with web-native architecture: Since most workloads are HTTP-based, virtual patching at the ingress stops attacks before they reach your pods.
- Community-driven intelligence: you benefit from threat intelligence built on the whole CrowdSec user network.
Together, virtual patching and this fire-and-forget model turn CrowdSec on Kubernetes into a self-updating defensive mesh that continuously shields your applications, scales with your cluster, and adapts faster than attackers.
Customizing CrowdSec for Your Kubernetes and DevSecOps Workflows
Each infrastructure is distinct, characterized by its own traffic patterns, custom applications, APIs, and business logic. CrowdSec is not a black box. It is an open, customizable security engine that adapts to your environment, rather than forcing you into a fixed rule set.
Across the stack, you can shape detection and protection logic, from HTTP parsing to behavioral correlation.
At the edge, the CrowdSec AppSec module lets you write your own web protection rules in YAML. You can define custom matchers, request attributes, and payload signatures that reflect how your application behaves. If you need to block unusual API payloads, restrict access to sensitive paths, or detect malformed headers specific to your stack, AppSec rules give you precise control. They can be deployed declaratively in Kubernetes through Helm values.
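A sketch of such a custom AppSec rule follows; the rule name, plugin path, and the cmd parameter are hypothetical, chosen only to illustrate the matcher syntax:

```yaml
name: custom/vulnerable-plugin-virtual-patch
description: "Block command-injection attempts against a hypothetical vulnerable WordPress plugin"
rules:
  - and:
      # Request must target the vulnerable plugin's path
      - zones:
          - URI
        transform:
          - lowercase
        match:
          type: contains
          value: /wp-content/plugins/vulnerable-plugin/
      # And carry shell metacharacters in the (hypothetical) cmd argument
      - zones:
          - ARGS
        variables:
          - cmd
        match:
          type: regex
          value: "[;|`]"
```

Rules like this ship through Helm values like any other configuration, so they are reviewed, versioned, and rolled out with the rest of your infrastructure code.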
CrowdSec also supports scenarios, which are behavioral correlation descriptions that capture attack patterns across logs. Scenarios can track repeated failed logins, high request rates, or multi-step intrusions across containers or namespaces. Since they are written in YAML with a human-readable syntax, anyone in your DevSecOps team can create, test, and version them alongside the application code.
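As an example of that syntax, here is a minimal leaky-bucket scenario sketch that flags repeated failed logins per source IP (the name, thresholds, and filter fields are illustrative assumptions):

```yaml
type: leaky
name: custom/http-login-bruteforce
description: "Detect repeated HTTP 401 responses from the same source IP"
# Only feed 401 responses from HTTP access logs into the bucket
filter: "evt.Meta.log_type == 'http_access-log' && evt.Meta.http_status == '401'"
groupby: evt.Meta.source_ip
capacity: 5        # bucket overflows (alerts) after 5 events
leakspeed: 10s     # one event drains every 10 seconds
blackhole: 1m      # suppress duplicate alerts for 1 minute
labels:
  service: http
  type: bruteforce
  remediation: true
```

When the bucket overflows, the Local API records a decision that the Traefik bouncer enforces on the next stream update.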
Consider a partner API that sends high-volume traffic to a specific path on your platform. The requests are legitimate, but they look unusual compared to regular customer traffic and trigger generic rate limit alerts. Instead of trying to hard-code exceptions into your services, you model this behavior directly in CrowdSec.
You create a small custom AppSec rule in YAML that matches the partner’s source ranges and expected paths, then tune a scenario that alerts only when the same pattern appears from unknown IP ranges. This configuration is stored in Git and shipped with your Helm values like any other piece of infrastructure code. After the next deployment, the partner API runs without noise, while similar traffic from untrusted origins is still detected and blocked. DevSecOps keeps control, avoids manual firewall exceptions, and expresses the business logic as code in one place.
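The "match the partner's source ranges" part can be expressed as a CrowdSec whitelist; a sketch, using a documentation IP range as a stand-in for the real partner CIDR:

```yaml
name: custom/partner-whitelist
description: "Ignore events originating from the trusted partner API ranges"
whitelist:
  reason: "trusted partner API traffic"
  cidr:
    # Placeholder range; replace with the partner's actual source CIDR
    - "203.0.113.0/24"
```

Traffic from these ranges is dropped from scenario processing, while identical patterns from any other origin still feed the detection buckets.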
With the Model Context Protocol (MCP), CrowdSec customization goes a step further. MCP enables AI-assisted rule generation, contextual enrichment, and dynamic configuration. You can connect CrowdSec to external knowledge sources, CI/CD pipelines, or observability tools, and then let it use that context to generate or adapt rules automatically. Instead of manually writing every pattern, you describe the intent, such as “block repeated API misuse from untrusted origins”, and the model can translate that into precise detection or AppSec logic.
Taken together, these features make CrowdSec one of the most customizable security engines in the cloud native world:
- Extend and tailor rules for your specific workloads and log formats.
- Version, test, and deploy rules seamlessly through Helm and GitOps.
- Correlate events across the cluster to identify complex or distributed behaviors.
- Integrate with CI/CD and monitoring pipelines as a first-class component.
In Kubernetes, this flexibility means you are never locked into a static security model. You can evolve your security posture at the same speed as your applications.
Unified Visibility: Using the CrowdSec Console for Kubernetes Security
The CrowdSec Console brings all of this flexibility and automation into a centralized, web-based control plane. It turns your distributed defense network into something you can see, understand, and act on.
Kubernetes and Helm handle deployment and scaling. The Console provides situational awareness, offering a clear view of what your security engine is doing across clusters, namespaces, and clients.
From the Console, you can:
- Visualize real-time attacks, blocked IPs, and active scenarios across your Kubernetes environment, including your custom rules and scenarios.
- Correlate detections from different agents and AppSec modules to understand sources, frequency, and the nature of malicious behaviors.
- Demonstrate protection effectiveness with precise analytics that compare total traffic to blocked requests, which is especially useful for MSP, MSSP, and enterprise reporting.
The Console turns CrowdSec from a powerful engine into a fully managed, observable security platform. It is where DevSecOps teams monitor live defenses, refine rule sets, and align protection strategies with real threats hitting their web-native workloads.
Ready to see this in your own cluster?
Deploy CrowdSec via Helm on your Kubernetes ingress, connect it to the Console, and watch real-world attacks being blocked in minutes. From there, you can evolve your rules and AppSec logic alongside your DevSecOps workflows.


