Security
May 1, 2026

Falco Runtime Security: Protecting Kubernetes Clusters from the Inside

Static analysis and admission control catch misconfigurations before deployment. Falco catches attacks after deployment — detecting privilege escalation, container escapes, credential theft, and anomalous behaviour at the system call level. Here's how Falco works, how to deploy it, and how to write rules that produce signal without noise.

Coding Protocols Team
Platform Engineering

Admission webhooks, PSA, and Kyverno policies all operate at admission time — they can reject a badly configured pod before it runs, but they can't detect what a running pod does. A container that passes all admission checks can still be exploited: a vulnerability in the application code, a compromised dependency, a misconfigured RBAC binding that gets abused. Runtime security is the layer that watches what's actually happening inside running containers.

Falco (CNCF graduated) uses eBPF to intercept system calls from the Linux kernel and apply a rules engine to every syscall event. When a container tries to open /etc/shadow, spawn a shell, or make an outbound connection to an unusual IP, Falco fires an alert in real time. It doesn't prevent the action — it detects and reports it. Detection + response automation (via Falco Sidekick) is how you turn that signal into a security control.
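With JSON output enabled, each detection is emitted as a structured event that downstream tooling can route on. A minimal sketch of consuming one such event in Python — the field names (`time`, `rule`, `priority`, `output`, `output_fields`) follow Falco's JSON output format; the concrete values here are illustrative:

```python
import json

# A representative Falco alert as emitted with json_output: true.
# Values are illustrative; field names follow Falco's JSON schema.
alert = json.loads("""
{
  "time": "2026-05-01T10:32:01.123456789Z",
  "rule": "Shell Spawned in Container",
  "priority": "Warning",
  "output": "Shell spawned in container (user=root shell=bash ...)",
  "output_fields": {
    "container.id": "a1b2c3d4",
    "container.name": "api",
    "proc.name": "bash",
    "user.name": "root"
  }
}
""")

# Route on priority and rule name, the same fields Sidekick filters on
if alert["priority"] in ("Warning", "Error", "Critical"):
    print(f"{alert['rule']}: {alert['output_fields']['proc.name']}")
```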


Architecture

Falco has three layers:

  1. Data source (driver): Intercepts kernel events. Three options:

    • kmod — kernel module, compiled against your kernel version, requires privileged DaemonSet
    • ebpf — legacy eBPF probe, requires kernel 4.14+
    • modern_ebpf — CO-RE (Compile Once, Run Everywhere) eBPF, no kernel headers required, works on most Linux kernels 5.8+, production-ready since Falco 0.35; the Helm chart defaults to auto so you must explicitly set driver.kind=modern_ebpf to select it
  2. Rules engine: Processes events against a set of rules expressed in YAML. Each rule is a boolean expression over event fields (process name, container ID, file path, network socket, user, etc.).

  3. Outputs: Alerts are written to stdout, files, syslog, or pushed to Falco Sidekick for fan-out to Slack, PagerDuty, Datadog, AWS Lambda, and dozens of other destinations.


Installation

bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=modern_ebpf \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true \
  --set "falcosidekick.config.slack.webhookurl=https://hooks.slack.com/services/XXXX" \
  --set "falcosidekick.config.slack.minimumpriority=warning"

The modern_ebpf driver doesn't need a kernel module compilation step — it loads a pre-compiled BPF program. This is the recommended driver for Kubernetes: no host kernel headers needed, no driverkit setup, no privileged init container.

Verify Falco is running and the driver loaded:

bash
kubectl get pods -n falco
# NAME                READY   STATUS    RESTARTS   AGE
# falco-xxxxx         2/2     Running   0          5m
# falco-sidekick-xxx  1/1     Running   0          5m

# Check driver status
kubectl logs -n falco -l app.kubernetes.io/name=falco -c falco | grep "Starting"
# Starting Falco with modern_ebpf driver
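To exercise the rules end to end, the falcosecurity/event-generator project can synthesize suspicious activity inside the cluster — a sketch, assuming the default rules are loaded (image name and subcommand as documented by the event-generator project):

```shell
# Run the event generator as a throwaway pod; it performs actions
# (spawning shells, touching sensitive files) that trip default rules
kubectl run falco-event-generator --rm -it \
  --image=falcosecurity/event-generator --restart=Never -- \
  run syscall --loop

# Watch Falco's alert stream for the resulting detections
kubectl logs -n falco -l app.kubernetes.io/name=falco -c falco -f
```

Kill the generator pod once you've confirmed alerts are flowing; left looping, it will flood your outputs.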

Rule Anatomy

yaml
# A complete Falco rule:
- rule: Shell Spawned in Container
  desc: >
    A shell process was spawned in a container. This may indicate
    interactive access or exploitation of a running container.
  condition: >
    spawned_process
    and container
    and proc.name in (shell_binaries)
    and not proc.pname in (shell_binaries)
    and not container.image.repository in (trusted_shell_images)
  output: >
    Shell spawned in container
    (user=%user.name user_loginuid=%user.loginuid
    container_id=%container.id container_name=%container.name
    image=%container.image.repository:%container.image.tag
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline
    terminal=%proc.tty)
  priority: WARNING
  tags: [container, shell, mitre_execution, T1059]

Key fields:

  • condition: Boolean filter expression over event fields
  • output: Alert message with %field.name interpolations
  • priority: One of EMERGENCY, ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFORMATIONAL, DEBUG
  • tags: Free-form tags; MITRE ATT&CK tags are the convention for threat mapping

Macros are reusable filter fragments:

yaml
- macro: container
  condition: (container.id != host)

- macro: spawned_process
  condition: (evt.type = execve and evt.dir = <)

- macro: spawned_shell_process
  condition: (proc.name in (shell_binaries))
  # Uses the shell_binaries list below; proc.name "in" requires a list, not a macro

Lists hold sets of values:

yaml
- list: shell_binaries
  items: [ash, bash, csh, ksh, sh, tcsh, zsh, dash]

- list: trusted_shell_images
  items: [debug-tools, busybox-debug]
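Conceptually, the rules engine evaluates each condition as a boolean expression over the event's fields, with macros inlined and lists expanded. A toy Python sketch of how the shell rule above composes — illustrative only, not Falco's actual engine:

```python
# Toy evaluation of the "Shell Spawned in Container" condition.
# The sets mirror the shell_binaries / trusted_shell_images lists above.
shell_binaries = {"ash", "bash", "csh", "ksh", "sh", "tcsh", "zsh", "dash"}
trusted_shell_images = {"debug-tools", "busybox-debug"}

def shell_in_container(evt: dict) -> bool:
    return (
        evt["evt.type"] == "execve"                       # spawned_process macro
        and evt["container.id"] != "host"                 # container macro
        and evt["proc.name"] in shell_binaries
        and evt["proc.pname"] not in shell_binaries
        and evt["container.image.repository"] not in trusted_shell_images
    )

event = {
    "evt.type": "execve",
    "container.id": "a1b2c3",
    "proc.name": "bash",
    "proc.pname": "python3",
    "container.image.repository": "my-org/api",
}
print(shell_in_container(event))  # True: bash spawned by a non-shell parent
```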

Key Built-In Rules

Falco ships with a rich ruleset in falco_rules.yaml. The highest-value rules for Kubernetes environments:

Container escape / privilege escalation:

yaml
# Privileged container launched
- rule: Launch Privileged Container
  condition: >
    container_started
    and container.privileged=true
    and not trusted_images

# setns syscall (namespace switching, a common escape technique)
- rule: Change thread namespace
  condition: >
    evt.type = setns
    and not proc.name in (runc, containerd-shim, calico-node)

Credential and secret access:

yaml
# Reading sensitive files
- rule: Read sensitive file untrusted
  condition: >
    open_read
    and sensitive_files
    and not proc.name in (trusted_procs)
    and not container.image.repository in (trusted_images)
  # sensitive_files macro covers: /etc/shadow, /etc/sudoers, /root/.ssh/*, etc.

# Writing to /etc (common persistence technique)
- rule: Write below etc
  condition: >
    open_write
    and etc_dir
    and not proc.name in (known_etc_writers)

Network anomalies:

yaml
# Unexpected outbound connection (data exfiltration, C2 beaconing)
- rule: Unexpected outbound connection destination
  condition: >
    outbound
    and not expected_outbound_destination
    and container

# Netcat, socat, and similar reverse shell tools
- rule: Netcat Remote Code Execution in Container
  condition: >
    spawned_process
    and container
    and proc.name = nc
    and (proc.args contains "-e" or proc.args contains "-c")

Kubernetes Audit Rules

Falco's k8s_audit plugin processes Kubernetes API server audit logs, enabling detection of suspicious cluster operations — not just container-level syscalls.

Configure the Kubernetes API server to send audit events to Falco via webhook:

yaml
# audit-policy.yaml (on API server)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    verbs: [create, update, delete, patch]
    resources:
      - group: ""
        resources: [pods, secrets, configmaps, serviceaccounts]
      - group: "rbac.authorization.k8s.io"
        resources: [roles, clusterroles, rolebindings, clusterrolebindings]
  - level: Metadata
    verbs: [get, list, watch]
    resources:
      - group: ""
        resources: [secrets]
yaml
# API server webhook backend config
apiVersion: v1
kind: Config
clusters:
  - name: falco
    cluster:
      server: http://falco-service.falco.svc.cluster.local:9765/k8s-audit
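On the Falco side, the k8saudit plugin has to be loaded and pointed at the same endpoint. A sketch of the relevant falco.yaml fragment, based on the plugin's documented configuration (library paths may differ by distribution and are handled for you by the Helm chart):

```yaml
plugins:
  - name: k8saudit
    library_path: libk8saudit.so
    init_config: ""
    # Listen on the port the API server webhook config targets
    open_params: "http://:9765/k8s-audit"
  - name: json
    library_path: libjson.so

# Load both the audit event source and the JSON field extractor
load_plugins: [k8saudit, json]
```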

Example audit rules:

yaml
# Creating a pod with host path mounts
- rule: Create Sensitive Mount Pod
  desc: Detect pods created mounting sensitive host paths
  condition: >
    ka.verb = create
    and ka.target.resource = pods
    and ka.req.pod.volumes.hostpath intersects (/proc, /var/run/docker.sock, /run/containerd)
  output: >
    Sensitive mount pod created
    (user=%ka.user.name pod=%ka.resp.name
    ns=%ka.target.namespace mounts=%ka.req.pod.volumes.hostpath)
  priority: WARNING
  source: k8s_audit

# Attaching to a running container (lateral movement indicator)
- rule: Attach/Exec Pod
  desc: An exec or attach was performed on a running pod
  condition: >
    ka.verb in (create)
    and ka.uri.param[subresource] in (exec, attach)
  output: >
    Exec or attach to pod
    (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace cmd=%ka.uri.param[command])
  priority: NOTICE
  source: k8s_audit

# RBAC privilege escalation
- rule: ClusterRole With Wildcard
  desc: Detect creation of ClusterRole with wildcard verbs or resources
  condition: >
    ka.verb in (create, update)
    and ka.target.resource = clusterroles
    and (ka.req.role.rules.resources intersects ["*"]
      or ka.req.role.rules.verbs intersects ["*"])
  output: >
    ClusterRole with wildcard created
    (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules)
  priority: WARNING
  source: k8s_audit

Writing Custom Rules

Custom rules go in a falco_rules.local.yaml file that overrides or extends the default ruleset. Provide this file via a ConfigMap or Helm values:

yaml
# values.yaml
falco:
  rulesFile:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml

customRules:
  custom-rules.yaml: |-
    # Override: allow our debug container to spawn shells
    - list: trusted_shell_images
      items: [my-org/debug-tools]
      override:
        items: append

    # Custom rule: detect access to our secrets namespace
    - rule: Access to Production Secrets Namespace
      desc: Sensitive access to production secrets
      condition: >
        ka.verb in (get, list)
        and ka.target.resource = secrets
        and ka.target.namespace = production
        and ka.user.name != system:serviceaccount:production:my-app
      output: >
        Unexpected access to production secrets
        (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace)
      priority: ERROR
      source: k8s_audit
      tags: [secrets, production, compliance]
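Custom rules can be checked offline before deployment with the falco binary's validate flag, which parses the files and reports unknown fields or syntax errors without starting the engine. A sketch; validate the base ruleset alongside your local file so shared lists and macros resolve:

```shell
# Validate rules against the engine's parser; exits non-zero on errors
falco -V /etc/falco/falco_rules.yaml -V /etc/falco/falco_rules.local.yaml
```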

Rule tuning: New deployments typically generate many false positives. The workflow:

  1. Start with priority: INFORMATIONAL for new rules to observe without alarming
  2. Identify false-positive patterns by reviewing the alert stream (Falco's stdout logs, your SIEM, or the Sidekick web UI)
  3. Add exceptions (list appends, macro overrides) to suppress known-good patterns
  4. Promote to WARNING/CRITICAL once the false positive rate is acceptable
  5. Connect to alerting via Falco Sidekick
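Step 3's exceptions don't require rewriting a rule's condition: Falco rules support a structured exceptions block that a local rules file can append to an upstream rule. A hedged sketch, where my-config-agent and the image name are placeholders for your known-good writers:

```yaml
# Append an exception to the upstream rule rather than rewriting it
- rule: Write below etc
  exceptions:
    - name: config_management_agents
      # Each values tuple is matched field-by-field with these comparators
      fields: [proc.name, container.image.repository]
      comps: [=, =]
      values:
        - [my-config-agent, my-org/config-agent]
  append: true
```

Because the exception is additive, upstream updates to the rule's detection logic still apply.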

Falco Sidekick: Alert Routing and Response

Falco Sidekick receives Falco events and fans them out to 60+ destinations. It also supports response actions — triggering Lambda functions or Kubernetes jobs to automatically respond to detections.

yaml
1# falcosidekick values
2falcosidekick:
3  enabled: true
4  config:
5    # Alerting
6    slack:
7      webhookurl: "https://hooks.slack.com/services/XXXX"
8      minimumpriority: "warning"
9      messageformat: "Falco [{priority}] {rule} — {output}"
10
11    pagerduty:
12      routingkey: "xxxx"
13      minimumpriority: "critical"
14
15    # Enrichment and SIEM
16    elasticsearch:
17      hostport: "http://elasticsearch:9200"
18      index: "falco-events"
19      minimumpriority: "notice"
20
21    # Response automation
22    aws_lambda:
23      functionname: "falco-responder"
24      minimumpriority: "critical"
25      # Lambda receives the full Falco event and can terminate pods,
26      # revoke credentials, or trigger investigation workflows

A Lambda responder that isolates a compromised pod by removing it from service:

python
import json
from kubernetes import client, config

def lambda_handler(event, context):
    falco_event = json.loads(event['body'])
    if falco_event['rule'] == 'Shell Spawned in Container':
        namespace = falco_event['output_fields']['k8s.ns.name']
        pod_name = falco_event['output_fields']['k8s.pod.name']

        # Label the pod for investigation. Paired with a Service selector
        # that excludes quarantine=true, this removes the pod from the
        # Service's endpoints without killing it (preserving forensics).
        config.load_incluster_config()  # or token-based auth from Lambda
        v1 = client.CoreV1Api()
        v1.patch_namespaced_pod(
            name=pod_name,
            namespace=namespace,
            body={"metadata": {"labels": {"quarantine": "true"}}}
        )
        return {"statusCode": 200}

Frequently Asked Questions

How does Falco compare to Tetragon?

Both use eBPF and target runtime security, but with different architectures. Tetragon (Cilium project) can enforce policy at the kernel level — it can actually block syscalls. Falco detects and alerts but doesn't block. Falco has a more mature rules ecosystem and broader Kubernetes integration (k8s_audit plugin, Sidekick). Teams running Cilium for networking often add Tetragon for enforcement alongside Falco for its richer alerting ecosystem. See Cilium eBPF Kubernetes Networking for Tetragon's enforcement model.

Will Falco impact application performance?

The modern eBPF driver (CO-RE) has very low overhead — typically under 1% CPU on most workloads, because events are filtered in the kernel before being passed to userspace. The original kernel module had higher overhead under heavy syscall load (storage-intensive workloads, high-frequency I/O). If you're on kernel 5.8 or newer, use driver.kind=modern_ebpf.

How do I reduce false positives without disabling rules?

Use macros and list overrides to suppress known-good patterns without touching the core rule:

yaml
# Append to the user_known_write_etc_tools list rather than disabling the rule
- list: user_known_write_etc_tools
  items: [my-config-agent]
  override:
    items: append

The override approach means upstream rule updates (new detection logic) are still inherited — you're only modifying the scope of what's trusted.

Can Falco monitor non-containerised workloads?

Yes. Falco can monitor the host. The container.id != host condition filters events to containers only; remove it to monitor all processes. For Kubernetes-specific rules, Falco enriches events with pod/namespace metadata from the Kubernetes API — this enrichment isn't available for bare host processes.
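A host-scoped variant of the earlier shell rule simply drops the container macro; note that this sketch will be noisy on nodes where operators log in interactively:

```yaml
- rule: Shell Spawned on Host
  desc: A shell was spawned by a non-shell parent anywhere on the node
  condition: >
    spawned_process
    and proc.name in (shell_binaries)
    and not proc.pname in (shell_binaries)
  output: >
    Shell spawned on host
    (user=%user.name shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
  priority: NOTICE
  tags: [host, shell]
```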


For admission-time security controls that complement Falco's runtime detection, see Kubernetes Admission Webhooks: Validating and Mutating Workloads. For the broader security hardening posture that Falco slots into, see Kubernetes Security Hardening: A Production Checklist.

Setting up runtime security monitoring for a Kubernetes platform? Talk to us at Coding Protocols — we help platform teams design Falco deployments that produce actionable signal rather than alert fatigue.

Related Topics

Falco
Runtime Security
Kubernetes
eBPF
Security
CNCF
Threat Detection
Platform Engineering
