Security

Runtime Threat Detection with Falco

Advanced · 60 min to complete · 18 min read

Falco watches Linux syscalls in real time and alerts when containers do things they shouldn't — exec into a shell, write to /etc, open a sensitive file. This tutorial walks you through deploying Falco with the eBPF driver, understanding its default rules, writing custom rules for your environment, and routing alerts to Slack.

Before you begin

  • Kubernetes cluster (Linux nodes)
  • Helm and kubectl installed
  • A Slack webhook URL for alert routing
  • Basic familiarity with Linux syscalls helpful

Tags: Kubernetes, Falco, Security, Runtime Security, eBPF, Threat Detection

Admission controllers and Pod Security Admission stop known-bad configurations. They check the intent. Falco watches the behaviour — what your containers actually do at runtime.

An attacker who gets a foothold in a running pod will try to exec a shell, read credentials, make network calls, or write to the filesystem in ways that legitimate application code doesn't. RBAC won't catch this. Network policies won't catch this. Your image scanner definitely won't catch this, because the malicious behaviour happens after the image has already been admitted and the pod is running.

Falco sees all of this at the syscall level. It hooks into the kernel — via eBPF — and evaluates every interesting system call against a ruleset. When a match fires, you get an alert with full context: pod name, namespace, process name, command line, file descriptor, user. You can act on that within seconds.

This tutorial deploys Falco with the eBPF driver, tours the default ruleset, writes two custom rules targeting production workloads, routes alerts to Slack via falco-sidekick, and covers how to tune false positives without touching the default rules file.

What You'll Build

  • Falco deployed as a DaemonSet with the eBPF driver on all Linux nodes
  • Custom rule alerting on kubectl exec into pods in your production namespace
  • Custom rule alerting on writes to /etc inside containers, with a scoped exception for cert-manager
  • Alerts routed to a Slack channel via falco-sidekick
  • False-positive tuning using the exceptions block — without modifying default rules

Why eBPF, Not the Kernel Module

Falco supports two kernel instrumentation drivers: a traditional kernel module and an eBPF probe. The kernel module has been the default for years, but in practice it causes constant headaches.

The kernel module has to be loaded into the kernel, which means it must be compiled for the exact kernel version on each node. On GKE with Container-Optimized OS, on EKS with Bottlerocket, or on any node with Secure Boot enabled, you cannot load an unsigned kernel module at all. You end up building, signing, and distributing a custom module for every kernel you run, or falling back to userspace instrumentation with higher overhead and missed events.

The eBPF probe runs in a sandboxed virtual machine inside the kernel. It does not require module signing. It works on hardened OS images. Verification is built in — the kernel's eBPF verifier rejects unsafe programs before they run. The performance characteristics are comparable.

Use driver.kind=ebpf by default. There is no good reason to use the kernel module on a modern Kubernetes cluster.

Step 1: Deploy Falco with eBPF Driver

Add the Falco Helm repository and install:

bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=ebpf \
  --set tty=true

The DaemonSet schedules one Falco pod on every node. The eBPF probe is loaded per-node during pod startup. Verify the rollout:

bash
kubectl get ds falco -n falco
kubectl get pods -n falco
kubectl logs -n falco ds/falco | head -30

You're looking for this in the logs:

Falco initialized. One or more rules loaded.

It will also print the count of loaded rules. On a fresh install with the default ruleset you should see something like Rules loaded: 86 (the exact number varies with the chart version). If you see driver errors or eBPF load failures, check that your kernel version is 4.14 or later — older kernels lack the eBPF features the probe depends on, and the driver will not load on them.

Step 2: Tour the Default Rules

Before writing custom rules, understand what Falco already detects out of the box. List the active rules:

bash
kubectl exec -n falco ds/falco -- falco -L | head -50

The ones you need to know immediately:

Terminal shell in container — fires when a shell binary (bash, sh, zsh, dash, etc.) is executed inside a container. The condition is essentially evt.type = execve and container and proc.name in (shell_binaries). This is the rule that fires when someone runs kubectl exec -- /bin/sh.

Write below etc — fires on any write to a path under /etc inside a container. The underlying condition checks fd.name startswith /etc combined with an open-for-write syscall. Legitimate application code almost never writes to /etc.

Read sensitive file untrusted — fires when a process reads known sensitive paths (/etc/shadow, /etc/passwd, SSH private keys, service account tokens) and is not in the trusted process list. This catches credential harvesting.

Contact K8S API server from container — fires when a process inside a container makes a TCP connection to the Kubernetes API server address. Legitimate workloads that need API access use service accounts and declared permissions; unexpected connections here suggest lateral movement or enumeration.
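These default rules compose small, reusable macros and lists rather than repeating conditions. Simplified from the shipped ruleset (the real definitions carry more entries and extra guards), the building blocks look roughly like this:

```yaml
# A list names a reusable set of values.
- list: shell_binaries
  items: [ash, bash, csh, ksh, sh, tcsh, zsh, dash]

# A macro names a reusable condition fragment.
- macro: container
  condition: container.id != host

# Rules then compose lists and macros. This is a simplified sketch of the
# default shell rule, not its full shipped definition.
- rule: Terminal shell in container (simplified)
  desc: Shell spawned with an attached terminal inside a container
  condition: evt.type = execve and container and proc.name in (shell_binaries) and proc.tty != 0
  output: Shell spawned in container (pod=%k8s.pod.name shell=%proc.name)
  priority: NOTICE
```

Because the lists and macros are shared, your custom rules can reuse them instead of maintaining their own copies — Step 4 does exactly that with shell_binaries.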

The alert output format looks like this:

14:23:41.891530823: Notice A shell was spawned in a container with an attached terminal
(user=root user_loginuid=-1 k8s.ns=default k8s.pod=nginx-7d9b4dd9c-xkp9z
container=a3f8c2d1e4b5 shell=bash parent=runc cmdline=bash
terminal=34816 container_id=a3f8c2d1e4b5 image=nginx:1.25)

Every alert carries: timestamp, priority (Emergency/Alert/Critical/Error/Warning/Notice/Informational/Debug), rule name, and an output string populated with context variables. The context variables are the key — k8s.ns.name, k8s.pod.name, container.id, proc.cmdline, fd.name — they give you enough to act without having to dig.
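If you enable json_output: true in falco.yaml (the chart exposes falco.yaml settings under its falco values key), each alert arrives as a JSON object with the context variables broken out under output_fields — far easier to route programmatically than regexing the text line. A small Python sketch, using an illustrative alert whose field values are made up:

```python
import json

# Illustrative Falco alert as emitted with json_output: true (values made up).
raw = """{
  "time": "2026-04-24T14:31:05.112843001Z",
  "priority": "Notice",
  "rule": "Terminal shell in container",
  "output": "14:31:05.112843001: Notice A shell was spawned in a container ...",
  "output_fields": {
    "k8s.ns.name": "default",
    "k8s.pod.name": "nginx-7d9b4dd9c-xkp9z",
    "proc.cmdline": "sh",
    "container.id": "c9e1f2a3b4d5"
  }
}"""

alert = json.loads(raw)
fields = alert["output_fields"]

# Route on structured fields instead of parsing the human-readable string.
summary = f"[{alert['priority']}] {alert['rule']} ns={fields['k8s.ns.name']} pod={fields['k8s.pod.name']}"
print(summary)
```

This prints `[Notice] Terminal shell in container ns=default pod=nginx-7d9b4dd9c-xkp9z` — the same context the text output carries, without any parsing ambiguity.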

Step 3: Trigger a Default Rule

Before writing anything custom, verify your Falco install is actually detecting events. Get a running pod in the default namespace:

bash
kubectl get pods -n default

Exec into it to trigger the "Terminal shell in container" rule:

bash
kubectl exec -it <pod-name> -n default -- /bin/sh

In a second terminal, watch Falco logs:

bash
kubectl logs -n falco ds/falco -f | grep "shell"

You should see output similar to:

14:31:05.112843001: Notice A shell was spawned in a container with an attached terminal
(user=ajeet user_loginuid=1000 k8s.ns=default k8s.pod=nginx-7d9b4dd9c-xkp9z
container=c9e1f2a3b4d5 shell=sh parent=kubectl cmdline=sh
terminal=34816 container_id=c9e1f2a3b4d5 image=nginx:1.25)

If you see this, Falco is working. The eBPF probe is intercepting syscalls and the default ruleset is evaluating them correctly.

Step 4: Write Custom Rule 1 — Alert on kubectl exec in Production

The default shell rule fires for any namespace. In development namespaces, developers exec into pods routinely — that's normal. In production, it should never happen unless there's an incident, and even then it should be logged and alerted.

Create a ConfigMap with the custom rule:

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-custom-rules
  namespace: falco
data:
  custom-rules.yaml: |
    - rule: Kubectl exec into production pod
      desc: Detect when kubectl exec is used in the production namespace
      condition: >
        evt.type = execve and
        container and
        k8s.ns.name = "production" and
        proc.name in (shell_binaries)
      output: >
        kubectl exec detected in production
        (user=%user.name pod=%k8s.pod.name ns=%k8s.ns.name
        shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
      priority: WARNING
      tags: [production, shell, custom]

The condition re-uses the shell_binaries macro that Falco ships — you don't need to maintain that list yourself. The scoping to k8s.ns.name = "production" means this rule only fires for your production namespace; adjust the string to match your actual namespace name.

A standalone ConfigMap like this documents the shape of the rule, but the chart will not mount it on its own — Falco only loads rules supplied through its values. Pass the rule via the customRules Helm values key instead, which renders an equivalent ConfigMap for you:

yaml
# values-custom.yaml
customRules:
  custom-rules.yaml: |
    - rule: Kubectl exec into production pod
      desc: Detect when kubectl exec is used in the production namespace
      condition: >
        evt.type = execve and
        container and
        k8s.ns.name = "production" and
        proc.name in (shell_binaries)
      output: >
        kubectl exec detected in production
        (user=%user.name pod=%k8s.pod.name ns=%k8s.ns.name
        shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
      priority: WARNING
      tags: [production, shell, custom]
bash
helm upgrade falco falcosecurity/falco \
  --namespace falco \
  --reuse-values \
  -f values-custom.yaml

The customRules key mounts the content as additional rule files. The Falco pods will restart and load the new rules.

Step 5: Write Custom Rule 2 — Alert on Writes to /etc

Falco ships a "Write below etc" default rule, but its output doesn't include Kubernetes context by default in older chart versions, and it has no namespace-based exceptions. Here's a targeted version that adds k8s.pod.name to the output and explicitly excludes cert-manager — which legitimately writes TLS certificates to /etc/ssl:

Add this to your custom-rules.yaml under the same data key:

yaml
    - rule: Write to /etc in container (custom)
      desc: Alert on any write to /etc inside a container, excluding cert-manager
      condition: >
        evt.type in (open, openat, openat2) and
        evt.is_open_write = true and
        fd.name startswith /etc and
        container and
        not k8s.ns.name = "cert-manager"
      output: >
        Write to /etc detected
        (file=%fd.name pod=%k8s.pod.name ns=%k8s.ns.name
        proc=%proc.name cmdline=%proc.cmdline)
      priority: ERROR
      tags: [filesystem, etc, custom]

The evt.is_open_write field is a Falco-provided boolean that evaluates true when the open flags include O_WRONLY or O_RDWR. Checking all three open syscall variants (open, openat, openat2) ensures coverage across different glibc versions and container base images.
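Concretely, "open for write" is a property of the flags passed to the open syscall, not of the path. A Python sketch of the check that evt.is_open_write expresses (an illustration of the flag semantics, not Falco's code):

```python
import os

# Access-mode bits of the open(2) flags. os.O_ACCMODE isn't defined on every
# platform, so fall back to the conventional Linux value 0o3.
ACCMODE = getattr(os, "O_ACCMODE", 0o3)

def is_open_write(flags: int) -> bool:
    # O_RDONLY is 0, so a plain `flags & os.O_WRONLY` test would misbehave;
    # compare the masked access-mode bits instead.
    return (flags & ACCMODE) in (os.O_WRONLY, os.O_RDWR)

print(is_open_write(os.O_RDONLY))               # read-only open: no match
print(is_open_write(os.O_WRONLY | os.O_CREAT))  # write open: would match the rule
print(is_open_write(os.O_RDWR))                 # read-write open: would match too
```

The same logic explains why the rule needs no separate check for file creation: creating a file under /etc necessarily opens it with a write access mode.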

Priority is ERROR here rather than WARNING because writes to /etc are a stronger signal than shell access, which still has legitimate uses. Tune priority to match your on-call policy.

Step 6: Route Alerts to Slack via falco-sidekick

By default, Falco writes alerts to stdout. In a Kubernetes cluster, those go into pod logs. That's fine if you're shipping logs to a central system, but if you want real-time alerting — not "someone looks at logs 20 minutes later" — you need falco-sidekick.

falco-sidekick is a companion deployment that receives Falco events over HTTP and forwards them to external destinations: Slack, PagerDuty, Datadog, Elasticsearch, OpsGenie, Loki, and more. Deploy it alongside Falco by enabling it in the same Helm release:

bash
helm upgrade falco falcosecurity/falco \
  --namespace falco \
  --reuse-values \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/services/YOUR/WEBHOOK/URL" \
  --set falcosidekick.config.slack.minimumpriority="warning"

Setting minimumpriority on the Slack output means only WARNING and above get sent to Slack. DEBUG, INFORMATIONAL, and NOTICE events still appear in Falco logs — they just don't page you.
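The cutoff semantics are simple enough to sketch. This Python illustration mirrors the minimumpriority comparison (sidekick itself is written in Go; this is the logic, not its implementation), and is handy if you post-process alerts yourself:

```python
# Falco's priority ladder, most to least severe.
PRIORITIES = ["emergency", "alert", "critical", "error",
              "warning", "notice", "informational", "debug"]

def passes(event_priority: str, minimum: str) -> bool:
    """True if the event is at least as severe as the configured minimum."""
    return PRIORITIES.index(event_priority.lower()) <= PRIORITIES.index(minimum.lower())

print(passes("Error", "warning"))    # error outranks warning: forwarded
print(passes("Warning", "warning"))  # equal severity: forwarded
print(passes("Notice", "warning"))   # below warning: stays in pod logs only
```

With minimumpriority="warning", the custom /etc rule (ERROR) and the production exec rule (WARNING) both reach Slack, while the default Notice-level shell rule does not.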

Verify the sidekick pod is running and connected:

bash
kubectl get pods -n falco | grep sidekick
kubectl logs -n falco deploy/falco-falcosidekick | tail -20

You should see:

time="2026-04-24T14:31:00Z" level=info msg="Falco Sidekick is up and listening" outputs="[slack]"

Trigger a rule — exec into a pod — and confirm the Slack message arrives within a few seconds. The Slack notification includes the timestamp, priority level, rule name, and the full output string with all context variables. It's enough to identify the exact pod and process without additional investigation.
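Under the hood, falcosidekick.enabled=true makes the chart turn on Falco's JSON and HTTP outputs, pointed at the sidekick Service. The equivalent hand-written falco.yaml looks roughly like this (the Service name follows your Helm release name, so treat it as an assumption for your install):

```yaml
# falco.yaml — roughly what the chart configures for you
json_output: true                 # sidekick consumes the JSON form of each alert
json_include_output_property: true
http_output:
  enabled: true
  url: "http://falco-falcosidekick:2801"   # sidekick's default listen port
```

Knowing this wiring helps when debugging: if alerts appear in Falco logs but never reach Slack, check the HTTP output URL and the sidekick Service name first.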

Step 7: Tune False Positives with exceptions

Once Falco is running in production, you will get false positives. Some legitimate workload will exec a shell during an init container, or a DaemonSet will write configuration to /etc during startup. The wrong response is to disable the rule. The right response is the exceptions block.

The exceptions block lets you carve out specific conditions from an existing rule without editing the rule itself. This is critical because custom edits to the default rules file get overwritten when you upgrade the Falco chart.

Here is how to exempt known-good shell usage in specific namespaces from the default "Terminal shell in container" rule:

yaml
    - rule: Terminal shell in container
      exceptions:
        - name: known-shell-users
          fields: [k8s.ns.name, proc.name]
          comps: [=, in]
          values:
            - [kube-system, [bash]]
            - [monitoring, [sh, bash]]
      override:
        exceptions: append   # on Falco older than 0.36, use `append: true` instead

Falco merges a fragment carrying an existing rule's name into that rule when the fragment is marked as an append (append: true, or override: exceptions: append on Falco 0.36+); exceptions are additive, so the default condition and output stay intact. The result: bash in kube-system and sh/bash in monitoring no longer fire the shell alert.

Keep exceptions as narrow as possible. I've seen teams add not container to suppress noisy rules — that exempts every container from the rule, which makes the rule completely useless. Scope by namespace first. If you need finer granularity, scope by k8s.pod.name or proc.name. If you're exempting a specific image, scope by container.image.repository.

Every exception you add is a blind spot. Document why it exists.
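To make the fields/comps/values semantics concrete: an exception matches an event when every (field, comparator, value) triple holds, and any matching exception suppresses the alert. A Python illustration of that evaluation (not Falco's actual implementation, which supports more comparators):

```python
def exception_matches(event: dict, fields: list, comps: list, values_row: list) -> bool:
    """True when every (field, comp, value) triple holds for this event."""
    for field, comp, expected in zip(fields, comps, values_row):
        actual = event.get(field)
        if comp == "=" and actual != expected:
            return False
        if comp == "in" and actual not in expected:
            return False
    return True

def suppressed(event: dict, values_rows: list, fields: list, comps: list) -> bool:
    """The alert is suppressed if any values row matches the event."""
    return any(exception_matches(event, fields, comps, row) for row in values_rows)

# The exception from the example above.
fields = ["k8s.ns.name", "proc.name"]
comps = ["=", "in"]
rows = [["kube-system", ["bash"]], ["monitoring", ["sh", "bash"]]]

print(suppressed({"k8s.ns.name": "kube-system", "proc.name": "bash"}, rows, fields, comps))
print(suppressed({"k8s.ns.name": "production", "proc.name": "bash"}, rows, fields, comps))
```

Note how the match requires every field in the tuple to hold: bash in production still fires, because only the process name matches.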

Verification

Run through this checklist after completing the setup:

bash
# List active rules — confirm your custom rules appear
kubectl exec -n falco ds/falco -- falco -L | grep -E "exec into production|Write to /etc"

# Watch live alerts
kubectl logs -n falco ds/falco -f

# Confirm sidekick is forwarding to Slack
kubectl logs -n falco deploy/falco-falcosidekick | grep -i "slack"

# Trigger the production exec rule
kubectl exec -it <production-pod> -n production -- /bin/sh
# Watch for the alert in Falco logs and the Slack channel

# Trigger the /etc write rule
kubectl exec -it <any-pod> -- sh -c "echo test >> /etc/test-falco"
# This should fire at ERROR priority

Production Considerations

Performance overhead. The eBPF probe adds roughly 1–3% CPU overhead per node under typical workloads. That number climbs under very high syscall rates — if you're running a node with thousands of container processes doing heavy I/O, measure before you deploy. The kernel module is marginally cheaper on CPU, but the compatibility tradeoffs outweigh that.

Rule maintenance. Falco ships rule updates independently of the chart via falcoctl. Running falcoctl artifact install falco-rules pulls the latest ruleset without a Helm upgrade; recent chart versions also run a falcoctl sidecar (the falcoctl.artifact.follow values) that keeps rules current automatically. If you manage updates yourself, set up a CronJob or a pipeline step to do it regularly. Old rules miss new attack patterns.
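If you run your own update job rather than relying on the chart's falcoctl sidecar, a minimal sketch looks like this. The image tag, schedule, and especially the volume wiring that delivers the installed rules to the directory your Falco pods read are assumptions to adapt to your install:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: falco-rules-update
  namespace: falco
spec:
  schedule: "0 6 * * *"              # daily at 06:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: falcoctl
              image: falcosecurity/falcoctl:0.7.0   # assumption — pin a tag you've vetted
              args: ["artifact", "install", "falco-rules"]
              volumeMounts:
                - name: falco-rules
                  mountPath: /etc/falco             # default install dir for rules artifacts
          volumes:
            - name: falco-rules
              hostPath:
                path: /etc/falco                    # assumption: Falco pods read rules from this node path
```

Remember that Falco only picks up new rule files on reload; the chart's sidecar handles that signal for you, which is why it's the simpler option when available.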

Priority tuning. Set minimumpriority on each falco-sidekick output to match the urgency of that channel. Slack gets warning and above. PagerDuty gets error and above. Elasticsearch or S3 gets everything for compliance retention. Do not send debug and informational to Slack — you will mute the channel within a week.

Persistent alert storage. Slack history has retention limits and is not a compliance artifact. Forward alerts to Elasticsearch, OpenSearch, or S3 via falco-sidekick's output plugins. Keep at least 90 days of runtime security events for most compliance frameworks (SOC 2, ISO 27001). Some require 12 months.

Common Mistakes

1. Using the kernel module on hardened nodes. Container-Optimized OS, Bottlerocket, and Secure Boot-enabled nodes will reject an unsigned kernel module. Use driver.kind=ebpf. On kernels 5.8 and newer you can go further with driver.kind=modern_ebpf, a CO-RE probe built into the Falco binary that needs no driver download at all; on kernels older than 4.14, neither eBPF driver will load and the kernel module is your only option. (driver.kind=gvisor also exists, but only for gVisor-sandboxed runtimes.) Check the compatibility matrix.

2. Writing overly broad rules. Alerting on any exec in any container drowns you in noise from your own team's legitimate kubectl usage. Always scope custom rules to specific namespaces, workload types, or process names. A noisy rule is an ignored rule.

3. Not deploying falco-sidekick. Falco logs to stdout. If you're not actively watching those logs or shipping them somewhere with alerting, detections happen and nobody sees them. Sidekick costs nothing and takes five minutes to set up.

4. Modifying the default rules file directly. The default rules file lives inside the Falco container image and gets overwritten on every chart upgrade. Any change you make there disappears silently. Use the customRules Helm values key for new rules and the exceptions block for tuning existing ones.

5. Not testing rules before deploying. Falco can validate a rules file without running: falco -V your-rule.yaml checks the syntax and field names before you ship. Validation won't catch a logically wrong condition, though — a rule that compares the wrong value silently fails to match, producing no error and no alerts — so trigger each custom rule at least once in a staging cluster before trusting it in production.

Cleanup

bash
helm uninstall falco -n falco
kubectl delete configmap falco-custom-rules -n falco 2>/dev/null || true
kubectl delete namespace falco

Official References

  • Falco Documentation — Official docs covering installation, rules language, fields, and the eBPF driver
  • Falco Rules Reference — Full reference for conditions, outputs, macros, lists, and the exceptions block
  • Falco Supported Fields — Complete list of every field available in Falco conditions and outputs (proc.*, fd.*, k8s.*, etc.)
  • falco-sidekick — Source, configuration reference, and supported output targets (Slack, PagerDuty, Elasticsearch, and 50+ others)
  • Falco Rules Updates with falcoctl — How to update Falco rules without redeploying using the falcoctl artifact management tool
