Installation and Configuration
- 1: Deploy on Kubernetes
- 2: Deploy as a container
- 3: Deploy with a package
- 4: Configure Runtime Hooks
- 5: Install tetra CLI
- 6: Verify installation
- 7: Configure Tetragon
- 8: Metrics
- 9: FAQ
1 - Deploy on Kubernetes
The recommended way to deploy Tetragon on a Kubernetes cluster is to use the Helm chart with Helm 3. Tetragon uses the helm.cilium.io repository to release the helm chart.
Install
To install the latest release of the Tetragon Helm chart, use the following commands. You can customize the installation by appending --set KEY1=VALUE1,KEY2=VALUE2 to the helm install command.
helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon cilium/tetragon -n kube-system
To wait until the Tetragon deployment is ready, use the following kubectl command:
kubectl rollout status -n kube-system ds/tetragon -w
Configuration
You can then make modifications to the Tetragon configuration using helm upgrade, as in the following example.
helm upgrade tetragon cilium/tetragon -n kube-system --set tetragon.grpc.address=localhost:1337
You can also edit the tetragon-config ConfigMap directly and restart the Tetragon daemonset with:
kubectl edit cm tetragon-config -n kube-system
kubectl rollout restart ds/tetragon -n kube-system
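To review the settings currently applied, you can dump the ConfigMap; each key generally corresponds to a Tetragon daemon flag:
# Dump the current Tetragon configuration
kubectl get cm tetragon-config -n kube-system -o yaml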
Upgrade
To upgrade Tetragon to a specific version of the Helm chart, use the following command.
helm upgrade tetragon cilium/tetragon -n kube-system --version 0.9.0
Uninstall
Uninstall Tetragon using the following command.
helm uninstall tetragon -n kube-system
2 - Deploy as a container
Install
Stable versions
To run a stable version, please check the Tetragon quay repository and select the version you want. For example, to run the latest version, which is currently v1.2.0:
docker run --name tetragon --rm -d \
--pid=host --cgroupns=host --privileged \
-v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf \
quay.io/cilium/tetragon:v1.2.0
Unstable-development versions
To run unstable development versions of Tetragon, use the
latest
tag from Tetragon-CI quay repository.
This will run the image that was built from the latest commit available on the
Tetragon main branch.
docker run --name tetragon --rm -d \
--pid=host --cgroupns=host --privileged \
-v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf \
quay.io/cilium/tetragon-ci:latest
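To check that the agent started correctly, you can use standard Docker commands as a quick sanity check:
# Confirm the container is running
docker ps --filter name=tetragon
# Inspect the agent logs for startup errors
docker logs tetragon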
Configuration
There are multiple ways to set configuration options:
- Append Tetragon controlling settings at the end of the command
As an example, set the file where to export JSON events with the --export-filename argument:
docker run --name tetragon --rm -d \
  --pid=host --cgroupns=host --privileged \
  -v /sys/kernel:/sys/kernel \
  quay.io/cilium/tetragon:v1.2.0 \
  /usr/bin/tetragon --export-filename /var/log/tetragon/tetragon.log
For a complete list of CLI arguments, please check Tetragon daemon configuration.
- Environment variables
Every controlling setting can be set using environment variables. Prefix the setting with the keyword TETRAGON_ and upper-case it. As an example, to set where to export JSON events, --export-filename becomes TETRAGON_EXPORT_FILENAME:
docker run --name tetragon --rm -d \
  --pid=host --cgroupns=host --privileged \
  --env "TETRAGON_EXPORT_FILENAME=/var/log/tetragon/tetragon.log" \
  -v /sys/kernel:/sys/kernel \
  quay.io/cilium/tetragon:v1.2.0
For a complete list of all controlling settings, please check the Tetragon daemon configuration.
- Configuration files mounted as volumes
On the host machine, set the configuration drop-ins inside the /etc/tetragon/tetragon.conf.d/ directory according to the configuration examples, then mount it as a volume:
docker run --name tetragon --rm -d \
  --pid=host --cgroupns=host --privileged \
  -v /sys/kernel:/sys/kernel \
  -v /etc/tetragon/tetragon.conf.d/:/etc/tetragon/tetragon.conf.d/ \
  quay.io/cilium/tetragon:v1.2.0
This will map the /etc/tetragon/tetragon.conf.d/ drop-in directory from the host into the container.
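As a sketch, assuming you want to change the JSON export destination, you could create the drop-in on the host before starting the container; the drop-in file name export-file follows the example used in the Configure Tetragon section of this guide:
# Create the drop-in directory on the host
sudo mkdir -p /etc/tetragon/tetragon.conf.d/
# A drop-in file is named after a setting and contains its value
echo "/var/log/tetragon/tetragon.log" | sudo tee /etc/tetragon/tetragon.conf.d/export-file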
See Tetragon daemon configuration reference for further details.
3 - Deploy with a package
Install
Tetragon will be managed as a systemd service. Tarballs are built and distributed along with the assets in the releases.
- First download the latest binary tarball, using curl for example to download the amd64 release:
curl -LO https://github.com/cilium/tetragon/releases/download/v1.2.0/tetragon-v1.2.0-amd64.tar.gz
- Extract the downloaded archive, and start the install script to install Tetragon. Feel free to inspect the script before starting it.
tar -xvf tetragon-v1.2.0-amd64.tar.gz
cd tetragon-v1.2.0-amd64/
sudo ./install.sh
If Tetragon was successfully installed, the final output should be similar to:
Tetragon installed successfully!
- Finally, you can check the Tetragon systemd service.
sudo systemctl status tetragon
The output should be similar to:
● tetragon.service - Tetragon eBPF-based Security Observability and Runtime Enforcement
     Loaded: loaded (/lib/systemd/system/tetragon.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-01-23 20:08:16 CET; 5s ago
       Docs: https://github.com/cilium/tetragon/
   Main PID: 138819 (tetragon)
      Tasks: 17 (limit: 18985)
     Memory: 151.7M
        CPU: 913ms
     CGroup: /system.slice/tetragon.service
             └─138819 /usr/local/bin/tetragon
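You can also follow the service logs with journalctl to watch for startup errors:
sudo journalctl -u tetragon -f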
Configuration
The default Tetragon configuration shipped with the Tetragon package will be installed in /usr/local/lib/tetragon/tetragon.conf.d/. Local administrators can change the configuration by adding drop-ins inside /etc/tetragon/tetragon.conf.d/ to override the default settings, or use the command line flags. To restore default settings, remove any added configuration inside /etc/tetragon/tetragon.conf.d/.
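For example, as a sketch of the drop-in mechanism (using the debug setting purely as an illustration), you could enable debug logging and restart the service:
# The drop-in file is named after the setting it overrides
echo "true" | sudo tee /etc/tetragon/tetragon.conf.d/debug
sudo systemctl restart tetragon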
See Tetragon daemon configuration for further details.
Upgrade
To upgrade Tetragon:
- Download the new tarball.
curl -LO https://github.com/cilium/tetragon/releases/download/v1.2.0/tetragon-v1.2.0-amd64.tar.gz
- Stop the Tetragon service.
sudo systemctl stop tetragon
- Remove the old Tetragon version.
sudo rm -fr /usr/lib/systemd/system/tetragon.service
sudo rm -fr /usr/local/bin/tetragon
sudo rm -fr /usr/local/lib/tetragon/
- Install the upgraded Tetragon version.
tar -xvf tetragon-v1.2.0-amd64.tar.gz
cd tetragon-v1.2.0-amd64/
sudo ./install.sh
Uninstall
To completely remove Tetragon, run the uninstall.sh script that is provided inside the tarball.
sudo ./uninstall.sh
Or remove it manually.
sudo systemctl stop tetragon
sudo systemctl disable tetragon
sudo rm -fr /usr/lib/systemd/system/tetragon.service
sudo systemctl daemon-reload
sudo rm -fr /usr/local/bin/tetragon
sudo rm -fr /usr/local/bin/tetra
sudo rm -fr /usr/local/lib/tetragon/
To just purge custom settings:
sudo rm -fr /etc/tetragon/
Operating
gRPC API access
By default, the gRPC API is exposed on the unix socket unix:///var/run/tetragon/tetragon.sock. To access the gRPC API with the tetra client, set --server-address to point to the corresponding address:
sudo tetra --server-address unix:///var/run/tetragon/tetragon.sock getevents
See restrict gRPC API access for further details.
Tetragon Events
By default, JSON events are logged to /var/log/tetragon/tetragon.log unless this location is changed. Logs are always rotated into the same directory. To read real-time JSON events, tailing the log file is enough.
sudo tail -f /var/log/tetragon/tetragon.log
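Since events are JSON, you can also filter them on the fly, for example with jq (assuming jq is installed; the field path below matches Tetragon process_exec events):
# Print the binary path of each process execution event
sudo tail -f /var/log/tetragon/tetragon.log | jq -r '.process_exec.process.binary // empty'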
Tetragon also ships a gRPC client that can be used to receive events.
- To print events in JSON format using the tetra gRPC client:
sudo tetra --server-address "unix:///var/run/tetragon/tetragon.sock" getevents
- To print events in human compact format:
sudo tetra --server-address "unix:///var/run/tetragon/tetragon.sock" getevents -o compact
What’s next
See Explore security observability events to learn more about how to see the Tetragon events.
4 - Configure Runtime Hooks
See Tetragon Runtime Hooks, for an introduction to the topic.
Install Tetragon with Runtime Hooks
We use minikube as the example platform because it supports both cri-o and containerd, but the same steps can be applied to other platforms.
Setup Helm
helm repo add cilium https://helm.cilium.io
helm repo update
Setup cluster
minikube start --driver=kvm2 --container-runtime=cri-o
Tetragon Runtime Hooks use NRI. NRI is enabled by default starting from containerd version 2.0. For version 1.7, however, it needs to be enabled in the configuration. This requires a section such as the following to be present in containerd's configuration (e.g., /etc/containerd/config.toml):
[plugins."io.containerd.nri.v1.nri"]
disable = false
disable_connections = false
plugin_config_path = "/etc/nri/conf.d"
plugin_path = "/opt/nri/plugins"
plugin_registration_timeout = "5s"
plugin_request_timeout = "2s"
socket_path = "/var/run/nri/nri.sock"
You can use the tetragon-oci-hook-setup tool to patch the configuration file:
minikube ssh cat /etc/containerd/config.toml > /tmp/old-config.toml
./contrib/tetragon-rthooks/tetragon-oci-hook-setup patch-containerd-conf enable-nri --config-file=/tmp/old-config.toml --output=/tmp/new-config.toml
diff -u /tmp/old-config.toml /tmp/new-config.toml
Output should be something like:
--- /tmp/old-config.toml 2024-07-02 11:51:23.893382357 +0200
+++ /tmp/new-config.toml 2024-07-02 11:51:52.841533035 +0200
@@ -67,3 +67,11 @@
mutation_threshold = 100
schedule_delay = "0s"
startup_delay = "100ms"
+ [plugins."io.containerd.nri.v1.nri"]
+ disable = false
+ disable_connections = false
+ plugin_config_path = "/etc/nri/conf.d"
+ plugin_path = "/opt/nri/plugins"
+ plugin_registration_timeout = "5s"
+ plugin_request_timeout = "2s"
+ socket_path = "/var/run/nri/nri.sock"
Install the new configuration file and restart containerd:
minikube cp /tmp/new-config.toml /etc/containerd/config.toml
minikube ssh sudo systemctl restart containerd
Install Tetragon
For the OCI hooks interface:
helm install \
    --namespace kube-system \
    --set rthooks.enabled=true \
    --set rthooks.interface=oci-hooks \
    tetragon ./install/kubernetes/tetragon
For the NRI hook interface:
helm install \
    --namespace kube-system \
    --set rthooks.enabled=true \
    --set rthooks.interface=nri-hook \
    tetragon ./install/kubernetes/tetragon
Verify that the Tetragon pods are up:
kubectl -n kube-system get pods | grep tetragon
With output similar to:
tetragon-hpjwq 2/2 Running 0 2m42s
tetragon-operator-664ddc8957-9lmd2 1/1 Running 0 2m42s
tetragon-rthooks-m24xr 1/1 Running 0 2m42s
Test Runtime Hooks
Start a pod:
kubectl run test --image=debian --rm -it -- /bin/bash
Check logs:
minikube ssh 'tail -1 /opt/tetragon/tetragon-oci-hook.log'
Output:
{"time":"2024-07-01T10:57:21.435689144Z","level":"INFO","msg":"hook request to agent succeeded","hook":"create-container","start-time":"2024-07-01T10:57:21.433755984Z","req-cgroups":"/kubepods/besteffort/podd4e74de2-0db8-4143-ae55-695b2489c727/crio-828977b42e3149b502b31708778d0c057efbce038af80d0882ed3e0cb0ff8796","req-rootdir":"/run/containers/storage/overlay-containers/828977b42e3149b502b31708778d0c057efbce038af80d0882ed3e0cb0ff8796/userdata","req-containerName":"test"}
Configuring Runtime Hooks installation
Installation directory (installDir)
For Tetragon runtime hooks to work, a binary (tetragon-oci-hook) needs to be installed on the host. Installation is handled by the tetragon-rthooks daemonset, and the binary is installed in /opt/tetragon by default.
In some systems, however, the /opt directory is mounted read-only. This will result in errors such as:
Warning  FailedMount  8s (x5 over 15s)  kubelet  MountVolume.SetUp failed for volume "oci-hook-install-path" : mkdir /opt/tetragon: read-only file system
You can use the rthooks.installDir helm variable to select a different location. For example:
--set rthooks.installDir=/run/tetragon
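For example, a full installation combining the runtime-hook flags shown in this section might look like the following sketch:
helm install tetragon ./install/kubernetes/tetragon \
    --namespace kube-system \
    --set rthooks.enabled=true \
    --set rthooks.interface=oci-hooks \
    --set rthooks.installDir=/run/tetragon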
Failure check (failAllowNamespaces)
By default, tetragon-oci-hook logs information to /opt/tetragon/tetragon-oci-hook.log.
Inspecting this file, we get the following messages:
{"time":"2024-03-05T15:18:52.669044463Z","level":"WARN","msg":"hook request to the agent failed","hook":"create-container","start-time":"2024-03-05T15:18:42.667916779Z","req-cgroups":"/kubepods/besteffort/pod43ec7f32-3c9f-429f-a01c-fbaafff9f8e1/crio-1d18fd58f0879f6152a1c421f8f1e0987845394ee17001a16bee2df441c112f3","req-rootdir":"/run/containers/storage/overlay-containers/1d18fd58f0879f6152a1c421f8f1e0987845394ee17001a16bee2df441c112f3/userdata","err":"connecting to agent (context deadline exceeded) failed: unix:///var/run/cilium/tetragon/tetragon.sock"}
{"time":"2024-03-05T15:18:52.66912411Z","level":"INFO","msg":"failCheck determined that we should not fail this container, even if there was an error","hook":"create-container","start-time":"2024-03-05T15:18:42.667916779Z"}
{"time":"2024-03-05T15:18:53.01093915Z","level":"WARN","msg":"hook request to the agent failed","hook":"create-container","start-time":"2024-03-05T15:18:43.01005032Z","req-cgroups":"/kubepods/burstable/pod60f971e6-ac38-4aa0-b2d3-549333b2c803/crio-c0bf4e38bfa4ed5c58dd314d505f8b6a0f513d2f2de4dc4aa86a55c7c3e963ab","req-rootdir":"/run/containers/storage/overlay-containers/c0bf4e38bfa4ed5c58dd314d505f8b6a0f513d2f2de4dc4aa86a55c7c3e963ab/userdata","err":"connecting to agent (context deadline exceeded) failed: unix:///var/run/cilium/tetragon/tetragon.sock"}
{"time":"2024-03-05T15:18:53.010999098Z","level":"INFO","msg":"failCheck determined that we should not fail this container, even if there was an error","hook":"create-container","start-time":"2024-03-05T15:18:43.01005032Z"}
{"time":"2024-03-05T15:19:04.034580703Z","level":"WARN","msg":"hook request to the agent failed","hook":"create-container","start-time":"2024-03-05T15:18:54.033449685Z","req-cgroups":"/kubepods/besteffort/pod43ec7f32-3c9f-429f-a01c-fbaafff9f8e1/crio-d95e61f118557afdf3713362b9034231fee9bd7033fc8e7cc17d1efccac6f54f","req-rootdir":"/run/containers/storage/overlay-containers/d95e61f118557afdf3713362b9034231fee9bd7033fc8e7cc17d1efccac6f54f/userdata","err":"connecting to agent (context deadline exceeded) failed: unix:///var/run/cilium/tetragon/tetragon.sock"}
{"time":"2024-03-05T15:19:04.03463995Z","level":"INFO","msg":"failCheck determined that we should not fail this container, even if there was an error","hook":"create-container","start-time":"2024-03-05T15:18:54.033449685Z"}
To understand these messages, consider what tetragon-oci-hook should do if it cannot contact the Tetragon agent. You may want to stop certain workloads from running. For other workloads (for example, the tetragon pod itself) you probably want to do the opposite and let them start. To this end, tetragon-oci-hook checks the container annotations, and by default does not fail a container if it belongs in the same namespace as Tetragon. The previous messages concern the tetragon containers (tetragon-operator and tetragon) and they indicate that the choice was made not to fail these containers from starting.
Furthermore, users may specify additional namespaces where the container will not fail if the tetragon agent cannot be contacted, via the rthooks.failAllowNamespaces option. For example:
rthooks:
enabled: true
failAllowNamespaces: namespace1,namespace2
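The same option can be passed on the command line; note that commas inside a Helm --set value must be escaped with a backslash:
helm upgrade tetragon ./install/kubernetes/tetragon -n kube-system \
    --set rthooks.enabled=true \
    --set rthooks.failAllowNamespaces='namespace1\,namespace2'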
5 - Install tetra CLI
This guide presents various methods to install tetra in your environment.
Install the latest release
Autodetect your environment
This shell script autodetects the OS and the architecture, downloads the archive of the binary and its SHA 256 digest, verifies that the actual digest matches the expected one, installs the binary, and removes the download artifacts.
The script requires Go, curl(1), and the sha256sum(1) utilities. For Go, see how to install the latest Go release; curl and the checksum utility are usually distributed in common Linux distributions, and you can usually find them under the curl and coreutils packages respectively.
GOOS=$(go env GOOS)
GOARCH=$(go env GOARCH)
curl -L --remote-name-all https://github.com/cilium/tetragon/releases/latest/download/tetra-${GOOS}-${GOARCH}.tar.gz{,.sha256sum}
sha256sum --check tetra-${GOOS}-${GOARCH}.tar.gz.sha256sum
sudo tar -C /usr/local/bin -xzvf tetra-${GOOS}-${GOARCH}.tar.gz
rm tetra-${GOOS}-${GOARCH}.tar.gz{,.sha256sum}
Quick install for each environment
This installation method retrieves the archive adapted to your environment, extracts it, and installs it in the /usr/local/bin directory.
It requires curl(1), which should already be installed in your environment; otherwise you can usually find it under the curl package.
# Linux amd64
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-linux-amd64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
# Linux arm64
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-linux-arm64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
# macOS amd64
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-darwin-amd64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
# macOS arm64
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-darwin-arm64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
# Windows amd64
curl -LO https://github.com/cilium/tetragon/releases/latest/download/tetra-windows-amd64.tar.gz
tar -xzf tetra-windows-amd64.tar.gz
# move the binary to a directory in your PATH
# Windows arm64
curl -LO https://github.com/cilium/tetragon/releases/latest/download/tetra-windows-arm64.tar.gz
tar -xzf tetra-windows-arm64.tar.gz
# move the binary to a directory in your PATH
Install using homebrew
Homebrew is a package manager for macOS and Linux. A formula is available to fetch precompiled binaries. You can also use it to build from source (using the --build-from-source flag), which requires a Go dependency.
brew install tetra
Install a specific release
You can retrieve the releases of tetra along with the releases of Tetragon on GitHub at the following URL: https://github.com/cilium/tetragon/releases.
To download a specific release, you can use the following script, replacing the OS, ARCH and TAG values with your desired options.
OS=linux
ARCH=amd64
TAG=v0.9.0
curl -L https://github.com/cilium/tetragon/releases/download/${TAG}/tetra-${OS}-${ARCH}.tar.gz | tar -xz
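After installation, you can confirm that the binary runs and reports the expected version:
tetra version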
6 - Verify installation
Verify Tetragon image signature
Learn how to verify Tetragon container image signatures.
Prerequisites
You will need to install cosign.
Verify Signed Container Images
Since version 0.8.4, all Tetragon container images are signed using cosign.
Let’s verify a Tetragon image’s signature using the cosign verify command:
cosign verify --certificate-github-workflow-repository cilium/tetragon --certificate-oidc-issuer https://token.actions.githubusercontent.com <Image URL> | jq
Note that you might need to set the COSIGN_EXPERIMENTAL=1 environment variable to allow verification of images signed in KEYLESS mode. To learn more about keyless signing, please refer to the Sigstore documentation.
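For example, putting this together for the v1.2.0 image used earlier in this guide:
COSIGN_EXPERIMENTAL=1 cosign verify \
    --certificate-github-workflow-repository cilium/tetragon \
    --certificate-oidc-issuer https://token.actions.githubusercontent.com \
    quay.io/cilium/tetragon:v1.2.0 | jq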
Verify the SBOM signature
Download and verify the signature of the software bill of materials
A Software Bill of Materials (SBOM) is a complete, formally structured list of components that are required to build a given piece of software. SBOM provides insight into the software supply chain and any potential concerns related to license compliance and security that might exist.
Starting with version 0.8.4, all Tetragon images include an SBOM. The SBOM is generated in SPDX format using the bom tool. If you are new to the concept of SBOM, see what an SBOM can do for you.
Download SBOM
The SBOM can be downloaded from the supplied Tetragon image using the cosign download sbom command.
cosign download sbom --output-file sbom.spdx <Image URL>
Verify SBOM Image Signature
To ensure the SBOM is tamper-proof, its signature can be verified using the cosign verify command.
COSIGN_EXPERIMENTAL=1 cosign verify --certificate-github-workflow-repository cilium/tetragon --certificate-oidc-issuer https://token.actions.githubusercontent.com --attachment sbom <Image URL> | jq
It can be validated that the SBOM image was signed using GitHub Actions in the Cilium repository from the Issuer and Subject fields of the output.
7 - Configure Tetragon
Depending on your deployment mode, Tetragon configuration can be changed by:
kubectl edit cm -n kube-system tetragon-config
# Change your configuration setting, save and exit
# Restart Tetragon daemonset
kubectl rollout restart -n kube-system ds/tetragon
# Change configuration inside /etc/tetragon/ then restart container.
# Example:
# 1. As a privileged user, write to the file /etc/tetragon/tetragon.conf.d/export-file
# the path where to export events, example "/var/log/tetragon/tetragon.log"
# 2. Bind mount host /etc/tetragon into container /etc/tetragon
# Tetragon events will be exported to /var/log/tetragon/tetragon.log
echo "/var/log/tetragon/tetragon.log" > /etc/tetragon/tetragon.conf.d/export-file
docker run --name tetragon --rm -d \
--pid=host --cgroupns=host --privileged \
-v /etc/tetragon:/etc/tetragon \
-v /sys/kernel:/sys/kernel \
-v /var/log/tetragon:/var/log/tetragon \
quay.io/cilium/tetragon:v1.2.0 \
/usr/bin/tetragon
# Change configuration inside /etc/tetragon/ then restart systemd service.
# Example:
# 1. As a privileged user, write to the file /etc/tetragon/tetragon.conf.d/export-file
# the path where to export events, example "/var/log/tetragon/tetragon.log"
# 2. Bind mount host /etc/tetragon into container /etc/tetragon
# Tetragon events will be exported to /var/log/tetragon/tetragon.log
echo "/var/log/tetragon/tetragon.log" > /etc/tetragon/tetragon.conf.d/export-file
systemctl restart tetragon
To read more about Tetragon configuration, please check our reference pages:
- For Kubernetes deployments, see the Helm chart reference.
- For Container or systemd deployments, see the Daemon configuration reference.
Enable Process Credentials
On Linux each process has various associated user and group IDs and capabilities, known as process credentials. To enable visibility into process_credentials, run Tetragon with the enable-process-cred setting enabled.
kubectl edit cm -n kube-system tetragon-config
# Change "enable-process-cred" from "false" to "true", then save and exit
# Restart Tetragon daemonset
kubectl rollout restart -n kube-system ds/tetragon
echo "true" > /etc/tetragon/tetragon.conf.d/enable-process-cred
docker run --name tetragon --rm -d \
--pid=host --cgroupns=host --privileged \
-v /etc/tetragon:/etc/tetragon \
-v /sys/kernel:/sys/kernel \
-v /var/log/tetragon:/var/log/tetragon \
quay.io/cilium/tetragon:v1.2.0 \
/usr/bin/tetragon
# Write "true" to the drop-in file /etc/tetragon/tetragon.conf.d/enable-process-cred
# Run the following as a privileged user then restart tetragon service
echo "true" > /etc/tetragon/tetragon.conf.d/enable-process-cred
systemctl restart tetragon
8 - Metrics
Tetragon exposes a number of Prometheus metrics that can be used for two main purposes:
- Monitoring the health of Tetragon itself
- Monitoring the activity of processes observed by Tetragon
For the full list, refer to metrics reference.
Enable/Disable Metrics
Kubernetes
In a Kubernetes installation, metrics are enabled by default and exposed via the endpoint /metrics. The tetragon service exposes the Tetragon Agent metrics on port 2112, and the tetragon-operator-metrics service exposes the Tetragon Operator metrics on port 2113.
You can change the port via Helm values:
tetragon:
prometheus:
port: 2222 # default is 2112
tetragonOperator:
prometheus:
port: 3333 # default is 2113
Or entirely disable the metrics server:
tetragon:
prometheus:
enabled: false # default is true
tetragonOperator:
prometheus:
enabled: false # default is true
Non-Kubernetes
In a non-Kubernetes installation, metrics are disabled by default. You can enable them by setting the metrics server address of the Tetragon Agent to, for example, :2112, via the --metrics-server flag.
If using systemd, set the metrics-address entry in a file under the /etc/tetragon/tetragon.conf.d/ directory.
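For example, enable the agent metrics server on port 2112 via a drop-in and restart the service:
echo ":2112" | sudo tee /etc/tetragon/tetragon.conf.d/metrics-address
sudo systemctl restart tetragon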
Verify that metrics are exposed
To verify that the metrics server has started, check the logs of the Tetragon components. Here’s an example for the Tetragon Agent, running on Kubernetes:
kubectl -n <tetragon-namespace> logs ds/tetragon
The logs should contain a line similar to the following:
time="2023-09-22T23:16:24+05:30" level=info msg="Starting metrics server" addr="localhost:2112"
To see what metrics are exposed, you can access the metrics endpoint directly. In Kubernetes, forward the metrics port:
kubectl -n <tetragon-namespace> port-forward svc/tetragon 2112:2112
Access the localhost:2112/metrics endpoint either in a browser or, for example, using curl.
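For example:
curl -s localhost:2112/metrics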
You should see a list of metrics similar to the following:
# HELP promhttp_metric_handler_errors_total Total number of internal errors encountered by the promhttp metric handler.
# TYPE promhttp_metric_handler_errors_total counter
promhttp_metric_handler_errors_total{cause="encoding"} 0
promhttp_metric_handler_errors_total{cause="gathering"} 0
# HELP tetragon_errors_total The total number of Tetragon errors. For internal use only.
# TYPE tetragon_errors_total counter
[...]
Configure labels on events metrics
Depending on the workloads running in the environment, Events Metrics may have very high cardinality. This is particularly likely in Kubernetes environments, where each pod creates a separate timeseries. To avoid overwhelming Prometheus, Tetragon provides an option to choose which labels are populated in these metrics.
You can configure the labels via Helm values or the --metrics-label-filter flag. Set the value to a comma-separated list of enabled labels:
tetragon:
prometheus:
metricsLabelFilter: "namespace,workload,binary" # "pod" label is disabled
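Outside Kubernetes, the equivalent sketch using the flag form would be (port and label list taken from the examples above):
tetragon --metrics-server :2112 --metrics-label-filter namespace,workload,binary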
Enable Prometheus ServiceMonitors
Typically, metrics are scraped by Prometheus or another compatible agent (for example OpenTelemetry Collector), stored in Prometheus or another compatible database, then queried and visualized for example using Grafana.
In Kubernetes, you can install Prometheus and Grafana using the Kube-Prometheus-Stack Helm chart. This Helm chart includes the Prometheus Operator, which allows you to configure Prometheus via Kubernetes custom resources.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
By default, the Prometheus Operator only discovers PodMonitors and ServiceMonitors within its namespace that are labeled with the same release tag as the prometheus-operator release. Hence, you need to configure Prometheus to also scrape data from Tetragon’s ServiceMonitor resources, which don’t fulfill those conditions. This is configurable when installing the Kube-Prometheus-Stack by setting the serviceMonitorSelectorNilUsesHelmValues flag.
Refer to the official Kube-Prometheus-Stack documentation for more details.
Tetragon comes with default ServiceMonitor resources containing the scrape configuration for the Agent and Operator.
You can enable them via Helm values:
tetragon:
prometheus:
serviceMonitor:
enabled: true
tetragonOperator:
prometheus:
serviceMonitor:
enabled: true
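If Tetragon is already installed, the same values can be applied with a helm upgrade:
helm upgrade tetragon cilium/tetragon -n kube-system \
    --set tetragon.prometheus.serviceMonitor.enabled=true \
    --set tetragonOperator.prometheus.serviceMonitor.enabled=true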
To ensure that Prometheus has detected the Tetragon metrics endpoints, you can check the Prometheus targets:
- Access the Prometheus UI.
- Navigate to the “Status” tab and select “Targets”.
- Verify that the Tetragon metric endpoints are listed and their status is UP.
9 - FAQ
What is the minimum Linux kernel version to run Tetragon?
Tetragon needs Linux kernel version 4.19 or greater.
We currently run tests on stable long-term support kernels 4.19, 5.4, 5.10, 5.15 and bpf-next, see this test workflow for up to date information. Not all Tetragon features work with older kernel versions. BPF evolves rapidly and we recommend you use the most recent stable kernel possible to get the most out of Tetragon’s features.
Note that Tetragon needs BTF support which might take some work on older kernels.
What are the Linux kernel configuration options needed to run Tetragon?
This is the list of needed configuration options, note that this might evolve quickly with new Tetragon features:
# CORE BPF
CONFIG_BPF
CONFIG_BPF_JIT
CONFIG_BPF_JIT_DEFAULT_ON
CONFIG_BPF_EVENTS
CONFIG_BPF_SYSCALL
CONFIG_HAVE_BPF_JIT
CONFIG_HAVE_EBPF_JIT
CONFIG_FTRACE_SYSCALLS
# BTF
CONFIG_DEBUG_INFO_BTF
CONFIG_DEBUG_INFO_BTF_MODULES
# Enforcement
CONFIG_BPF_KPROBE_OVERRIDE
# CGROUP and Process tracking
CONFIG_CGROUPS=y           Control Group support
CONFIG_MEMCG=y             Memory Control group
CONFIG_BLK_CGROUP=y        Generic block IO controller
CONFIG_CGROUP_SCHED=y
CONFIG_CGROUP_PIDS=y       Process Control group
CONFIG_CGROUP_FREEZER=y    Freeze and unfreeze tasks controller
CONFIG_CPUSETS=y           Manage CPUSETs
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y     Devices Control group
CONFIG_CGROUP_CPUACCT=y    CPU accounting controller
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y        Attach eBPF programs to a cgroup
CGROUP_FAVOR_DYNMODS=y     (optional, kernel >= 6.0)
                           Reduces the latencies of dynamic cgroup modifications at the
                           cost of making hot path operations such as forks and exits
                           more expensive. Platforms with frequent cgroup migrations
                           could enable this option as a potential alleviation for pod
                           and container association issues.
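To check which of these options are enabled on a running system, you can grep the kernel configuration. The location of the config varies by distribution; the two paths below are common but not guaranteed to exist:
# Most distributions ship the config under /boot
grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=|CONFIG_DEBUG_INFO_BTF=' /boot/config-$(uname -r)
# Some kernels expose it via procfs instead
zcat /proc/config.gz | grep -E 'CONFIG_BPF=|CONFIG_DEBUG_INFO_BTF='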
At runtime, to probe if your kernel has sufficient features turned on, you can run tetra with root privileges with the probe command:
sudo tetra probe
You can also run this command directly from the tetragon container image on a Kubernetes cluster node. For example:
kubectl run bpf-probe --image=quay.io/cilium/tetragon-ci:latest --privileged --restart=Never -it --rm --command -- tetra probe
The output should be similar to this (with boolean values depending on your actual configuration):
override_return: true
buildid: true
kprobe_multi: false
fmodret: true
fmodret_syscall: true
signal: true
large: true
Tetragon failed to start complaining about a missing BTF file
You might have encountered the following issues:
level=info msg="BTF discovery: default kernel btf file does not exist" btf-file=/sys/kernel/btf/vmlinux
level=info msg="BTF discovery: candidate btf file does not exist" btf-file=/var/lib/tetragon/metadata/vmlinux-5.15.49-linuxkit
level=info msg="BTF discovery: candidate btf file does not exist" btf-file=/var/lib/tetragon/btf
[...]
level=fatal msg="Failed to start tetragon" error="tetragon, aborting kernel autodiscovery failed: Kernel version \"5.15.49-linuxkit\" BTF search failed kernel is not included in supported list. Please check Tetragon requirements documentation, then use --btf option to specify BTF path and/or '--kernel' to specify kernel version"
Tetragon needs BTF (BPF Type Format) support to load its BPF programs using CO-RE (Compile Once - Run Everywhere). In brief, CO-RE is useful to load BPF programs that have been compiled on a different kernel version than the target kernel. Indeed, kernel structures change between versions and BPF programs need to access fields in them. So CO-RE uses the BTF file of the kernel in which you are loading the BPF program to know the differences between the structs and patch the field offsets in the accessed structures. CO-RE allows portability of the BPF programs but requires a kernel with BTF enabled.
Most of the common Linux distributions now ship with BTF enabled and do not require any extra work; this is the kernel option CONFIG_DEBUG_INFO_BTF=y. To check if BTF is enabled on your Linux system and see the BTF data file of your kernel, look at the standard location /sys/kernel/btf/vmlinux. By default, Tetragon will look for this file (this is the first line in the log output above).
If your kernel does not support BTF you can:
- Retrieve the BTF file for your kernel version from an external source.
- Build the BTF file from your kernel debug symbols. You will need pahole to add BTF metadata to the debugging symbols and LLVM to minimize the metadata size.
- Rebuild your kernel with CONFIG_DEBUG_INFO_BTF set to y.
Tetragon will also look into /var/lib/tetragon/btf for the vmlinux file (this is the third line in the log output above). Or you can use the --btf flag to directly indicate Tetragon where to locate the file.
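For example, when running a standalone binary with a BTF file retrieved from an external source (the path below is a placeholder):
# --btf points Tetragon at a BTF file outside the default search locations
sudo ./tetragon --bpf-lib bpf/objs --btf /path/to/vmlinux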
If you encounter this issue while using Docker Desktop on macOS, please refer to can I run Tetragon on Mac computers.
Can I install and use Tetragon in standalone mode (outside of k8s)?
Yes! Refer to the Container or Package installation guides.
Otherwise you can build Tetragon from source by running make to generate standalone binaries.
Make sure to take a look at the Development Setup guide for the build requirements. Then use sudo ./tetragon --bpf-lib bpf/objs to run Tetragon.
Can I run Tetragon on Mac computers?
Yes! You can run Tetragon locally by running a Linux virtual machine on your Mac.
On macOS running on amd64 (also known as Intel Mac) and arm64 (also known as Apple Silicon Mac), open source and commercial solutions exist to run virtual machines. Here is a list of popular open source projects that you can use:
- Lima: Linux virtual machines: website lima-vm.io.
- UTM: Virtual machines for iOS and macOS: website mac.getutm.app.
- VirtualBox: website virtualbox.org (arm64 in developer preview).
You can use these solutions to run a recent Linux distribution that ships with BTF debug information support.
Please note that you need to use a recent Docker Desktop version on macOS (for example 24.0.6 with kernel 6.4.16-linuxkit), because the Linux virtual machine provided by older Docker Desktop versions lacked support for the BTF debug information. The BTF debug information file is needed for CO-RE in order to load the Tetragon sensors. Run the following commands to see if Tetragon can be used on your Docker Desktop version:
# The Kernel needs to be compiled with CONFIG_DEBUG_INFO_BTF and
# CONFIG_DEBUG_INFO_BTF_MODULES support:
$ docker run -it --rm --privileged --pid=host ubuntu \
nsenter -t 1 -m -u -n -i sh -c \
'cat /proc/config.gz | gunzip | grep CONFIG_DEBUG_INFO_BTF'
CONFIG_DEBUG_INFO_BTF=y
CONFIG_DEBUG_INFO_BTF_MODULES=y
# "/sys/kernel/btf/vmlinux" should be present:
$ docker run -it --rm --privileged --pid=host ubuntu \
nsenter -t 1 -m -u -n -i sh -c 'ls -la /sys/kernel/btf/vmlinux'
-r--r--r-- 1 root root 4988627 Nov 21 20:33 /sys/kernel/btf/vmlinux