v2.0.0
Release date: 2022-11-01 05:25:27
Latest release of cloudflare/ebpf_exporter: v2.4.2 (2024-05-31 06:41:40)
ebpf_exporter
v2 is here!
This release comes with a bunch of breaking changes (all for the better!), so be sure to read the release notes below.
First and foremost, we migrated from BCC to libbpf. BCC has served us well over the years, but it has a major drawback: it compiles eBPF programs at runtime, which requires a compiler and kernel headers, and has a chance of failing due to kernel discrepancies between hosts and kernel versions. It was hard to do static linking with BCC, so we ended up providing a binary linked against an older libc, for which you had to provide your own libbcc (which could also break due to an unstable ABI).
With libbpf all these problems go away:
- Programs (now called configs) are compiled in advance, and for each config you have an eBPF ELF object and a yaml config describing how to extract metrics out of it.
- Thanks to libbpf and CO-RE you can compile once and run everywhere, worrying less about runtime failures (see the sketch after this list).
- It's easy to statically compile in libbpf, so we now provide a statically compiled binary that you can use anywhere with no dependencies. We also have a Dockerfile in the repo (not yet published on Docker Hub) if you're inclined to use that, and it's easier to run than ever.
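To give a flavor of what CO-RE buys you, here's a minimal sketch (not taken from the examples; the tracepoint, field and program name are purely illustrative) of reading a kernel struct field through a relocatable accessor, so the same compiled object keeps working across kernels with different struct layouts:

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

/* Illustrative only: read task->tgid via a CO-RE relocatable accessor.
 * BPF_CORE_READ records a relocation instead of a hardcoded offset,
 * and libbpf fixes it up at load time for the running kernel. */
SEC("tracepoint/sched/sched_process_exit")
int handle_exit(void *ctx)
{
    struct task_struct *task = (struct task_struct *)bpf_get_current_task();
    u32 tgid = BPF_CORE_READ(task, tgid);

    bpf_printk("exiting tgid=%u", tgid);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```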
Big thanks to @wenlxie for doing the bulk of the work on porting to libbpf in #130. Another big thanks to @aquasecurity for their work on libbpfgo, which made it a lot easier for us to switch.
In the BCC repo itself there's an effort to migrate programs from BCC to libbpf (the libbpf-tools directory), and those programs can be used as inspiration for what ebpf_exporter can provide for you as metrics.
Now to config changes. Previously you needed to make one big yaml config with all your metric descriptions and metrics intermingled. Now each logical program is called a config (a .yaml file), and each config has a dedicated eBPF ELF object (a .bpf.o file compiled from a .bpf.c file). When you start ebpf_exporter, you need to give it the path to the directory with your configs and tell it which configs to load. This allowed us to greatly flatten and simplify the configs, and it allows you to have simpler tooling for configuring what ebpf_exporter should enable.
Having eBPF C code in separate files also allows you to use your regular tooling to build eBPF ELF objects. In the examples directory you'll find a collection of our example configs along with a Makefile to build the eBPF code. The expectation is that you would replicate something similar for your internal configs, and you have all the needed bits and pieces provided for you to copy and adapt. We provide vmlinux.h for both x86_64 (aka amd64) and aarch64 (aka arm64).
Having separate .bpf.o files allows you to compile not just C code, but anything that produces a valid eBPF ELF object. We tried with Rust, but unsuccessfully; please feel free to send a PR if you have better luck with it. We still expect that the majority of people will use plain old C, since that's what libbpf mainly supports and has a lot of examples for.
Since programs for configs need to be compiled in advance, we compile them as part of a CI job, which allows us to spot mistakes early.
You no longer need to describe how to attach your eBPF programs in the config; it all happens in code. Take the timers code as an example:
SEC("tracepoint/timer/timer_start")
int do_count(struct trace_event_raw_timer_start* ctx)
We use the libbpf-provided SEC macro to tell what to attach to, which in this case is the timer:timer_start tracepoint. You can use any SEC that libbpf provides (there are many) and it should work out of the box, including uprobe, usdt and fentry (the latter currently requires a kernel patch on aarch64).
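To make the overall shape of these programs concrete, below is a minimal sketch of a complete program along the same lines. It is illustrative rather than the actual timers example: the map name and counting logic are assumptions, but the layout (includes, a map for ebpf_exporter to read, a SEC-annotated program, a license string) is the standard libbpf one:

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define MAX_ENTRIES 1024

/* Illustrative map (name and layout are assumptions): count
 * timer_start events per timer callback function. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, MAX_ENTRIES);
    __type(key, u64);
    __type(value, u64);
} timer_starts SEC(".maps");

SEC("tracepoint/timer/timer_start")
int do_count(struct trace_event_raw_timer_start *ctx)
{
    u64 key = (u64) ctx->function;
    u64 *count, one = 1;

    count = bpf_map_lookup_elem(&timer_starts, &key);
    if (count)
        __sync_fetch_and_add(count, 1);
    else
        bpf_map_update_elem(&timer_starts, &key, &one, BPF_NOEXIST);

    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

The accompanying yaml config then describes how to turn the contents of such a map into metrics, as mentioned above.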
We piggyback on libbpf for most of the stuff with SEC, with the only exception being perf_event. For that we have a custom handler allowing you to set the type, config, and frequency of the event you want to trace. Below is type=HARDWARE, config=PERF_COUNT_HW_CACHE_MISSES at 1Hz from the llcstat example:
SEC("perf_event/type=0,config=3,frequency=1")
int on_cache_miss(struct bpf_perf_event_data *ctx)
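A complete program body for this could look something like the sketch below; the map name and the per-CPU bookkeeping are assumptions for illustration, not the actual llcstat code:

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define MAX_CPUS 512

/* Illustrative map (name is an assumption): sampled LLC misses per CPU. */
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, MAX_CPUS);
    __type(key, u32);
    __type(value, u64);
} llc_misses SEC(".maps");

SEC("perf_event/type=0,config=3,frequency=1")
int on_cache_miss(struct bpf_perf_event_data *ctx)
{
    u32 cpu = bpf_get_smp_processor_id();
    u64 *count = bpf_map_lookup_elem(&llc_misses, &cpu);

    if (count)
        /* sample_period is how many events this sample stands for */
        __sync_fetch_and_add(count, ctx->sample_period);

    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```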
With uprobe support we also provide a way for you to run some code when your program is attached:
SEC("uprobe//proc/self/exe:post_attach_mark")
int do_init()
There's a post_attach_mark() function in ebpf_exporter that runs immediately after all configs are attached. In the bpf-jit example we use it to initialize a metric that would otherwise require a probe to run, which might take a while.
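As an illustration of the pattern (the map and the value written are made up for this sketch, not the actual bpf-jit code), a program attached this way can seed a map the moment the exporter finishes attaching:

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Illustrative map (an assumption): a single value initialized at attach
 * time so the metric exists before any "real" probe has fired. */
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, u32);
    __type(value, u64);
} initialized SEC(".maps");

/* ebpf_exporter calls post_attach_mark() in its own binary right after
 * attaching all configs, which fires this uprobe exactly once. */
SEC("uprobe//proc/self/exe:post_attach_mark")
int do_init(struct pt_regs *ctx)
{
    u32 key = 0;
    u64 one = 1;

    bpf_map_update_elem(&initialized, &key, &one, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```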
We now allow loose program attachment. Previously all programs had to be attached successfully for ebpf_exporter to run; now we allow failures and export a metric indicating whether each program was attached or not. This way you can use alerting to detect when this happens, while not sacrificing unrelated configs. This is handy if your programs attach to something that might be missing from some kernels, like a static function that is sometimes not visible. We used it in our cachestat example.
Speaking of metrics, if you have the kernel.bpf_stats_enabled sysctl enabled, we now also report how many times each of your eBPF programs ran and how long it spent running, which might be handy if you want to get an idea of how long things take.
In code and for the debug endpoint we renamed "tables" to "maps" to match eBPF terminology. If you were using /tables for debugging, you should switch to /maps. Previously configs needed to specify which table metrics came from; now it's automatically inferred from the metric name itself.
We have updated our benchmark, which now includes fentry, so you can see how much faster it is than good old kprobe and how much overhead you should expect in general (it's not much).
All of these changes are reflected in the README, so if you are starting from scratch, you shouldn't worry. If you are currently using ebpf_exporter v1, it will take some work to upgrade. The good news is that the metrics you export do not need to change; internally at Cloudflare we upgraded without any issues.
You may have noticed that previously ebpf_exporter took some time to start up due to the need to compile programs. Since this is no longer the case, you should expect much faster startup times now. For complex configs like biolatency you should also expect lower memory usage (we observed a drop from ~250MiB to ~30MiB during the upgrade).
If you need some documentation to get up to speed with libbpf and CO-RE, here are three great blog posts from libbpf maintainer @anakryiko:
- https://nakryiko.com/posts/bpf-portability-and-co-re/
- https://nakryiko.com/posts/bcc-to-libbpf-howto-guide/
- https://nakryiko.com/posts/bpf-core-reference-guide/
We hope you'll enjoy these changes. As usual, please let us know if you run into any issues.
1. ebpf_exporter.aarch64 (14.88 MB)
2. ebpf_exporter.x86_64 (15.82 MB)
3. sha256sums.txt (175 B)