Upgrade libbpf to version 1.1

Signed-off-by: wenlong_12 <wenlong12@huawei.com>
@@ -10,6 +10,11 @@ sphinx:
   builder: html
   configuration: docs/conf.py
 
+formats:
+  - htmlzip
+  - pdf
+  - epub
+
 # Optionally set the version of Python and requirements required to build your docs
 python:
   version: 3.7
@@ -1 +1 @@
-fe68195daf34d5dddacd3f93dd3eafc4beca3a0e
+54c3f1a81421f85e60ae2eaae7be3727a09916ee
BUILD.gn (2 changes)
@@ -86,8 +86,6 @@ ohos_shared_library("libbpf") {
     "./src/str_error.h",
     "./src/strset.c",
     "./src/strset.h",
-    "./src/xsk.c",
-    "./src/xsk.h",
   ]
   configs = [ ":libbpf_config" ]
   public_configs = [ ":libbpf_public_config" ]
@@ -1 +1 @@
-dc37dc617fabfb1c3a16d49f5d8cc20e9e3608ca
+7b43df6c6ec38c9097420902a1c8165c4b25bf70
@@ -7,7 +7,7 @@ Usage-Guide:
 SPDX-License-Identifier: BSD-2-Clause
 License-Text:
 
-Copyright (c) <year> <owner> . All rights reserved.
+Copyright (c) 2015 The Libbpf Authors. All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:
README.md (116 changes)
@@ -1,17 +1,33 @@
-This is a mirror of [bpf-next Linux source
-tree](https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf-next)'s
-`tools/lib/bpf` directory plus its supporting header files.
-
-All the gory details of syncing can be found in `scripts/sync-kernel.sh`
-script.
-
-Some header files in this repo (`include/linux/*.h`) are reduced versions of
-their counterpart files at
-[bpf-next](https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/)'s
-`tools/include/linux/*.h` to make compilation successful.
-
-BPF/libbpf usage and questions
-==============================
-
+<picture>
+  <source media="(prefers-color-scheme: dark)" srcset="assets/libbpf-logo-sideways-darkbg.png" width="40%">
+  <img src="assets/libbpf-logo-sideways.png" width="40%">
+</picture>
+
+libbpf
+[![Github Actions Builds & Tests](https://github.com/libbpf/libbpf/actions/workflows/test.yml/badge.svg)](https://github.com/libbpf/libbpf/actions/workflows/test.yml)
+[![Coverity](https://img.shields.io/coverity/scan/18195.svg)](https://scan.coverity.com/projects/libbpf)
+[![CodeQL](https://github.com/libbpf/libbpf/workflows/CodeQL/badge.svg?branch=master)](https://github.com/libbpf/libbpf/actions?query=workflow%3ACodeQL+branch%3Amaster)
+[![OSS-Fuzz Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/libbpf.svg)](https://oss-fuzz-build-logs.storage.googleapis.com/index.html#libbpf)
+[![Read the Docs](https://readthedocs.org/projects/libbpf/badge/?version=latest)](https://libbpf.readthedocs.io/en/latest/)
+======
+
+**This is the official home of the libbpf library.**
+
+*Please use this Github repository for building and packaging libbpf
+and when using it in your projects through Git submodule.*
+
+Libbpf *authoritative source code* is developed as part of [bpf-next Linux source
+tree](https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf-next) under
+`tools/lib/bpf` subdirectory and is periodically synced to Github. As such, all the
+libbpf changes should be sent to [BPF mailing list](http://vger.kernel.org/vger-lists.html#bpf),
+please don't open PRs here unless you are changing Github-specific parts of libbpf
+(e.g., Github-specific Makefile).
+
+Libbpf and general BPF usage questions
+======================================
+
+Libbpf documentation can be found [here](https://libbpf.readthedocs.io/en/latest/api.html).
+It's an ongoing effort and has ways to go, but please take a look and consider contributing as well.
+
 Please check out [libbpf-bootstrap](https://github.com/libbpf/libbpf-bootstrap)
 and [the companion blog post](https://nakryiko.com/posts/libbpf-bootstrap/) for
@@ -36,12 +52,8 @@ to help you with whatever issue you have. This repository's PRs and issues
 should be opened only for dealing with issues pertaining to specific way this
 libbpf mirror repo is set up and organized.
 
-Build
-[![Github Actions Builds & Tests](https://github.com/libbpf/libbpf/actions/workflows/test.yml/badge.svg)](https://github.com/libbpf/libbpf/actions/workflows/test.yml)
-[![Total alerts](https://img.shields.io/lgtm/alerts/g/libbpf/libbpf.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/libbpf/libbpf/alerts/)
-[![Coverity](https://img.shields.io/coverity/scan/18195.svg)](https://scan.coverity.com/projects/libbpf)
-[![OSS-Fuzz Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/libbpf.svg)](https://oss-fuzz-build-logs.storage.googleapis.com/index.html#libbpf)
-=====
+Building libbpf
+===============
 libelf is an internal dependency of libbpf and thus it is required to link
 against and must be installed on the system for applications to work.
 pkg-config is used by default to find libelf, and the program called can be
@@ -73,34 +85,6 @@ $ cd src
 $ PKG_CONFIG_PATH=/build/root/lib64/pkgconfig DESTDIR=/build/root make install
 ```
 
-Distributions
-=============
-
-Distributions packaging libbpf from this mirror:
-- [Fedora](https://src.fedoraproject.org/rpms/libbpf)
-- [Gentoo](https://packages.gentoo.org/packages/dev-libs/libbpf)
-- [Debian](https://packages.debian.org/source/sid/libbpf)
-- [Arch](https://www.archlinux.org/packages/extra/x86_64/libbpf/)
-- [Ubuntu](https://packages.ubuntu.com/source/impish/libbpf)
-- [Alpine](https://pkgs.alpinelinux.org/packages?name=libbpf)
-
-Benefits of packaging from the mirror over packaging from kernel sources:
-- Consistent versioning across distributions.
-- No ties to any specific kernel, transparent handling of older kernels.
-  Libbpf is designed to be kernel-agnostic and work across multitude of
-  kernel versions. It has built-in mechanisms to gracefully handle older
-  kernels, that are missing some of the features, by working around or
-  gracefully degrading functionality. Thus libbpf is not tied to a specific
-  kernel version and can/should be packaged and versioned independently.
-- Continuous integration testing via
-  [TravisCI](https://travis-ci.org/libbpf/libbpf).
-- Static code analysis via [LGTM](https://lgtm.com/projects/g/libbpf/libbpf)
-  and [Coverity](https://scan.coverity.com/projects/libbpf).
-
-Package dependencies of libbpf, package names may vary across distros:
-- zlib
-- libelf
-
 BPF CO-RE (Compile Once – Run Everywhere)
 =========================================
 
@@ -154,6 +138,48 @@ use it:
 converting some more to both contribute to the BPF community and gain some
 more experience with it.
 
+Distributions
+=============
+
+Distributions packaging libbpf from this mirror:
+- [Fedora](https://src.fedoraproject.org/rpms/libbpf)
+- [Gentoo](https://packages.gentoo.org/packages/dev-libs/libbpf)
+- [Debian](https://packages.debian.org/source/sid/libbpf)
+- [Arch](https://archlinux.org/packages/core/x86_64/libbpf/)
+- [Ubuntu](https://packages.ubuntu.com/source/impish/libbpf)
+- [Alpine](https://pkgs.alpinelinux.org/packages?name=libbpf)
+
+Benefits of packaging from the mirror over packaging from kernel sources:
+- Consistent versioning across distributions.
+- No ties to any specific kernel, transparent handling of older kernels.
+  Libbpf is designed to be kernel-agnostic and work across multitude of
+  kernel versions. It has built-in mechanisms to gracefully handle older
+  kernels, that are missing some of the features, by working around or
+  gracefully degrading functionality. Thus libbpf is not tied to a specific
+  kernel version and can/should be packaged and versioned independently.
+- Continuous integration testing via
+  [GitHub Actions](https://github.com/libbpf/libbpf/actions).
+- Static code analysis via [LGTM](https://lgtm.com/projects/g/libbpf/libbpf)
+  and [Coverity](https://scan.coverity.com/projects/libbpf).
+
+Package dependencies of libbpf, package names may vary across distros:
+- zlib
+- libelf
+
+[![libbpf distro packaging status](https://repology.org/badge/vertical-allrepos/libbpf.svg)](https://repology.org/project/libbpf/versions)
+
+
+bpf-next to Github sync
+=======================
+
+All the gory details of syncing can be found in `scripts/sync-kernel.sh`
+script.
+
+Some header files in this repo (`include/linux/*.h`) are reduced versions of
+their counterpart files at
+[bpf-next](https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/)'s
+`tools/include/linux/*.h` to make compilation successful.
+
 License
 =======
 
BIN  assets/libbpf-logo-compact-darkbg.png   (new file; after: 262 KiB)
BIN  assets/libbpf-logo-compact-mono.png     (new file; after: 128 KiB)
BIN  assets/libbpf-logo-compact.png          (new file; after: 116 KiB)
BIN  assets/libbpf-logo-sideways-darkbg.png  (new file; after: 284 KiB)
BIN  assets/libbpf-logo-sideways-mono.png    (new file; after: 142 KiB)
BIN  assets/libbpf-logo-sideways.png         (new file; after: 140 KiB)
BIN  assets/libbpf-logo-sparse-darkbg.png    (new file; after: 352 KiB)
BIN  assets/libbpf-logo-sparse-mono.png      (new file; after: 206 KiB)
BIN  assets/libbpf-logo-sparse.png           (new file; after: 236 KiB)
travis-ci/managers/debian.sh → ci/managers/debian.sh (36 changes; Executable file → Normal file)
@@ -6,8 +6,9 @@ CONT_NAME="${CONT_NAME:-libbpf-debian-$DEBIAN_RELEASE}"
 ENV_VARS="${ENV_VARS:-}"
 DOCKER_RUN="${DOCKER_RUN:-docker run}"
 REPO_ROOT="${REPO_ROOT:-$PWD}"
-ADDITIONAL_DEPS=(clang pkg-config gcc-10)
-CFLAGS="-g -O2 -Werror -Wall"
+ADDITIONAL_DEPS=(pkgconf)
+EXTRA_CFLAGS=""
+EXTRA_LDFLAGS=""
 
 function info() {
     echo -e "\033[33;1m$1\033[0m"
@@ -42,30 +43,35 @@ for phase in "${PHASES[@]}"; do
         docker_exec bash -c "echo deb-src http://deb.debian.org/debian $DEBIAN_RELEASE main >>/etc/apt/sources.list"
         docker_exec apt-get -y update
         docker_exec apt-get -y install aptitude
-        docker_exec aptitude -y build-dep libelf-dev
-        docker_exec aptitude -y install libelf-dev
+        docker_exec aptitude -y install make libz-dev libelf-dev
         docker_exec aptitude -y install "${ADDITIONAL_DEPS[@]}"
         echo -e "::endgroup::"
         ;;
-    RUN|RUN_CLANG|RUN_GCC10|RUN_ASAN|RUN_CLANG_ASAN|RUN_GCC10_ASAN)
+    RUN|RUN_CLANG|RUN_CLANG14|RUN_CLANG15|RUN_CLANG16|RUN_GCC10|RUN_GCC11|RUN_GCC12|RUN_ASAN|RUN_CLANG_ASAN|RUN_GCC10_ASAN)
         CC="cc"
-        if [[ "$phase" = *"CLANG"* ]]; then
+        if [[ "$phase" =~ "RUN_CLANG(\d+)(_ASAN)?" ]]; then
+            ENV_VARS="-e CC=clang-${BASH_REMATCH[1]} -e CXX=clang++-${BASH_REMATCH[1]}"
+            CC="clang-${BASH_REMATCH[1]}"
+        elif [[ "$phase" = *"CLANG"* ]]; then
             ENV_VARS="-e CC=clang -e CXX=clang++"
            CC="clang"
-        elif [[ "$phase" = *"GCC10"* ]]; then
-            ENV_VARS="-e CC=gcc-10 -e CXX=g++-10"
-            CC="gcc-10"
-            CFLAGS="${CFLAGS} -Wno-stringop-truncation"
-        else
-            CFLAGS="${CFLAGS} -Wno-stringop-truncation"
+        elif [[ "$phase" =~ "RUN_GCC(\d+)(_ASAN)?" ]]; then
+            ENV_VARS="-e CC=gcc-${BASH_REMATCH[1]} -e CXX=g++-${BASH_REMATCH[1]}"
+            CC="gcc-${BASH_REMATCH[1]}"
         fi
         if [[ "$phase" = *"ASAN"* ]]; then
-            CFLAGS="${CFLAGS} -fsanitize=address,undefined"
+            EXTRA_CFLAGS="${EXTRA_CFLAGS} -fsanitize=address,undefined"
+            EXTRA_LDFLAGS="${EXTRA_LDFLAGS} -fsanitize=address,undefined"
+        fi
+        if [[ "$CC" != "cc" ]]; then
+            docker_exec aptitude -y install "$CC"
+        else
+            docker_exec aptitude -y install gcc
         fi
         docker_exec mkdir build install
         docker_exec ${CC} --version
         info "build"
-        docker_exec make -j$((4*$(nproc))) CFLAGS="${CFLAGS}" -C ./src -B OBJDIR=../build
+        docker_exec make -j$((4*$(nproc))) EXTRA_CFLAGS="${EXTRA_CFLAGS}" EXTRA_LDFLAGS="${EXTRA_LDFLAGS}" -C ./src -B OBJDIR=../build
         info "ldd build/libbpf.so:"
         docker_exec ldd build/libbpf.so
         if ! docker_exec ldd build/libbpf.so | grep -q libelf; then
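One detail of the hunk above worth calling out: bash's `=~` operator treats a quoted right-hand side as a literal string, and POSIX ERE has no `\d` class, so for the `RUN_CLANG<N>`/`RUN_GCC<N>` dispatch to actually capture the version number the pattern has to be unquoted and use `[0-9]+`. A standalone sketch (the `pick_compiler` helper is hypothetical, for illustration only):

```shell
#!/bin/bash
# Sketch of the compiler-dispatch logic from the RUN_* phases above.
# The regexes are unquoted and use [0-9]+ so BASH_REMATCH captures work.
pick_compiler() {
    local phase="$1" CC="cc"
    if [[ "$phase" =~ RUN_CLANG([0-9]+)(_ASAN)? ]]; then
        CC="clang-${BASH_REMATCH[1]}"     # versioned clang, e.g. clang-16
    elif [[ "$phase" = *"CLANG"* ]]; then
        CC="clang"                        # unversioned RUN_CLANG / RUN_CLANG_ASAN
    elif [[ "$phase" =~ RUN_GCC([0-9]+)(_ASAN)? ]]; then
        CC="gcc-${BASH_REMATCH[1]}"       # versioned gcc, e.g. gcc-12
    fi
    echo "$CC"
}

pick_compiler RUN_CLANG16      # clang-16
pick_compiler RUN_GCC12_ASAN   # gcc-12
pick_compiler RUN_ASAN         # cc
```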
@@ -75,7 +81,7 @@ for phase in "${PHASES[@]}"; do
         info "install"
         docker_exec make -j$((4*$(nproc))) -C src OBJDIR=../build DESTDIR=../install install
         info "link binary"
-        docker_exec bash -c "CFLAGS=\"${CFLAGS}\" ./travis-ci/managers/test_compile.sh"
+        docker_exec bash -c "EXTRA_CFLAGS=\"${EXTRA_CFLAGS}\" EXTRA_LDFLAGS=\"${EXTRA_LDFLAGS}\" ./ci/managers/test_compile.sh"
         ;;
     CLEANUP)
         info "Cleanup phase"
ci/managers/test_compile.sh (new file, 15 lines)
@@ -0,0 +1,15 @@
+#!/bin/bash
+set -euox pipefail
+
+EXTRA_CFLAGS=${EXTRA_CFLAGS:-}
+EXTRA_LDFLAGS=${EXTRA_LDFLAGS:-}
+
+cat << EOF > main.c
+#include <bpf/libbpf.h>
+int main() {
+    return bpf_object__open(0) < 0;
+}
+EOF
+
+# static linking
+${CC:-cc} ${EXTRA_CFLAGS} ${EXTRA_LDFLAGS} -o main -I./include/uapi -I./install/usr/include main.c ./build/libbpf.a -lelf -lz
travis-ci/managers/ubuntu.sh → ci/managers/ubuntu.sh (7 changes; Executable file → Normal file)
@@ -10,14 +10,15 @@ source "$(dirname $0)/travis_wait.bash"
 
 cd $REPO_ROOT
 
-CFLAGS="-g -O2 -Werror -Wall -fsanitize=address,undefined -Wno-stringop-truncation"
+EXTRA_CFLAGS="-Werror -Wall -fsanitize=address,undefined"
+EXTRA_LDFLAGS="-Werror -Wall -fsanitize=address,undefined"
 mkdir build install
 cc --version
-make -j$((4*$(nproc))) CFLAGS="${CFLAGS}" -C ./src -B OBJDIR=../build
+make -j$((4*$(nproc))) EXTRA_CFLAGS="${EXTRA_CFLAGS}" EXTRA_LDFLAGS="${EXTRA_LDFLAGS}" -C ./src -B OBJDIR=../build
 ldd build/libbpf.so
 if ! ldd build/libbpf.so | grep -q libelf; then
     echo "FAIL: No reference to libelf.so in libbpf.so!"
     exit 1
 fi
 make -j$((4*$(nproc))) -C src OBJDIR=../build DESTDIR=../install install
-CFLAGS=${CFLAGS} $(dirname $0)/test_compile.sh
+EXTRA_CFLAGS=${EXTRA_CFLAGS} EXTRA_LDFLAGS=${EXTRA_LDFLAGS} $(dirname $0)/test_compile.sh
@@ -1,4 +1,4 @@
-attach_probe
+# attach_probe
 autoload
 bpf_verif_scale
 cgroup_attach_autodetach
@@ -10,7 +10,6 @@ core_reloc
 core_retro
 cpu_mask
 endian
-fexit_stress
 get_branch_snapshot
 get_stackid_cannot_attach
 global_data
@@ -43,13 +42,13 @@ spinlock
 stacktrace_map
 stacktrace_map_raw_tp
 static_linked
-subprogs
 task_fd_query_rawtp
 task_fd_query_tp
 tc_bpf
 tcp_estats
 tcp_rtt
 tp_attach_query
+usdt/urand_pid_attach
 xdp
 xdp_info
 xdp_noinline
@@ -1,5 +1,5 @@
 # This file is not used and is there for historic purposes only.
-# See WHITELIST-5.5.0 instead.
+# See ALLOWLIST-5.5.0 instead.
 
 # PERMANENTLY DISABLED
 align # verifier output format changed
@@ -71,6 +71,7 @@ sk_lookup # v5.9+
 sk_storage_tracing # missing bpf_sk_storage_get() helper
 skb_ctx # ctx_{size, }_{in, out} in BPF_PROG_TEST_RUN is missing
 skb_helpers # helpers added in 5.8+
+skeleton # creates too big ARRAY map
 snprintf # v5.13+
 snprintf_btf # v5.10+
 sock_fields # v5.10+
ci/vmtest/configs/DENYLIST-latest (new empty file)

ci/vmtest/configs/DENYLIST-latest.s390x (new file, 3 lines)
@@ -0,0 +1,3 @@
+# TEMPORARY
+usdt/basic # failing verifier due to bounds check after LLVM update
+usdt/multispec # same as above
travis-ci/vmtest/helpers.sh → ci/vmtest/helpers.sh (22 changes; Executable file → Normal file)
@@ -1,26 +1,20 @@
+# shellcheck shell=bash
+
 # $1 - start or end
 # $2 - fold identifier, no spaces
 # $3 - fold section description
-travis_fold() {
+foldable() {
 	local YELLOW='\033[1;33m'
 	local NOCOLOR='\033[0m'
-	if [ -z ${GITHUB_WORKFLOW+x} ]; then
-		echo travis_fold:$1:$2
-		if [ ! -z "${3:-}" ]; then
-			echo -e "${YELLOW}$3${NOCOLOR}"
-		fi
-		echo
-	else
-		if [ $1 = "start" ]; then
-			line="::group::$2"
-			if [ ! -z "${3:-}" ]; then
-				line="$line - ${YELLOW}$3${NOCOLOR}"
-			fi
-		else
-			line="::endgroup::"
-		fi
-		echo -e "$line"
+	if [ $1 = "start" ]; then
+		line="::group::$2"
+		if [ ! -z "${3:-}" ]; then
+			line="$line - ${YELLOW}$3${NOCOLOR}"
+		fi
+	else
+		line="::endgroup::"
 	fi
+	echo -e "$line"
 }
 
 __print() {
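With the Travis fallback dropped, the renamed `foldable()` helper only ever emits GitHub Actions log-grouping markers. A standalone sketch of the same logic (colors omitted for clarity, so plain `echo` suffices):

```shell
#!/bin/bash
# Minimal restatement of the foldable() helper: "start" opens a collapsible
# log group in GitHub Actions output, anything else closes it.
foldable() {
    if [ "$1" = "start" ]; then
        line="::group::$2"
        if [ ! -z "${3:-}" ]; then
            line="$line - $3"          # optional human-readable description
        fi
    else
        line="::endgroup::"
    fi
    echo "$line"
}

foldable start build "Building libbpf"   # ::group::build - Building libbpf
foldable end build                       # ::endgroup::
```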
ci/vmtest/run_selftests.sh (new file, 87 lines)
@@ -0,0 +1,87 @@
+#!/bin/bash
+
+set -euo pipefail
+
+source $(cd $(dirname $0) && pwd)/helpers.sh
+
+ARCH=$(uname -m)
+
+STATUS_FILE=/exitstatus
+
+read_lists() {
+	(for path in "$@"; do
+		if [[ -s "$path" ]]; then
+			cat "$path"
+		fi;
+	done) | cut -d'#' -f1 | tr -s ' \t\n' ','
+}
+
+test_progs() {
+	if [[ "${KERNEL}" != '4.9.0' ]]; then
+		foldable start test_progs "Testing test_progs"
+		# "&& true" does not change the return code (it is not executed
+		# if the Python script fails), but it prevents exiting on a
+		# failure due to the "set -e".
+		./test_progs ${DENYLIST:+-d$DENYLIST} ${ALLOWLIST:+-a$ALLOWLIST} && true
+		echo "test_progs:$?" >> "${STATUS_FILE}"
+		foldable end test_progs
+	fi
+}
+
+test_progs_no_alu32() {
+	foldable start test_progs-no_alu32 "Testing test_progs-no_alu32"
+	./test_progs-no_alu32 ${DENYLIST:+-d$DENYLIST} ${ALLOWLIST:+-a$ALLOWLIST} && true
+	echo "test_progs-no_alu32:$?" >> "${STATUS_FILE}"
+	foldable end test_progs-no_alu32
+}
+
+test_maps() {
+	if [[ "${KERNEL}" == 'latest' ]]; then
+		foldable start test_maps "Testing test_maps"
+		./test_maps && true
+		echo "test_maps:$?" >> "${STATUS_FILE}"
+		foldable end test_maps
+	fi
+}
+
+test_verifier() {
+	if [[ "${KERNEL}" == 'latest' ]]; then
+		foldable start test_verifier "Testing test_verifier"
+		./test_verifier && true
+		echo "test_verifier:$?" >> "${STATUS_FILE}"
+		foldable end test_verifier
+	fi
+}
+
+foldable end vm_init
+
+configs_path=/${PROJECT_NAME}/selftests/bpf
+local_configs_path=${PROJECT_NAME}/vmtest/configs
+DENYLIST=$(read_lists \
+	"$configs_path/DENYLIST" \
+	"$configs_path/DENYLIST.${ARCH}" \
+	"$local_configs_path/DENYLIST-${KERNEL}" \
+	"$local_configs_path/DENYLIST-${KERNEL}.${ARCH}" \
+)
+ALLOWLIST=$(read_lists \
+	"$configs_path/ALLOWLIST" \
+	"$configs_path/ALLOWLIST.${ARCH}" \
+	"$local_configs_path/ALLOWLIST-${KERNEL}" \
+	"$local_configs_path/ALLOWLIST-${KERNEL}.${ARCH}" \
+)
+
+echo "DENYLIST: ${DENYLIST}"
+echo "ALLOWLIST: ${ALLOWLIST}"
+
+cd ${PROJECT_NAME}/selftests/bpf
+
+if [ $# -eq 0 ]; then
+	test_progs
+	test_progs_no_alu32
+	# test_maps
+	test_verifier
+else
+	for test_name in "$@"; do
+		"${test_name}"
+	done
+fi
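The `read_lists()` helper in the script above concatenates whichever deny/allow list files exist, strips `#` comments, and joins the entries with commas, which is the format `test_progs -d`/`-a` expects. A self-contained demonstration (the file names and entries here are made up for the demo, created under a temporary directory):

```shell
#!/bin/bash
# Standalone demo of the read_lists() list-merging approach: missing files
# are silently skipped, comments are stripped, entries are comma-joined.
set -euo pipefail

read_lists() {
    (for path in "$@"; do
        if [[ -s "$path" ]]; then
            cat "$path"
        fi;
    done) | cut -d'#' -f1 | tr -s ' \t\n' ','
}

dir=$(mktemp -d)
printf 'attach_probe # flaky on old kernels\nautoload\n' > "$dir/DENYLIST"
printf 'xdp\n' > "$dir/DENYLIST.x86_64"
read_lists "$dir/DENYLIST" "$dir/DENYLIST.x86_64" "$dir/missing"
# prints: attach_probe,autoload,xdp,
rm -r "$dir"
```

Note the trailing comma: `tr` folds the final newline into a `,` too, which `test_progs` tolerates.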
@@ -1,19 +1,21 @@
 .. SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
 
+.. _libbpf:
+
 libbpf
 ======
 
 .. toctree::
    :maxdepth: 1
 
+   API Documentation <https://libbpf.readthedocs.io/en/latest/api.html>
+   program_types
    libbpf_naming_convention
    libbpf_build
 
 This is documentation for libbpf, a userspace library for loading and
 interacting with bpf programs.
 
-For API documentation see the `versioned API documentation site <https://libbpf.readthedocs.io/en/latest/api.html>`_.
-
 All general BPF questions, including kernel functionality, libbpf APIs and
 their application, should be sent to bpf@vger.kernel.org mailing list.
 You can `subscribe <http://vger.kernel.org/vger-lists.html#bpf>`_ to the
@@ -9,8 +9,8 @@ described here. It's recommended to follow these conventions whenever a
 new function or type is added to keep libbpf API clean and consistent.
 
 All types and functions provided by libbpf API should have one of the
-following prefixes: ``bpf_``, ``btf_``, ``libbpf_``, ``xsk_``,
-``btf_dump_``, ``ring_buffer_``, ``perf_buffer_``.
+following prefixes: ``bpf_``, ``btf_``, ``libbpf_``, ``btf_dump_``,
+``ring_buffer_``, ``perf_buffer_``.
 
 System call wrappers
 --------------------
@@ -59,15 +59,6 @@ Auxiliary functions and types that don't fit well in any of categories
 described above should have ``libbpf_`` prefix, e.g.
 ``libbpf_get_error`` or ``libbpf_prog_type_by_name``.
 
-AF_XDP functions
--------------------
-
-AF_XDP functions should have an ``xsk_`` prefix, e.g.
-``xsk_umem__get_data`` or ``xsk_umem__create``. The interface consists
-of both low-level ring access functions and high-level configuration
-functions. These can be mixed and matched. Note that these functions
-are not reentrant for performance reasons.
-
 ABI
 ---
 
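The documentation change above mirrors the source change: `xsk_` drops out of the prefix list together with `src/xsk.c`/`src/xsk.h`. A hypothetical checker (not part of libbpf, purely illustrative) for the updated prefix rule:

```shell
#!/bin/bash
# Hypothetical helper illustrating the documented libbpf 1.1 prefix rule;
# xsk_ is intentionally absent now that the AF_XDP code is removed.
has_valid_prefix() {
    case "$1" in
        bpf_*|btf_dump_*|btf_*|libbpf_*|ring_buffer_*|perf_buffer_*) return 0 ;;
        *) return 1 ;;
    esac
}

has_valid_prefix bpf_object__open  && echo "bpf_object__open: ok"
has_valid_prefix ring_buffer__poll && echo "ring_buffer__poll: ok"
has_valid_prefix xsk_umem__create  || echo "xsk_umem__create: no longer a libbpf prefix"
```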
docs/program_types.rst (new file, 203 lines)
@@ -0,0 +1,203 @@
+.. SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+
+.. _program_types_and_elf:
+
+Program Types and ELF Sections
+==============================
+
+The table below lists the program types, their attach types where relevant and the ELF section
+names supported by libbpf for them. The ELF section names follow these rules:
+
+- ``type`` is an exact match, e.g. ``SEC("socket")``
+- ``type+`` means it can be either exact ``SEC("type")`` or well-formed ``SEC("type/extras")``
+  with a '``/``' separator between ``type`` and ``extras``.
+
+When ``extras`` are specified, they provide details of how to auto-attach the BPF program. The
+format of ``extras`` depends on the program type, e.g. ``SEC("tracepoint/<category>/<name>")``
+for tracepoints or ``SEC("usdt/<path>:<provider>:<name>")`` for USDT probes. The extras are
+described in more detail in the footnotes.
+
+
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
+| Program Type                              | Attach Type                            | ELF Section Name                 | Sleepable |
++===========================================+========================================+==================================+===========+
+| ``BPF_PROG_TYPE_CGROUP_DEVICE``           | ``BPF_CGROUP_DEVICE``                  | ``cgroup/dev``                   |           |
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
+| ``BPF_PROG_TYPE_CGROUP_SKB``              |                                        | ``cgroup/skb``                   |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET_EGRESS``             | ``cgroup_skb/egress``            |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET_INGRESS``            | ``cgroup_skb/ingress``           |           |
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
+| ``BPF_PROG_TYPE_CGROUP_SOCKOPT``          | ``BPF_CGROUP_GETSOCKOPT``              | ``cgroup/getsockopt``            |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_SETSOCKOPT``              | ``cgroup/setsockopt``            |           |
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
+| ``BPF_PROG_TYPE_CGROUP_SOCK_ADDR``        | ``BPF_CGROUP_INET4_BIND``              | ``cgroup/bind4``                 |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET4_CONNECT``           | ``cgroup/connect4``              |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET4_GETPEERNAME``       | ``cgroup/getpeername4``          |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET4_GETSOCKNAME``       | ``cgroup/getsockname4``          |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET6_BIND``              | ``cgroup/bind6``                 |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET6_CONNECT``           | ``cgroup/connect6``              |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET6_GETPEERNAME``       | ``cgroup/getpeername6``          |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET6_GETSOCKNAME``       | ``cgroup/getsockname6``          |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_UDP4_RECVMSG``            | ``cgroup/recvmsg4``              |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_UDP4_SENDMSG``            | ``cgroup/sendmsg4``              |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_UDP6_RECVMSG``            | ``cgroup/recvmsg6``              |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_UDP6_SENDMSG``            | ``cgroup/sendmsg6``              |           |
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
+| ``BPF_PROG_TYPE_CGROUP_SOCK``             | ``BPF_CGROUP_INET4_POST_BIND``         | ``cgroup/post_bind4``            |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET6_POST_BIND``         | ``cgroup/post_bind6``            |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET_SOCK_CREATE``        | ``cgroup/sock_create``           |           |
++                                           +                                        +----------------------------------+-----------+
+|                                           |                                        | ``cgroup/sock``                  |           |
++                                           +----------------------------------------+----------------------------------+-----------+
+|                                           | ``BPF_CGROUP_INET_SOCK_RELEASE``       | ``cgroup/sock_release``          |           |
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
+| ``BPF_PROG_TYPE_CGROUP_SYSCTL``           | ``BPF_CGROUP_SYSCTL``                  | ``cgroup/sysctl``                |           |
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
+| ``BPF_PROG_TYPE_EXT``                     |                                        | ``freplace+`` [#fentry]_         |           |
++-------------------------------------------+----------------------------------------+----------------------------------+-----------+
||||||
|
| ``BPF_PROG_TYPE_FLOW_DISSECTOR`` | ``BPF_FLOW_DISSECTOR`` | ``flow_dissector`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_KPROBE`` | | ``kprobe+`` [#kprobe]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``kretprobe+`` [#kprobe]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``ksyscall+`` [#ksyscall]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``kretsyscall+`` [#ksyscall]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``uprobe+`` [#uprobe]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``uprobe.s+`` [#uprobe]_ | Yes |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``uretprobe+`` [#uprobe]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``uretprobe.s+`` [#uprobe]_ | Yes |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``usdt+`` [#usdt]_ | |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_TRACE_KPROBE_MULTI`` | ``kprobe.multi+`` [#kpmulti]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``kretprobe.multi+`` [#kpmulti]_ | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_LIRC_MODE2`` | ``BPF_LIRC_MODE2`` | ``lirc_mode2`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_LSM`` | ``BPF_LSM_CGROUP`` | ``lsm_cgroup+`` | |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_LSM_MAC`` | ``lsm+`` [#lsm]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``lsm.s+`` [#lsm]_ | Yes |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_LWT_IN`` | | ``lwt_in`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_LWT_OUT`` | | ``lwt_out`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_LWT_SEG6LOCAL`` | | ``lwt_seg6local`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_LWT_XMIT`` | | ``lwt_xmit`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_PERF_EVENT`` | | ``perf_event`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE`` | | ``raw_tp.w+`` [#rawtp]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``raw_tracepoint.w+`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_RAW_TRACEPOINT`` | | ``raw_tp+`` [#rawtp]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``raw_tracepoint+`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SCHED_ACT`` | | ``action`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SCHED_CLS`` | | ``classifier`` | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``tc`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SK_LOOKUP`` | ``BPF_SK_LOOKUP`` | ``sk_lookup`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SK_MSG`` | ``BPF_SK_MSG_VERDICT`` | ``sk_msg`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SK_REUSEPORT`` | ``BPF_SK_REUSEPORT_SELECT_OR_MIGRATE`` | ``sk_reuseport/migrate`` | |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_SK_REUSEPORT_SELECT`` | ``sk_reuseport`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SK_SKB`` | | ``sk_skb`` | |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_SK_SKB_STREAM_PARSER`` | ``sk_skb/stream_parser`` | |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_SK_SKB_STREAM_VERDICT`` | ``sk_skb/stream_verdict`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SOCKET_FILTER`` | | ``socket`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SOCK_OPS`` | ``BPF_CGROUP_SOCK_OPS`` | ``sockops`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_STRUCT_OPS`` | | ``struct_ops+`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_SYSCALL`` | | ``syscall`` | Yes |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_TRACEPOINT`` | | ``tp+`` [#tp]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``tracepoint+`` [#tp]_ | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_TRACING`` | ``BPF_MODIFY_RETURN`` | ``fmod_ret+`` [#fentry]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``fmod_ret.s+`` [#fentry]_ | Yes |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_TRACE_FENTRY`` | ``fentry+`` [#fentry]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``fentry.s+`` [#fentry]_ | Yes |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_TRACE_FEXIT`` | ``fexit+`` [#fentry]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``fexit.s+`` [#fentry]_ | Yes |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_TRACE_ITER`` | ``iter+`` [#iter]_ | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``iter.s+`` [#iter]_ | Yes |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_TRACE_RAW_TP`` | ``tp_btf+`` [#fentry]_ | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
| ``BPF_PROG_TYPE_XDP`` | ``BPF_XDP_CPUMAP`` | ``xdp.frags/cpumap`` | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``xdp/cpumap`` | |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_XDP_DEVMAP`` | ``xdp.frags/devmap`` | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``xdp/devmap`` | |
|
||||||
|
+ +----------------------------------------+----------------------------------+-----------+
|
||||||
|
| | ``BPF_XDP`` | ``xdp.frags`` | |
|
||||||
|
+ + +----------------------------------+-----------+
|
||||||
|
| | | ``xdp`` | |
|
||||||
|
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
|
||||||
|
|
||||||
|
|
||||||
|
.. rubric:: Footnotes
|
||||||
|
|
||||||
|
.. [#fentry] The ``fentry`` attach format is ``fentry[.s]/<function>``.
|
||||||
|
.. [#kprobe] The ``kprobe`` attach format is ``kprobe/<function>[+<offset>]``. Valid
|
||||||
|
characters for ``function`` are ``a-zA-Z0-9_.`` and ``offset`` must be a valid
|
||||||
|
non-negative integer.
|
||||||
|
.. [#ksyscall] The ``ksyscall`` attach format is ``ksyscall/<syscall>``.
|
||||||
|
.. [#uprobe] The ``uprobe`` attach format is ``uprobe[.s]/<path>:<function>[+<offset>]``.
|
||||||
|
.. [#usdt] The ``usdt`` attach format is ``usdt/<path>:<provider>:<name>``.
|
||||||
|
.. [#kpmulti] The ``kprobe.multi`` attach format is ``kprobe.multi/<pattern>`` where ``pattern``
|
||||||
|
supports ``*`` and ``?`` wildcards. Valid characters for pattern are
|
||||||
|
``a-zA-Z0-9_.*?``.
|
||||||
|
.. [#lsm] The ``lsm`` attachment format is ``lsm[.s]/<hook>``.
|
||||||
|
.. [#rawtp] The ``raw_tp`` attach format is ``raw_tracepoint[.w]/<tracepoint>``.
|
||||||
|
.. [#tp] The ``tracepoint`` attach format is ``tracepoint/<category>/<name>``.
|
||||||
|
.. [#iter] The ``iter`` attach format is ``iter[.s]/<struct-name>``.
|
@@ -33,13 +33,13 @@ struct btf_type {
 	/* "info" bits arrangement
 	 * bits  0-15: vlen (e.g. # of struct's members)
 	 * bits 16-23: unused
-	 * bits 24-27: kind (e.g. int, ptr, array...etc)
-	 * bits 28-30: unused
+	 * bits 24-28: kind (e.g. int, ptr, array...etc)
+	 * bits 29-30: unused
 	 * bit  31:    kind_flag, currently used by
-	 *             struct, union and fwd
+	 *             struct, union, enum, fwd and enum64
 	 */
 	__u32 info;
-	/* "size" is used by INT, ENUM, STRUCT, UNION and DATASEC.
+	/* "size" is used by INT, ENUM, STRUCT, UNION, DATASEC and ENUM64.
 	 * "size" tells the size of the type it is describing.
 	 *
 	 * "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT,
@@ -63,7 +63,7 @@ enum {
 	BTF_KIND_ARRAY		= 3,	/* Array	*/
 	BTF_KIND_STRUCT		= 4,	/* Struct	*/
 	BTF_KIND_UNION		= 5,	/* Union	*/
-	BTF_KIND_ENUM		= 6,	/* Enumeration */
+	BTF_KIND_ENUM		= 6,	/* Enumeration up to 32-bit values */
 	BTF_KIND_FWD		= 7,	/* Forward	*/
 	BTF_KIND_TYPEDEF	= 8,	/* Typedef	*/
 	BTF_KIND_VOLATILE	= 9,	/* Volatile	*/
@@ -76,6 +76,7 @@ enum {
 	BTF_KIND_FLOAT		= 16,	/* Floating point */
 	BTF_KIND_DECL_TAG	= 17,	/* Decl Tag */
 	BTF_KIND_TYPE_TAG	= 18,	/* Type Tag */
+	BTF_KIND_ENUM64		= 19,	/* Enumeration up to 64-bit values */
 
 	NR_BTF_KINDS,
 	BTF_KIND_MAX		= NR_BTF_KINDS - 1,
@@ -186,4 +187,14 @@ struct btf_decl_tag {
 	__s32   component_idx;
 };
 
+/* BTF_KIND_ENUM64 is followed by multiple "struct btf_enum64".
+ * The exact number of btf_enum64 is stored in the vlen (of the
+ * info in "struct btf_type").
+ */
+struct btf_enum64 {
+	__u32	name_off;
+	__u32	val_lo32;
+	__u32	val_hi32;
+};
+
 #endif /* _UAPI__LINUX_BTF_H__ */
include/uapi/linux/fcntl.h (new file, 114 lines)
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_FCNTL_H
+#define _UAPI_LINUX_FCNTL_H
+
+#include <asm/fcntl.h>
+#include <linux/openat2.h>
+
+#define F_SETLEASE	(F_LINUX_SPECIFIC_BASE + 0)
+#define F_GETLEASE	(F_LINUX_SPECIFIC_BASE + 1)
+
+/*
+ * Cancel a blocking posix lock; internal use only until we expose an
+ * asynchronous lock api to userspace:
+ */
+#define F_CANCELLK	(F_LINUX_SPECIFIC_BASE + 5)
+
+/* Create a file descriptor with FD_CLOEXEC set. */
+#define F_DUPFD_CLOEXEC	(F_LINUX_SPECIFIC_BASE + 6)
+
+/*
+ * Request nofications on a directory.
+ * See below for events that may be notified.
+ */
+#define F_NOTIFY	(F_LINUX_SPECIFIC_BASE+2)
+
+/*
+ * Set and get of pipe page size array
+ */
+#define F_SETPIPE_SZ	(F_LINUX_SPECIFIC_BASE + 7)
+#define F_GETPIPE_SZ	(F_LINUX_SPECIFIC_BASE + 8)
+
+/*
+ * Set/Get seals
+ */
+#define F_ADD_SEALS	(F_LINUX_SPECIFIC_BASE + 9)
+#define F_GET_SEALS	(F_LINUX_SPECIFIC_BASE + 10)
+
+/*
+ * Types of seals
+ */
+#define F_SEAL_SEAL		0x0001	/* prevent further seals from being set */
+#define F_SEAL_SHRINK		0x0002	/* prevent file from shrinking */
+#define F_SEAL_GROW		0x0004	/* prevent file from growing */
+#define F_SEAL_WRITE		0x0008	/* prevent writes */
+#define F_SEAL_FUTURE_WRITE	0x0010	/* prevent future writes while mapped */
+/* (1U << 31) is reserved for signed error codes */
+
+/*
+ * Set/Get write life time hints. {GET,SET}_RW_HINT operate on the
+ * underlying inode, while {GET,SET}_FILE_RW_HINT operate only on
+ * the specific file.
+ */
+#define F_GET_RW_HINT		(F_LINUX_SPECIFIC_BASE + 11)
+#define F_SET_RW_HINT		(F_LINUX_SPECIFIC_BASE + 12)
+#define F_GET_FILE_RW_HINT	(F_LINUX_SPECIFIC_BASE + 13)
+#define F_SET_FILE_RW_HINT	(F_LINUX_SPECIFIC_BASE + 14)
+
+/*
+ * Valid hint values for F_{GET,SET}_RW_HINT. 0 is "not set", or can be
+ * used to clear any hints previously set.
+ */
+#define RWH_WRITE_LIFE_NOT_SET	0
+#define RWH_WRITE_LIFE_NONE	1
+#define RWH_WRITE_LIFE_SHORT	2
+#define RWH_WRITE_LIFE_MEDIUM	3
+#define RWH_WRITE_LIFE_LONG	4
+#define RWH_WRITE_LIFE_EXTREME	5
+
+/*
+ * The originally introduced spelling is remained from the first
+ * versions of the patch set that introduced the feature, see commit
+ * v4.13-rc1~212^2~51.
+ */
+#define RWF_WRITE_LIFE_NOT_SET	RWH_WRITE_LIFE_NOT_SET
+
+/*
+ * Types of directory notifications that may be requested.
+ */
+#define DN_ACCESS	0x00000001	/* File accessed */
+#define DN_MODIFY	0x00000002	/* File modified */
+#define DN_CREATE	0x00000004	/* File created */
+#define DN_DELETE	0x00000008	/* File removed */
+#define DN_RENAME	0x00000010	/* File renamed */
+#define DN_ATTRIB	0x00000020	/* File changed attibutes */
+#define DN_MULTISHOT	0x80000000	/* Don't remove notifier */
+
+/*
+ * The constants AT_REMOVEDIR and AT_EACCESS have the same value.  AT_EACCESS is
+ * meaningful only to faccessat, while AT_REMOVEDIR is meaningful only to
+ * unlinkat.  The two functions do completely different things and therefore,
+ * the flags can be allowed to overlap.  For example, passing AT_REMOVEDIR to
+ * faccessat would be undefined behavior and thus treating it equivalent to
+ * AT_EACCESS is valid undefined behavior.
+ */
+#define AT_FDCWD		-100    /* Special value used to indicate
+                                           openat should use the current
+                                           working directory. */
+#define AT_SYMLINK_NOFOLLOW	0x100   /* Do not follow symbolic links.  */
+#define AT_EACCESS		0x200	/* Test access permitted for
+                                           effective IDs, not real IDs.  */
+#define AT_REMOVEDIR		0x200   /* Remove directory instead of
+                                           unlinking file.  */
+#define AT_SYMLINK_FOLLOW	0x400   /* Follow symbolic links.  */
+#define AT_NO_AUTOMOUNT		0x800	/* Suppress terminal automount traversal */
+#define AT_EMPTY_PATH		0x1000	/* Allow empty relative pathname */
+
+#define AT_STATX_SYNC_TYPE	0x6000	/* Type of synchronisation required from statx() */
+#define AT_STATX_SYNC_AS_STAT	0x0000	/* - Do whatever stat() does */
+#define AT_STATX_FORCE_SYNC	0x2000	/* - Force the attributes to be sync'd with the server */
+#define AT_STATX_DONT_SYNC	0x4000	/* - Don't sync attributes with the server */
+
+#define AT_RECURSIVE		0x8000	/* Apply to the entire subtree */
+
+#endif /* _UAPI_LINUX_FCNTL_H */
include/uapi/linux/if_link.h
@@ -348,6 +348,8 @@ enum {
 	IFLA_PARENT_DEV_NAME,
 	IFLA_PARENT_DEV_BUS_NAME,
 	IFLA_GRO_MAX_SIZE,
+	IFLA_TSO_MAX_SIZE,
+	IFLA_TSO_MAX_SEGS,
 
 	__IFLA_MAX
 };
@@ -671,6 +673,7 @@ enum {
 	IFLA_XFRM_UNSPEC,
 	IFLA_XFRM_LINK,
 	IFLA_XFRM_IF_ID,
+	IFLA_XFRM_COLLECT_METADATA,
 	__IFLA_XFRM_MAX
 };
 
@@ -860,6 +863,7 @@ enum {
 	IFLA_BOND_PEER_NOTIF_DELAY,
 	IFLA_BOND_AD_LACP_ACTIVE,
 	IFLA_BOND_MISSED_MAX,
+	IFLA_BOND_NS_IP6_TARGET,
 	__IFLA_BOND_MAX,
 };
 
@@ -887,6 +891,7 @@ enum {
 	IFLA_BOND_SLAVE_AD_AGGREGATOR_ID,
 	IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE,
 	IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE,
+	IFLA_BOND_SLAVE_PRIO,
 	__IFLA_BOND_SLAVE_MAX,
 };
include/uapi/linux/perf_event.h
@@ -164,8 +164,6 @@ enum perf_event_sample_format {
 	PERF_SAMPLE_WEIGHT_STRUCT	= 1U << 24,
 
 	PERF_SAMPLE_MAX = 1U << 25,		/* non-ABI */
-
-	__PERF_SAMPLE_CALLCHAIN_EARLY	= 1ULL << 63, /* non-ABI; internal use */
 };
 
 #define PERF_SAMPLE_WEIGHT_TYPE	(PERF_SAMPLE_WEIGHT | PERF_SAMPLE_WEIGHT_STRUCT)
@@ -204,6 +202,8 @@ enum perf_branch_sample_type_shift {
 
 	PERF_SAMPLE_BRANCH_HW_INDEX_SHIFT	= 17, /* save low level index of raw branch records */
 
+	PERF_SAMPLE_BRANCH_PRIV_SAVE_SHIFT	= 18, /* save privilege mode */
+
 	PERF_SAMPLE_BRANCH_MAX_SHIFT		/* non-ABI */
 };
 
@@ -233,6 +233,8 @@ enum perf_branch_sample_type {
 
 	PERF_SAMPLE_BRANCH_HW_INDEX	= 1U << PERF_SAMPLE_BRANCH_HW_INDEX_SHIFT,
 
+	PERF_SAMPLE_BRANCH_PRIV_SAVE	= 1U << PERF_SAMPLE_BRANCH_PRIV_SAVE_SHIFT,
+
 	PERF_SAMPLE_BRANCH_MAX		= 1U << PERF_SAMPLE_BRANCH_MAX_SHIFT,
 };
 
@@ -251,9 +253,50 @@ enum {
 	PERF_BR_SYSRET		= 8,	/* syscall return */
 	PERF_BR_COND_CALL	= 9,	/* conditional function call */
 	PERF_BR_COND_RET	= 10,	/* conditional function return */
+	PERF_BR_ERET		= 11,	/* exception return */
+	PERF_BR_IRQ		= 12,	/* irq */
+	PERF_BR_SERROR		= 13,	/* system error */
+	PERF_BR_NO_TX		= 14,	/* not in transaction */
+	PERF_BR_EXTEND_ABI	= 15,	/* extend ABI */
 	PERF_BR_MAX,
 };
 
+/*
+ * Common branch speculation outcome classification
+ */
+enum {
+	PERF_BR_SPEC_NA			= 0,	/* Not available */
+	PERF_BR_SPEC_WRONG_PATH		= 1,	/* Speculative but on wrong path */
+	PERF_BR_NON_SPEC_CORRECT_PATH	= 2,	/* Non-speculative but on correct path */
+	PERF_BR_SPEC_CORRECT_PATH	= 3,	/* Speculative and on correct path */
+	PERF_BR_SPEC_MAX,
+};
+
+enum {
+	PERF_BR_NEW_FAULT_ALGN	= 0,	/* Alignment fault */
+	PERF_BR_NEW_FAULT_DATA	= 1,	/* Data fault */
+	PERF_BR_NEW_FAULT_INST	= 2,	/* Inst fault */
+	PERF_BR_NEW_ARCH_1	= 3,	/* Architecture specific */
+	PERF_BR_NEW_ARCH_2	= 4,	/* Architecture specific */
+	PERF_BR_NEW_ARCH_3	= 5,	/* Architecture specific */
+	PERF_BR_NEW_ARCH_4	= 6,	/* Architecture specific */
+	PERF_BR_NEW_ARCH_5	= 7,	/* Architecture specific */
+	PERF_BR_NEW_MAX,
+};
+
+enum {
+	PERF_BR_PRIV_UNKNOWN	= 0,
+	PERF_BR_PRIV_USER	= 1,
+	PERF_BR_PRIV_KERNEL	= 2,
+	PERF_BR_PRIV_HV		= 3,
+};
+
+#define PERF_BR_ARM64_FIQ		PERF_BR_NEW_ARCH_1
+#define PERF_BR_ARM64_DEBUG_HALT	PERF_BR_NEW_ARCH_2
+#define PERF_BR_ARM64_DEBUG_EXIT	PERF_BR_NEW_ARCH_3
+#define PERF_BR_ARM64_DEBUG_INST	PERF_BR_NEW_ARCH_4
+#define PERF_BR_ARM64_DEBUG_DATA	PERF_BR_NEW_ARCH_5
+
 #define PERF_SAMPLE_BRANCH_PLM_ALL \
 	(PERF_SAMPLE_BRANCH_USER|\
 	 PERF_SAMPLE_BRANCH_KERNEL|\
@@ -299,6 +342,7 @@ enum {
  *	  { u64		time_enabled; } && PERF_FORMAT_TOTAL_TIME_ENABLED
  *	  { u64		time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
  *	  { u64		id;           } && PERF_FORMAT_ID
+ *	  { u64		lost;         } && PERF_FORMAT_LOST
  *	} && !PERF_FORMAT_GROUP
  *
  *	{ u64		nr;
@@ -306,6 +350,7 @@ enum {
  *	  { u64		time_running; } && PERF_FORMAT_TOTAL_TIME_RUNNING
  *	  { u64		value;
  *	    { u64	id;           } && PERF_FORMAT_ID
+ *	    { u64	lost;         } && PERF_FORMAT_LOST
  *	  } cntr[nr];
  *	} && PERF_FORMAT_GROUP
  * };
@@ -315,8 +360,9 @@ enum perf_event_read_format {
 	PERF_FORMAT_TOTAL_TIME_RUNNING	= 1U << 1,
 	PERF_FORMAT_ID			= 1U << 2,
 	PERF_FORMAT_GROUP		= 1U << 3,
+	PERF_FORMAT_LOST		= 1U << 4,
 
-	PERF_FORMAT_MAX = 1U << 4,		/* non-ABI */
+	PERF_FORMAT_MAX = 1U << 5,		/* non-ABI */
 };
 
 #define PERF_ATTR_SIZE_VER0	64	/* sizeof first published struct */
@@ -465,6 +511,8 @@ struct perf_event_attr {
 	/*
 	 * User provided data if sigtrap=1, passed back to user via
 	 * siginfo_t::si_perf_data, e.g. to permit user to identify the event.
+	 * Note, siginfo_t::si_perf_data is long-sized, and sig_data will be
+	 * truncated accordingly on 32 bit architectures.
 	 */
 	__u64	sig_data;
 };
@@ -487,7 +535,7 @@ struct perf_event_query_bpf {
 	/*
 	 * User provided buffer to store program ids
 	 */
-	__u32	ids[0];
+	__u32	ids[];
 };
 
@@ -1288,7 +1336,9 @@ union perf_mem_data_src {
 #define PERF_MEM_LVLNUM_L2	0x02 /* L2 */
 #define PERF_MEM_LVLNUM_L3	0x03 /* L3 */
 #define PERF_MEM_LVLNUM_L4	0x04 /* L4 */
-/* 5-0xa available */
+/* 5-0x8 available */
+#define PERF_MEM_LVLNUM_CXL	0x09 /* CXL */
+#define PERF_MEM_LVLNUM_IO	0x0a /* I/O */
 #define PERF_MEM_LVLNUM_ANY_CACHE 0x0b /* Any cache */
 #define PERF_MEM_LVLNUM_LFB	0x0c /* LFB */
 #define PERF_MEM_LVLNUM_RAM	0x0d /* RAM */
@@ -1306,7 +1356,7 @@ union perf_mem_data_src {
 #define PERF_MEM_SNOOP_SHIFT	19
 
 #define PERF_MEM_SNOOPX_FWD	0x01 /* forward */
-/* 1 free */
+#define PERF_MEM_SNOOPX_PEER	0x02 /* xfer from peer */
 #define PERF_MEM_SNOOPX_SHIFT	38
 
 /* locked instruction */
@@ -1356,6 +1406,7 @@ union perf_mem_data_src {
  *     abort: aborting a hardware transaction
  *    cycles: cycles from last branch (or 0 if not supported)
  *      type: branch type
+ *      spec: branch speculation info (or 0 if not supported)
  */
 struct perf_branch_entry {
 	__u64	from;
@@ -1366,7 +1417,10 @@ struct perf_branch_entry {
 		abort:1,    /* transaction abort */
 		cycles:16,  /* cycle count to last branch */
 		type:4,     /* branch type */
-		reserved:40;
+		spec:2,     /* branch speculation info */
+		new_type:4, /* additional branch type */
+		priv:3,     /* privilege level */
+		reserved:31;
 };
 
 union perf_sample_weight {
include/uapi/linux/pkt_cls.h
@@ -180,7 +180,7 @@ struct tc_u32_sel {
 
 	short			hoff;
 	__be32			hmask;
-	struct tc_u32_key	keys[0];
+	struct tc_u32_key	keys[];
 };
 
 struct tc_u32_mark {
@@ -192,7 +192,7 @@ struct tc_u32_mark {
 struct tc_u32_pcnt {
 	__u64 rcnt;
 	__u64 rhit;
-	__u64 kcnts[0];
+	__u64 kcnts[];
 };
 
 /* Flags */
@@ -17,6 +17,24 @@ mkdir -p "$OUT"
 
 export LIB_FUZZING_ENGINE=${LIB_FUZZING_ENGINE:--fsanitize=fuzzer}
 
+# libelf is compiled with _FORTIFY_SOURCE by default and it
+# isn't compatible with MSan. It was borrowed
+# from https://github.com/google/oss-fuzz/pull/7422
+if [[ "$SANITIZER" == memory ]]; then
+    CFLAGS+=" -U_FORTIFY_SOURCE"
+    CXXFLAGS+=" -U_FORTIFY_SOURCE"
+fi
+
+# The alignment check is turned off by default on OSS-Fuzz/CFLite so it should be
+# turned on explicitly there. It was borrowed from
+# https://github.com/google/oss-fuzz/pull/7092
+if [[ "$SANITIZER" == undefined ]]; then
+    additional_ubsan_checks=alignment
+    UBSAN_FLAGS="-fsanitize=$additional_ubsan_checks -fno-sanitize-recover=$additional_ubsan_checks"
+    CFLAGS+=" $UBSAN_FLAGS"
+    CXXFLAGS+=" $UBSAN_FLAGS"
+fi
+
 # Ideally libbelf should be built using release tarballs available
 # at https://sourceware.org/elfutils/ftp/. Unfortunately sometimes they
 # fail to compile (for example, elfutils-0.185 fails to compile with LDFLAGS enabled
@@ -26,7 +44,7 @@ rm -rf elfutils
 git clone git://sourceware.org/git/elfutils.git
 (
 cd elfutils
-git checkout 983e86fd89e8bf02f2d27ba5dce5bf078af4ceda
+git checkout e9f3045caa5c4498f371383e5519151942d48b6d
 git log --oneline -1
 
 # ASan isn't compatible with -Wl,--no-undefined: https://github.com/google/sanitizers/issues/380
@@ -36,6 +54,11 @@ find -name Makefile.am | xargs sed -i 's/,--no-undefined//'
 # https://clang.llvm.org/docs/AddressSanitizer.html#usage
 sed -i 's/^\(ZDEFS_LDFLAGS=\).*/\1/' configure.ac
 
+if [[ "$SANITIZER" == undefined ]]; then
+    # That's basicaly what --enable-sanitize-undefined does to turn off unaligned access
+    # elfutils heavily relies on on i386/x86_64 but without changing compiler flags along the way
|
||||||
|
sed -i 's/\(check_undefined_val\)=[0-9]/\1=1/' configure.ac
|
||||||
|
fi
|
||||||
|
|
||||||
autoreconf -i -f
|
autoreconf -i -f
|
||||||
if ! ./configure --enable-maintainer-mode --disable-debuginfod --disable-libdebuginfod \
|
if ! ./configure --enable-maintainer-mode --disable-debuginfod --disable-libdebuginfod \
|
||||||
|
@@ -42,6 +42,7 @@ PATH_MAP=( \
 	[tools/include/uapi/linux/bpf_common.h]=include/uapi/linux/bpf_common.h \
 	[tools/include/uapi/linux/bpf.h]=include/uapi/linux/bpf.h \
 	[tools/include/uapi/linux/btf.h]=include/uapi/linux/btf.h \
+	[tools/include/uapi/linux/fcntl.h]=include/uapi/linux/fcntl.h \
 	[tools/include/uapi/linux/if_link.h]=include/uapi/linux/if_link.h \
 	[tools/include/uapi/linux/if_xdp.h]=include/uapi/linux/if_xdp.h \
 	[tools/include/uapi/linux/netlink.h]=include/uapi/linux/netlink.h \
@@ -51,8 +52,8 @@ PATH_MAP=( \
 	[Documentation/bpf/libbpf]=docs \
 )

-LIBBPF_PATHS="${!PATH_MAP[@]} :^tools/lib/bpf/Makefile :^tools/lib/bpf/Build :^tools/lib/bpf/.gitignore :^tools/include/tools/libc_compat.h"
-LIBBPF_VIEW_PATHS="${PATH_MAP[@]}"
+LIBBPF_PATHS=("${!PATH_MAP[@]}" ":^tools/lib/bpf/Makefile" ":^tools/lib/bpf/Build" ":^tools/lib/bpf/.gitignore" ":^tools/include/tools/libc_compat.h")
+LIBBPF_VIEW_PATHS=("${PATH_MAP[@]}")
 LIBBPF_VIEW_EXCLUDE_REGEX='^src/(Makefile|Build|test_libbpf\.c|bpf_helper_defs\.h|\.gitignore)$|^docs/(\.gitignore|api\.rst|conf\.py)$|^docs/sphinx/.*'
 LINUX_VIEW_EXCLUDE_REGEX='^include/tools/libc_compat.h$'

@@ -85,7 +86,9 @@ commit_desc()
 # $2 - paths filter
 commit_signature()
 {
-	git show --pretty='("%s")|%aI|%b' --shortstat $1 -- ${2-.} | tr '\n' '|'
+	local ref=$1
+	shift
+	git show --pretty='("%s")|%aI|%b' --shortstat $ref -- "${@-.}" | tr '\n' '|'
 }

 # Cherry-pick commits touching libbpf-related files
@@ -104,7 +107,7 @@ cherry_pick_commits()
 	local libbpf_conflict_cnt
 	local desc

-	new_commits=$(git rev-list --no-merges --topo-order --reverse ${baseline_tag}..${tip_tag} ${LIBBPF_PATHS[@]})
+	new_commits=$(git rev-list --no-merges --topo-order --reverse ${baseline_tag}..${tip_tag} -- "${LIBBPF_PATHS[@]}")
 	for new_commit in ${new_commits}; do
 		desc="$(commit_desc ${new_commit})"
 		signature="$(commit_signature ${new_commit} "${LIBBPF_PATHS[@]}")"
@@ -138,7 +141,7 @@ cherry_pick_commits()
 		echo "Picking '${desc}'..."
 		if ! git cherry-pick ${new_commit} &>/dev/null; then
 			echo "Warning! Cherry-picking '${desc} failed, checking if it's non-libbpf files causing problems..."
-			libbpf_conflict_cnt=$(git diff --name-only --diff-filter=U -- ${LIBBPF_PATHS[@]} | wc -l)
+			libbpf_conflict_cnt=$(git diff --name-only --diff-filter=U -- "${LIBBPF_PATHS[@]}" | wc -l)
 			conflict_cnt=$(git diff --name-only | wc -l)
 			prompt_resolution=1

@@ -284,7 +287,7 @@ cd_to ${LIBBPF_REPO}
 helpers_changes=$(git status --porcelain src/bpf_helper_defs.h | wc -l)
 if ((${helpers_changes} == 1)); then
 	git add src/bpf_helper_defs.h
-	git commit -m "sync: auto-generate latest BPF helpers
+	git commit -s -m "sync: auto-generate latest BPF helpers

 Latest changes to BPF helper definitions.
 " -- src/bpf_helper_defs.h
@@ -306,7 +309,7 @@ Baseline bpf-next commit: ${BASELINE_COMMIT}\n\
 Checkpoint bpf-next commit: ${TIP_COMMIT}\n\
 Baseline bpf commit: ${BPF_BASELINE_COMMIT}\n\
 Checkpoint bpf commit: ${BPF_TIP_COMMIT}/" | \
-	git commit --file=-
+	git commit -s --file=-

 echo "SUCCESS! ${COMMIT_CNT} commits synced."

@@ -316,10 +319,10 @@ cd_to ${LINUX_REPO}
 git checkout -b ${VIEW_TAG} ${TIP_COMMIT}
 FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --tree-filter "${LIBBPF_TREE_FILTER}" ${VIEW_TAG}^..${VIEW_TAG}
 FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --subdirectory-filter __libbpf ${VIEW_TAG}^..${VIEW_TAG}
-git ls-files -- ${LIBBPF_VIEW_PATHS[@]} | grep -v -E "${LINUX_VIEW_EXCLUDE_REGEX}" > ${TMP_DIR}/linux-view.ls
+git ls-files -- "${LIBBPF_VIEW_PATHS[@]}" | grep -v -E "${LINUX_VIEW_EXCLUDE_REGEX}" > ${TMP_DIR}/linux-view.ls

 cd_to ${LIBBPF_REPO}
-git ls-files -- ${LIBBPF_VIEW_PATHS[@]} | grep -v -E "${LIBBPF_VIEW_EXCLUDE_REGEX}" > ${TMP_DIR}/github-view.ls
+git ls-files -- "${LIBBPF_VIEW_PATHS[@]}" | grep -v -E "${LIBBPF_VIEW_EXCLUDE_REGEX}" > ${TMP_DIR}/github-view.ls

 echo "Comparing list of files..."
 diff -u ${TMP_DIR}/linux-view.ls ${TMP_DIR}/github-view.ls
 src/Makefile | 45
@@ -8,10 +8,24 @@ else
   msg = @printf '  %-8s %s%s\n' "$(1)" "$(2)" "$(if $(3), $(3))";
 endif

-LIBBPF_VERSION := $(shell \
-	grep -oE '^LIBBPF_([0-9.]+)' libbpf.map | \
-	sort -rV | head -n1 | cut -d'_' -f2)
-LIBBPF_MAJOR_VERSION := $(firstword $(subst ., ,$(LIBBPF_VERSION)))
+LIBBPF_MAJOR_VERSION := 1
+LIBBPF_MINOR_VERSION := 1
+LIBBPF_PATCH_VERSION := 0
+LIBBPF_VERSION := $(LIBBPF_MAJOR_VERSION).$(LIBBPF_MINOR_VERSION).$(LIBBPF_PATCH_VERSION)
+LIBBPF_MAJMIN_VERSION := $(LIBBPF_MAJOR_VERSION).$(LIBBPF_MINOR_VERSION).0
+LIBBPF_MAP_VERSION := $(shell grep -oE '^LIBBPF_([0-9.]+)' libbpf.map | sort -rV | head -n1 | cut -d'_' -f2)
+ifneq ($(LIBBPF_MAJMIN_VERSION), $(LIBBPF_MAP_VERSION))
+$(error Libbpf release ($(LIBBPF_VERSION)) and map ($(LIBBPF_MAP_VERSION)) versions are out of sync!)
+endif
+
+define allow-override
+  $(if $(or $(findstring environment,$(origin $(1))),\
+            $(findstring command line,$(origin $(1)))),,\
+    $(eval $(1) = $(2)))
+endef
+
+$(call allow-override,CC,$(CROSS_COMPILE)cc)
+$(call allow-override,LD,$(CROSS_COMPILE)ld)

 TOPDIR = ..

@@ -21,8 +35,9 @@ ALL_CFLAGS := $(INCLUDES)
 SHARED_CFLAGS += -fPIC -fvisibility=hidden -DSHARED

 CFLAGS ?= -g -O2 -Werror -Wall -std=gnu89
-ALL_CFLAGS += $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
-ALL_LDFLAGS += $(LDFLAGS)
+ALL_CFLAGS += $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 $(EXTRA_CFLAGS)
+ALL_LDFLAGS += $(LDFLAGS) $(EXTRA_LDFLAGS)

 ifdef NO_PKG_CONFIG
 	ALL_LDFLAGS += -lelf -lz
 else
@@ -35,9 +50,9 @@ OBJDIR ?= .
 SHARED_OBJDIR := $(OBJDIR)/sharedobjs
 STATIC_OBJDIR := $(OBJDIR)/staticobjs
 OBJS := bpf.o btf.o libbpf.o libbpf_errno.o netlink.o \
-       nlattr.o str_error.o libbpf_probes.o bpf_prog_linfo.o xsk.o \
+       nlattr.o str_error.o libbpf_probes.o bpf_prog_linfo.o \
        btf_dump.o hashmap.o ringbuf.o strset.o linker.o gen_loader.o \
-       relo_core.o
+       relo_core.o usdt.o
 SHARED_OBJS := $(addprefix $(SHARED_OBJDIR)/,$(OBJS))
 STATIC_OBJS := $(addprefix $(STATIC_OBJDIR)/,$(OBJS))

@@ -49,9 +64,10 @@ ifndef BUILD_STATIC_ONLY
 	VERSION_SCRIPT := libbpf.map
 endif

-HEADERS := bpf.h libbpf.h btf.h libbpf_common.h libbpf_legacy.h xsk.h \
+HEADERS := bpf.h libbpf.h btf.h libbpf_common.h libbpf_legacy.h \
 	   bpf_helpers.h bpf_helper_defs.h bpf_tracing.h \
-	   bpf_endian.h bpf_core_read.h skel_internal.h libbpf_version.h
+	   bpf_endian.h bpf_core_read.h skel_internal.h libbpf_version.h \
+	   usdt.bpf.h
 UAPI_HEADERS := $(addprefix $(TOPDIR)/include/uapi/linux/,\
 			    bpf.h bpf_common.h btf.h)

@@ -61,7 +77,8 @@ INSTALL = install

 DESTDIR ?=

-ifeq ($(filter-out %64 %64be %64eb %64le %64el s390x, $(shell uname -m)),)
+HOSTARCH = $(firstword $(subst -, ,$(shell $(CC) -dumpmachine)))
+ifeq ($(filter-out %64 %64be %64eb %64le %64el s390x, $(HOSTARCH)),)
 	LIBSUBDIR := lib64
 else
 	LIBSUBDIR := lib
@@ -99,7 +116,7 @@ $(OBJDIR)/libbpf.so.$(LIBBPF_VERSION): $(SHARED_OBJS)
 		-Wl,-soname,libbpf.so.$(LIBBPF_MAJOR_VERSION) \
 		$^ $(ALL_LDFLAGS) -o $@

-$(OBJDIR)/libbpf.pc:
+$(OBJDIR)/libbpf.pc: force
 	$(Q)sed -e "s|@PREFIX@|$(PREFIX)|" \
 		-e "s|@LIBDIR@|$(LIBDIR_PC)|" \
 		-e "s|@VERSION@|$(LIBBPF_VERSION)|" \
@@ -152,7 +169,7 @@ clean:
 	$(call msg,CLEAN)
 	$(Q)rm -rf *.o *.a *.so *.so.* *.pc $(SHARED_OBJDIR) $(STATIC_OBJDIR)

-.PHONY: cscope tags
+.PHONY: cscope tags force
 cscope:
 	$(call msg,CSCOPE)
 	$(Q)ls *.c *.h > cscope.files
@@ -162,3 +179,5 @@ tags:
 	$(call msg,CTAGS)
 	$(Q)rm -f TAGS tags
 	$(Q)ls *.c *.h | xargs $(TAGS_PROG) -a
+
+force:
 src/bpf.h | 190
@@ -61,48 +61,6 @@ LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
 			      __u32 max_entries,
 			      const struct bpf_map_create_opts *opts);

-struct bpf_create_map_attr {
-	const char *name;
-	enum bpf_map_type map_type;
-	__u32 map_flags;
-	__u32 key_size;
-	__u32 value_size;
-	__u32 max_entries;
-	__u32 numa_node;
-	__u32 btf_fd;
-	__u32 btf_key_type_id;
-	__u32 btf_value_type_id;
-	__u32 map_ifindex;
-	union {
-		__u32 inner_map_fd;
-		__u32 btf_vmlinux_value_type_id;
-	};
-};
-
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_map_create() instead")
-LIBBPF_API int bpf_create_map_xattr(const struct bpf_create_map_attr *create_attr);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_map_create() instead")
-LIBBPF_API int bpf_create_map_node(enum bpf_map_type map_type, const char *name,
-				   int key_size, int value_size,
-				   int max_entries, __u32 map_flags, int node);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_map_create() instead")
-LIBBPF_API int bpf_create_map_name(enum bpf_map_type map_type, const char *name,
-				   int key_size, int value_size,
-				   int max_entries, __u32 map_flags);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_map_create() instead")
-LIBBPF_API int bpf_create_map(enum bpf_map_type map_type, int key_size,
-			      int value_size, int max_entries, __u32 map_flags);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_map_create() instead")
-LIBBPF_API int bpf_create_map_in_map_node(enum bpf_map_type map_type,
-					  const char *name, int key_size,
-					  int inner_map_fd, int max_entries,
-					  __u32 map_flags, int node);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_map_create() instead")
-LIBBPF_API int bpf_create_map_in_map(enum bpf_map_type map_type,
-				     const char *name, int key_size,
-				     int inner_map_fd, int max_entries,
-				     __u32 map_flags);
-
 struct bpf_prog_load_opts {
 	size_t sz; /* size of this struct for forward/backward compatibility */

@@ -145,54 +103,6 @@ LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type,
 			     const char *prog_name, const char *license,
 			     const struct bpf_insn *insns, size_t insn_cnt,
 			     const struct bpf_prog_load_opts *opts);
-/* this "specialization" should go away in libbpf 1.0 */
-LIBBPF_API int bpf_prog_load_v0_6_0(enum bpf_prog_type prog_type,
-				    const char *prog_name, const char *license,
-				    const struct bpf_insn *insns, size_t insn_cnt,
-				    const struct bpf_prog_load_opts *opts);
-
-/* This is an elaborate way to not conflict with deprecated bpf_prog_load()
- * API, defined in libbpf.h. Once we hit libbpf 1.0, all this will be gone.
- * With this approach, if someone is calling bpf_prog_load() with
- * 4 arguments, they will use the deprecated API, which keeps backwards
- * compatibility (both source code and binary). If bpf_prog_load() is called
- * with 6 arguments, though, it gets redirected to __bpf_prog_load.
- * So looking forward to libbpf 1.0 when this hack will be gone and
- * __bpf_prog_load() will be called just bpf_prog_load().
- */
-#ifndef bpf_prog_load
-#define bpf_prog_load(...) ___libbpf_overload(___bpf_prog_load, __VA_ARGS__)
-#define ___bpf_prog_load4(file, type, pobj, prog_fd) \
-	bpf_prog_load_deprecated(file, type, pobj, prog_fd)
-#define ___bpf_prog_load6(prog_type, prog_name, license, insns, insn_cnt, opts) \
-	bpf_prog_load(prog_type, prog_name, license, insns, insn_cnt, opts)
-#endif /* bpf_prog_load */
-
-struct bpf_load_program_attr {
-	enum bpf_prog_type prog_type;
-	enum bpf_attach_type expected_attach_type;
-	const char *name;
-	const struct bpf_insn *insns;
-	size_t insns_cnt;
-	const char *license;
-	union {
-		__u32 kern_version;
-		__u32 attach_prog_fd;
-	};
-	union {
-		__u32 prog_ifindex;
-		__u32 attach_btf_id;
-	};
-	__u32 prog_btf_fd;
-	__u32 func_info_rec_size;
-	const void *func_info;
-	__u32 func_info_cnt;
-	__u32 line_info_rec_size;
-	const void *line_info;
-	__u32 line_info_cnt;
-	__u32 log_level;
-	__u32 prog_flags;
-};

 /* Flags to direct loading requirements */
 #define MAPS_RELAX_COMPAT	0x01
@@ -200,22 +110,6 @@ struct bpf_load_program_attr {
 /* Recommended log buffer size */
 #define BPF_LOG_BUF_SIZE (UINT32_MAX >> 8) /* verifier maximum in kernels <= 5.1 */

-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_load() instead")
-LIBBPF_API int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
-				      char *log_buf, size_t log_buf_sz);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_load() instead")
-LIBBPF_API int bpf_load_program(enum bpf_prog_type type,
-				const struct bpf_insn *insns, size_t insns_cnt,
-				const char *license, __u32 kern_version,
-				char *log_buf, size_t log_buf_sz);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_load() instead")
-LIBBPF_API int bpf_verify_program(enum bpf_prog_type type,
-				  const struct bpf_insn *insns,
-				  size_t insns_cnt, __u32 prog_flags,
-				  const char *license, __u32 kern_version,
-				  char *log_buf, size_t log_buf_sz,
-				  int log_level);
-
 struct bpf_btf_load_opts {
 	size_t sz; /* size of this struct for forward/backward compatibility */

@@ -229,10 +123,6 @@ struct bpf_btf_load_opts {
 LIBBPF_API int bpf_btf_load(const void *btf_data, size_t btf_size,
 			    const struct bpf_btf_load_opts *opts);

-LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_btf_load() instead")
-LIBBPF_API int bpf_load_btf(const void *btf, __u32 btf_size, char *log_buf,
-			    __u32 log_buf_size, bool do_log);
-
 LIBBPF_API int bpf_map_update_elem(int fd, const void *key, const void *value,
 				   __u64 flags);

@@ -244,6 +134,7 @@ LIBBPF_API int bpf_map_lookup_and_delete_elem(int fd, const void *key,
 LIBBPF_API int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key,
 						    void *value, __u64 flags);
 LIBBPF_API int bpf_map_delete_elem(int fd, const void *key);
+LIBBPF_API int bpf_map_delete_elem_flags(int fd, const void *key, __u64 flags);
 LIBBPF_API int bpf_map_get_next_key(int fd, const void *key, void *next_key);
 LIBBPF_API int bpf_map_freeze(int fd);

@@ -379,8 +270,19 @@ LIBBPF_API int bpf_map_update_batch(int fd, const void *keys, const void *values
 				    __u32 *count,
 				    const struct bpf_map_batch_opts *opts);

+struct bpf_obj_get_opts {
+	size_t sz; /* size of this struct for forward/backward compatibility */
+
+	__u32 file_flags;
+
+	size_t :0;
+};
+#define bpf_obj_get_opts__last_field file_flags
+
 LIBBPF_API int bpf_obj_pin(int fd, const char *pathname);
 LIBBPF_API int bpf_obj_get(const char *pathname);
+LIBBPF_API int bpf_obj_get_opts(const char *pathname,
+				const struct bpf_obj_get_opts *opts);

 struct bpf_prog_attach_opts {
 	size_t sz; /* size of this struct for forward/backward compatibility */
@@ -394,10 +296,6 @@ LIBBPF_API int bpf_prog_attach(int prog_fd, int attachable_fd,
 LIBBPF_API int bpf_prog_attach_opts(int prog_fd, int attachable_fd,
 				    enum bpf_attach_type type,
 				    const struct bpf_prog_attach_opts *opts);
-LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_prog_attach_opts() instead")
-LIBBPF_API int bpf_prog_attach_xattr(int prog_fd, int attachable_fd,
-				     enum bpf_attach_type type,
-				     const struct bpf_prog_attach_opts *opts);
 LIBBPF_API int bpf_prog_detach(int attachable_fd, enum bpf_attach_type type);
 LIBBPF_API int bpf_prog_detach2(int prog_fd, int attachable_fd,
 				enum bpf_attach_type type);
@@ -413,10 +311,20 @@ struct bpf_link_create_opts {
 		struct {
 			__u64 bpf_cookie;
 		} perf_event;
+		struct {
+			__u32 flags;
+			__u32 cnt;
+			const char **syms;
+			const unsigned long *addrs;
+			const __u64 *cookies;
+		} kprobe_multi;
+		struct {
+			__u64 cookie;
+		} tracing;
 	};
 	size_t :0;
 };
-#define bpf_link_create_opts__last_field perf_event
+#define bpf_link_create_opts__last_field kprobe_multi.cookies

 LIBBPF_API int bpf_link_create(int prog_fd, int target_fd,
 			       enum bpf_attach_type attach_type,
@@ -453,36 +361,63 @@ struct bpf_prog_test_run_attr {
 			      * out: length of cxt_out */
 };

-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_test_run_opts() instead")
-LIBBPF_API int bpf_prog_test_run_xattr(struct bpf_prog_test_run_attr *test_attr);
-
-/*
- * bpf_prog_test_run does not check that data_out is large enough. Consider
- * using bpf_prog_test_run_opts instead.
- */
-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_test_run_opts() instead")
-LIBBPF_API int bpf_prog_test_run(int prog_fd, int repeat, void *data,
-				 __u32 size, void *data_out, __u32 *size_out,
-				 __u32 *retval, __u32 *duration);
 LIBBPF_API int bpf_prog_get_next_id(__u32 start_id, __u32 *next_id);
 LIBBPF_API int bpf_map_get_next_id(__u32 start_id, __u32 *next_id);
 LIBBPF_API int bpf_btf_get_next_id(__u32 start_id, __u32 *next_id);
 LIBBPF_API int bpf_link_get_next_id(__u32 start_id, __u32 *next_id);

+struct bpf_get_fd_by_id_opts {
+	size_t sz; /* size of this struct for forward/backward compatibility */
+	__u32 open_flags; /* permissions requested for the operation on fd */
+	size_t :0;
+};
+#define bpf_get_fd_by_id_opts__last_field open_flags
+
 LIBBPF_API int bpf_prog_get_fd_by_id(__u32 id);
+LIBBPF_API int bpf_prog_get_fd_by_id_opts(__u32 id,
+				const struct bpf_get_fd_by_id_opts *opts);
 LIBBPF_API int bpf_map_get_fd_by_id(__u32 id);
+LIBBPF_API int bpf_map_get_fd_by_id_opts(__u32 id,
+				const struct bpf_get_fd_by_id_opts *opts);
 LIBBPF_API int bpf_btf_get_fd_by_id(__u32 id);
+LIBBPF_API int bpf_btf_get_fd_by_id_opts(__u32 id,
+				const struct bpf_get_fd_by_id_opts *opts);
 LIBBPF_API int bpf_link_get_fd_by_id(__u32 id);
+LIBBPF_API int bpf_link_get_fd_by_id_opts(__u32 id,
+				const struct bpf_get_fd_by_id_opts *opts);
 LIBBPF_API int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len);

+struct bpf_prog_query_opts {
+	size_t sz; /* size of this struct for forward/backward compatibility */
+	__u32 query_flags;
+	__u32 attach_flags; /* output argument */
+	__u32 *prog_ids;
+	__u32 prog_cnt; /* input+output argument */
+	__u32 *prog_attach_flags;
+};
+#define bpf_prog_query_opts__last_field prog_attach_flags
+
+LIBBPF_API int bpf_prog_query_opts(int target_fd,
+				   enum bpf_attach_type type,
+				   struct bpf_prog_query_opts *opts);
 LIBBPF_API int bpf_prog_query(int target_fd, enum bpf_attach_type type,
 			      __u32 query_flags, __u32 *attach_flags,
 			      __u32 *prog_ids, __u32 *prog_cnt);

 LIBBPF_API int bpf_raw_tracepoint_open(const char *name, int prog_fd);
 LIBBPF_API int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf,
 				 __u32 *buf_len, __u32 *prog_id, __u32 *fd_type,
 				 __u64 *probe_offset, __u64 *probe_addr);

+#ifdef __cplusplus
+/* forward-declaring enums in C++ isn't compatible with pure C enums, so
+ * instead define bpf_enable_stats() as accepting int as an input
+ */
+LIBBPF_API int bpf_enable_stats(int type);
+#else
 enum bpf_stats_type; /* defined in up-to-date linux/bpf.h */
 LIBBPF_API int bpf_enable_stats(enum bpf_stats_type type);
+#endif

 struct bpf_prog_bind_opts {
 	size_t sz; /* size of this struct for forward/backward compatibility */
@@ -512,8 +447,9 @@ struct bpf_test_run_opts {
 	__u32 duration;      /* out: average per repetition in ns */
 	__u32 flags;
 	__u32 cpu;
+	__u32 batch_size;
 };
-#define bpf_test_run_opts__last_field cpu
+#define bpf_test_run_opts__last_field batch_size

 LIBBPF_API int bpf_prog_test_run_opts(int prog_fd,
 				      struct bpf_test_run_opts *opts);
@ -29,6 +29,7 @@ enum bpf_type_id_kind {
|
|||||||
enum bpf_type_info_kind {
|
enum bpf_type_info_kind {
|
||||||
BPF_TYPE_EXISTS = 0, /* type existence in target kernel */
|
BPF_TYPE_EXISTS = 0, /* type existence in target kernel */
|
||||||
BPF_TYPE_SIZE = 1, /* type size in target kernel */
|
BPF_TYPE_SIZE = 1, /* type size in target kernel */
|
||||||
|
BPF_TYPE_MATCHES = 2, /* type match in target kernel */
|
||||||
};
|
};
|
||||||
|
|
||||||
/* second argument to __builtin_preserve_enum_value() built-in */
|
/* second argument to __builtin_preserve_enum_value() built-in */
|
||||||
@@ -110,21 +111,50 @@ enum bpf_enum_value_kind {
 		val; \
 })
 
+#define ___bpf_field_ref1(field) (field)
+#define ___bpf_field_ref2(type, field) (((typeof(type) *)0)->field)
+#define ___bpf_field_ref(args...) \
+	___bpf_apply(___bpf_field_ref, ___bpf_narg(args))(args)
+
 /*
  * Convenience macro to check that field actually exists in target kernel's.
  * Returns:
  *    1, if matching field is present in target kernel;
  *    0, if no matching field found.
+ *
+ * Supports two forms:
+ *   - field reference through variable access:
+ *       bpf_core_field_exists(p->my_field);
+ *   - field reference through type and field names:
+ *       bpf_core_field_exists(struct my_type, my_field).
  */
-#define bpf_core_field_exists(field) \
-	__builtin_preserve_field_info(field, BPF_FIELD_EXISTS)
+#define bpf_core_field_exists(field...) \
+	__builtin_preserve_field_info(___bpf_field_ref(field), BPF_FIELD_EXISTS)
 
 /*
  * Convenience macro to get the byte size of a field. Works for integers,
  * struct/unions, pointers, arrays, and enums.
+ *
+ * Supports two forms:
+ *   - field reference through variable access:
+ *       bpf_core_field_size(p->my_field);
+ *   - field reference through type and field names:
+ *       bpf_core_field_size(struct my_type, my_field).
  */
-#define bpf_core_field_size(field) \
-	__builtin_preserve_field_info(field, BPF_FIELD_BYTE_SIZE)
+#define bpf_core_field_size(field...) \
+	__builtin_preserve_field_info(___bpf_field_ref(field), BPF_FIELD_BYTE_SIZE)
+
+/*
+ * Convenience macro to get field's byte offset.
+ *
+ * Supports two forms:
+ *   - field reference through variable access:
+ *       bpf_core_field_offset(p->my_field);
+ *   - field reference through type and field names:
+ *       bpf_core_field_offset(struct my_type, my_field).
+ */
+#define bpf_core_field_offset(field...) \
+	__builtin_preserve_field_info(___bpf_field_ref(field), BPF_FIELD_BYTE_OFFSET)
 
 /*
  * Convenience macro to get BTF type ID of a specified type, using a local BTF
@@ -154,6 +184,16 @@ enum bpf_enum_value_kind {
 #define bpf_core_type_exists(type) \
 	__builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_EXISTS)
 
+/*
+ * Convenience macro to check that provided named type
+ * (struct/union/enum/typedef) "matches" that in a target kernel.
+ * Returns:
+ *    1, if the type matches in the target kernel's BTF;
+ *    0, if the type does not match any in the target kernel
+ */
+#define bpf_core_type_matches(type) \
+	__builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_MATCHES)
+
 /*
  * Convenience macro to get the byte size of a provided named type
  * (struct/union/enum/typedef) in a target kernel.
@@ -29,6 +29,7 @@ struct tcp_request_sock;
 struct udp6_sock;
 struct unix_sock;
 struct task_struct;
+struct cgroup;
 struct __sk_buff;
 struct sk_msg_md;
 struct xdp_md;
@@ -38,6 +39,10 @@ struct inode;
 struct socket;
 struct file;
 struct bpf_timer;
+struct mptcp_sock;
+struct bpf_dynptr;
+struct iphdr;
+struct ipv6hdr;
 
 /*
  * bpf_map_lookup_elem
@@ -532,6 +537,9 @@ static long (*bpf_skb_get_tunnel_key)(struct __sk_buff *skb, struct bpf_tunnel_k
  * sending the packet. This flag was added for GRE
  * encapsulation, but might be used with other protocols
  * as well in the future.
+ * **BPF_F_NO_TUNNEL_KEY**
+ * Add a flag to tunnel metadata indicating that no tunnel
+ * key should be set in the resulting tunnel header.
  *
  * Here is a typical usage on the transmit path:
  *
@@ -961,8 +969,8 @@ static long (*bpf_probe_write_user)(void *dst, const void *src, __u32 len) = (vo
  * Returns
  * The return value depends on the result of the test, and can be:
  *
- * * 0, if current task belongs to the cgroup2.
- * * 1, if current task does not belong to the cgroup2.
+ * * 1, if current task belongs to the cgroup2.
+ * * 0, if current task does not belong to the cgroup2.
  * * A negative error code, if an error occurred.
  */
 static long (*bpf_current_task_under_cgroup)(void *map, __u32 index) = (void *) 37;
@@ -1001,7 +1009,8 @@ static long (*bpf_skb_change_tail)(struct __sk_buff *skb, __u32 len, __u64 flags
  * Pull in non-linear data in case the *skb* is non-linear and not
  * all of *len* are part of the linear section. Make *len* bytes
  * from *skb* readable and writable. If a zero value is passed for
- * *len*, then the whole length of the *skb* is pulled.
+ * *len*, then all bytes in the linear part of *skb* will be made
+ * readable and writable.
  *
  * This helper is only needed for reading and writing with direct
  * packet access.
@@ -1204,14 +1213,19 @@ static long (*bpf_set_hash)(struct __sk_buff *skb, __u32 hash) = (void *) 48;
  * * **SOL_SOCKET**, which supports the following *optname*\ s:
  * **SO_RCVBUF**, **SO_SNDBUF**, **SO_MAX_PACING_RATE**,
  * **SO_PRIORITY**, **SO_RCVLOWAT**, **SO_MARK**,
- * **SO_BINDTODEVICE**, **SO_KEEPALIVE**.
+ * **SO_BINDTODEVICE**, **SO_KEEPALIVE**, **SO_REUSEADDR**,
+ * **SO_REUSEPORT**, **SO_BINDTOIFINDEX**, **SO_TXREHASH**.
  * * **IPPROTO_TCP**, which supports the following *optname*\ s:
  * **TCP_CONGESTION**, **TCP_BPF_IW**,
  * **TCP_BPF_SNDCWND_CLAMP**, **TCP_SAVE_SYN**,
  * **TCP_KEEPIDLE**, **TCP_KEEPINTVL**, **TCP_KEEPCNT**,
- * **TCP_SYNCNT**, **TCP_USER_TIMEOUT**, **TCP_NOTSENT_LOWAT**.
+ * **TCP_SYNCNT**, **TCP_USER_TIMEOUT**, **TCP_NOTSENT_LOWAT**,
+ * **TCP_NODELAY**, **TCP_MAXSEG**, **TCP_WINDOW_CLAMP**,
+ * **TCP_THIN_LINEAR_TIMEOUTS**, **TCP_BPF_DELACK_MAX**,
+ * **TCP_BPF_RTO_MIN**.
  * * **IPPROTO_IP**, which supports *optname* **IP_TOS**.
- * * **IPPROTO_IPV6**, which supports *optname* **IPV6_TCLASS**.
+ * * **IPPROTO_IPV6**, which supports the following *optname*\ s:
+ * **IPV6_TCLASS**, **IPV6_AUTOFLOWLABEL**.
  *
  * Returns
  * 0 on success, or a negative error in case of failure.
@@ -1234,10 +1248,12 @@ static long (*bpf_setsockopt)(void *bpf_socket, int level, int optname, void *op
  * There are two supported modes at this time:
  *
  * * **BPF_ADJ_ROOM_MAC**: Adjust room at the mac layer
- * (room space is added or removed below the layer 2 header).
+ * (room space is added or removed between the layer 2 and
+ * layer 3 headers).
  *
  * * **BPF_ADJ_ROOM_NET**: Adjust room at the network layer
- * (room space is added or removed below the layer 3 header).
+ * (room space is added or removed between the layer 3 and
+ * layer 4 headers).
  *
  * The following flags are supported at this time:
  *
@@ -1299,7 +1315,7 @@ static long (*bpf_skb_adjust_room)(struct __sk_buff *skb, __s32 len_diff, __u32
  * **XDP_REDIRECT** on success, or the value of the two lower bits
  * of the *flags* argument on error.
  */
-static long (*bpf_redirect_map)(void *map, __u32 key, __u64 flags) = (void *) 51;
+static long (*bpf_redirect_map)(void *map, __u64 key, __u64 flags) = (void *) 51;
 
 /*
  * bpf_sk_redirect_map
@@ -1458,12 +1474,10 @@ static long (*bpf_perf_prog_read_value)(struct bpf_perf_event_data *ctx, struct
  * and **BPF_CGROUP_INET6_CONNECT**.
  *
  * This helper actually implements a subset of **getsockopt()**.
- * It supports the following *level*\ s:
- *
- * * **IPPROTO_TCP**, which supports *optname*
- * **TCP_CONGESTION**.
- * * **IPPROTO_IP**, which supports *optname* **IP_TOS**.
- * * **IPPROTO_IPV6**, which supports *optname* **IPV6_TCLASS**.
+ * It supports the same set of *optname*\ s that is supported by
+ * the **bpf_setsockopt**\ () helper. The exceptions are
+ * **TCP_BPF_*** is **bpf_setsockopt**\ () only and
+ * **TCP_SAVED_SYN** is **bpf_getsockopt**\ () only.
  *
  * Returns
  * 0 on success, or a negative error in case of failure.
@@ -1737,8 +1751,18 @@ static long (*bpf_skb_get_xfrm_state)(struct __sk_buff *skb, __u32 index, struct
  * **BPF_F_USER_STACK**
  * Collect a user space stack instead of a kernel stack.
  * **BPF_F_USER_BUILD_ID**
- * Collect buildid+offset instead of ips for user stack,
- * only valid if **BPF_F_USER_STACK** is also specified.
+ * Collect (build_id, file_offset) instead of ips for user
+ * stack, only valid if **BPF_F_USER_STACK** is also
+ * specified.
+ *
+ * *file_offset* is an offset relative to the beginning
+ * of the executable or shared object file backing the vma
+ * which the *ip* falls in. It is *not* an offset relative
+ * to that object's base address. Accordingly, it must be
+ * adjusted by adding (sh_addr - sh_offset), where
+ * sh_{addr,offset} correspond to the executable section
+ * containing *file_offset* in the object, for comparisons
+ * to symbols' st_value to be valid.
  *
  * **bpf_get_stack**\ () can collect up to
  * **PERF_MAX_STACK_DEPTH** both kernel and user frames, subject
@@ -1752,8 +1776,8 @@ static long (*bpf_skb_get_xfrm_state)(struct __sk_buff *skb, __u32 index, struct
  * # sysctl kernel.perf_event_max_stack=<new value>
  *
  * Returns
- * A non-negative value equal to or less than *size* on success,
- * or a negative error in case of failure.
+ * The non-negative copied *buf* length equal to or less than
+ * *size* on success, or a negative error in case of failure.
  */
 static long (*bpf_get_stack)(void *ctx, void *buf, __u32 size, __u64 flags) = (void *) 67;
 
@@ -2461,10 +2485,11 @@ static struct bpf_sock *(*bpf_skc_lookup_tcp)(void *ctx, struct bpf_sock_tuple *
  *
  * *iph* points to the start of the IPv4 or IPv6 header, while
  * *iph_len* contains **sizeof**\ (**struct iphdr**) or
- * **sizeof**\ (**struct ip6hdr**).
+ * **sizeof**\ (**struct ipv6hdr**).
  *
  * *th* points to the start of the TCP header, while *th_len*
- * contains **sizeof**\ (**struct tcphdr**).
+ * contains the length of the TCP header (at least
+ * **sizeof**\ (**struct tcphdr**)).
  *
  * Returns
  * 0 if *iph* and *th* are a valid SYN cookie ACK, or a negative
@@ -2687,10 +2712,11 @@ static long (*bpf_send_signal)(__u32 sig) = (void *) 109;
  *
  * *iph* points to the start of the IPv4 or IPv6 header, while
  * *iph_len* contains **sizeof**\ (**struct iphdr**) or
- * **sizeof**\ (**struct ip6hdr**).
+ * **sizeof**\ (**struct ipv6hdr**).
  *
  * *th* points to the start of the TCP header, while *th_len*
- * contains the length of the TCP header.
+ * contains the length of the TCP header with options (at least
+ * **sizeof**\ (**struct tcphdr**)).
  *
  * Returns
  * On success, lower 32 bits hold the generated SYN cookie in
@@ -3305,8 +3331,8 @@ static struct udp6_sock *(*bpf_skc_to_udp6_sock)(void *sk) = (void *) 140;
  * # sysctl kernel.perf_event_max_stack=<new value>
  *
  * Returns
- * A non-negative value equal to or less than *size* on success,
- * or a negative error in case of failure.
+ * The non-negative copied *buf* length equal to or less than
+ * *size* on success, or a negative error in case of failure.
  */
 static long (*bpf_get_task_stack)(struct task_struct *task, void *buf, __u32 size, __u64 flags) = (void *) 141;
 
@@ -3407,7 +3433,7 @@ static long (*bpf_load_hdr_opt)(struct bpf_sock_ops *skops, void *searchby_res,
  *
  * **-EEXIST** if the option already exists.
  *
- * **-EFAULT** on failrue to parse the existing header options.
+ * **-EFAULT** on failure to parse the existing header options.
  *
  * **-EPERM** if the helper cannot be used under the current
  * *skops*\ **->op**.
@@ -3667,7 +3693,7 @@ static long (*bpf_redirect_peer)(__u32 ifindex, __u64 flags) = (void *) 155;
  * a *map* with *task* as the **key**. From this
  * perspective, the usage is not much different from
  * **bpf_map_lookup_elem**\ (*map*, **&**\ *task*) except this
- * helper enforces the key must be an task_struct and the map must also
+ * helper enforces the key must be a task_struct and the map must also
  * be a **BPF_MAP_TYPE_TASK_STORAGE**.
  *
  * Underneath, the value is stored locally at *task* instead of
@@ -3745,7 +3771,7 @@ static __u64 (*bpf_ktime_get_coarse_ns)(void) = (void *) 160;
 /*
  * bpf_ima_inode_hash
  *
- * Returns the stored IMA hash of the *inode* (if it's avaialable).
+ * Returns the stored IMA hash of the *inode* (if it's available).
  * If the hash is larger than *size*, then only *size*
  * bytes will be copied to *dst*
  *
@@ -3777,12 +3803,12 @@ static struct socket *(*bpf_sock_from_file)(struct file *file) = (void *) 162;
  *
  * The argument *len_diff* can be used for querying with a planned
  * size change. This allows to check MTU prior to changing packet
- * ctx. Providing an *len_diff* adjustment that is larger than the
+ * ctx. Providing a *len_diff* adjustment that is larger than the
  * actual packet size (resulting in negative packet size) will in
- * principle not exceed the MTU, why it is not considered a
- * failure. Other BPF-helpers are needed for performing the
- * planned size change, why the responsability for catch a negative
- * packet size belong in those helpers.
+ * principle not exceed the MTU, which is why it is not considered
+ * a failure. Other BPF helpers are needed for performing the
+ * planned size change; therefore the responsibility for catching
+ * a negative packet size belongs in those helpers.
  *
  * Specifying *ifindex* zero means the MTU check is performed
  * against the current net device. This is practical if this isn't
@@ -4021,6 +4047,7 @@ static long (*bpf_timer_cancel)(struct bpf_timer *timer) = (void *) 172;
  *
  * Returns
  * Address of the traced function.
+ * 0 for kprobes placed within the function (not at the entry).
  */
 static __u64 (*bpf_get_func_ip)(void *ctx) = (void *) 173;
 
@@ -4189,13 +4216,13 @@ static long (*bpf_strncmp)(const char *s1, __u32 s1_sz, const char *s2) = (void
 /*
  * bpf_get_func_arg
  *
- * Get **n**-th argument (zero based) of the traced function (for tracing programs)
+ * Get **n**-th argument register (zero based) of the traced function (for tracing programs)
  * returned in **value**.
  *
  *
  * Returns
  * 0 on success.
- * **-EINVAL** if n >= arguments count of traced function.
+ * **-EINVAL** if n >= argument register count of traced function.
  */
 static long (*bpf_get_func_arg)(void *ctx, __u32 n, __u64 *value) = (void *) 183;
 
@@ -4215,32 +4242,45 @@ static long (*bpf_get_func_ret)(void *ctx, __u64 *value) = (void *) 184;
 /*
  * bpf_get_func_arg_cnt
  *
- * Get number of arguments of the traced function (for tracing programs).
+ * Get number of registers of the traced function (for tracing programs) where
+ * function arguments are stored in these registers.
  *
  *
  * Returns
- * The number of arguments of the traced function.
+ * The number of argument registers of the traced function.
  */
 static long (*bpf_get_func_arg_cnt)(void *ctx) = (void *) 185;
 
 /*
  * bpf_get_retval
  *
- * Get the syscall's return value that will be returned to userspace.
+ * Get the BPF program's return value that will be returned to the upper layers.
  *
- * This helper is currently supported by cgroup programs only.
+ * This helper is currently supported by cgroup programs and only by the hooks
+ * where BPF program's return value is returned to the userspace via errno.
  *
  * Returns
- * The syscall's return value.
+ * The BPF program's return value.
  */
 static int (*bpf_get_retval)(void) = (void *) 186;
 
 /*
  * bpf_set_retval
  *
- * Set the syscall's return value that will be returned to userspace.
+ * Set the BPF program's return value that will be returned to the upper layers.
+ *
+ * This helper is currently supported by cgroup programs and only by the hooks
+ * where BPF program's return value is returned to the userspace via errno.
+ *
+ * Note that there is the following corner case where the program exports an error
+ * via bpf_set_retval but signals success via 'return 1':
+ *
+ * bpf_set_retval(-EPERM);
+ * return 1;
+ *
+ * In this case, the BPF program's return value will use helper's -EPERM. This
+ * still holds true for cgroup/bind{4,6} which supports extra 'return 3' success case.
  *
- * This helper is currently supported by cgroup programs only.
  *
  * Returns
  * 0 on success, or a negative error in case of failure.
@@ -4295,4 +4335,384 @@ static long (*bpf_xdp_store_bytes)(struct xdp_md *xdp_md, __u32 offset, void *bu
  */
 static long (*bpf_copy_from_user_task)(void *dst, __u32 size, const void *user_ptr, struct task_struct *tsk, __u64 flags) = (void *) 191;
+
+/*
+ * bpf_skb_set_tstamp
+ *
+ * Change the __sk_buff->tstamp_type to *tstamp_type*
+ * and set *tstamp* to the __sk_buff->tstamp together.
+ *
+ * If there is no need to change the __sk_buff->tstamp_type,
+ * the tstamp value can be directly written to __sk_buff->tstamp
+ * instead.
+ *
+ * BPF_SKB_TSTAMP_DELIVERY_MONO is the only tstamp that
+ * will be kept during bpf_redirect_*(). A non zero
+ * *tstamp* must be used with the BPF_SKB_TSTAMP_DELIVERY_MONO
+ * *tstamp_type*.
+ *
+ * A BPF_SKB_TSTAMP_UNSPEC *tstamp_type* can only be used
+ * with a zero *tstamp*.
+ *
+ * Only IPv4 and IPv6 skb->protocol are supported.
+ *
+ * This function is most useful when it needs to set a
+ * mono delivery time to __sk_buff->tstamp and then
+ * bpf_redirect_*() to the egress of an iface. For example,
+ * changing the (rcv) timestamp in __sk_buff->tstamp at
+ * ingress to a mono delivery time and then bpf_redirect_*()
+ * to sch_fq@phy-dev.
+ *
+ * Returns
+ * 0 on success.
+ * **-EINVAL** for invalid input
+ * **-EOPNOTSUPP** for unsupported protocol
+ */
+static long (*bpf_skb_set_tstamp)(struct __sk_buff *skb, __u64 tstamp, __u32 tstamp_type) = (void *) 192;
+
+/*
+ * bpf_ima_file_hash
+ *
+ * Returns a calculated IMA hash of the *file*.
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*
+ *
+ * Returns
+ * The **hash_algo** is returned on success,
+ * **-EOPNOTSUP** if the hash calculation failed or **-EINVAL** if
+ * invalid arguments are passed.
+ */
+static long (*bpf_ima_file_hash)(struct file *file, void *dst, __u32 size) = (void *) 193;
+
+/*
+ * bpf_kptr_xchg
+ *
+ * Exchange kptr at pointer *map_value* with *ptr*, and return the
+ * old value. *ptr* can be NULL, otherwise it must be a referenced
+ * pointer which will be released when this helper is called.
+ *
+ * Returns
+ * The old value of kptr (which can be NULL). The returned pointer
+ * if not NULL, is a reference which must be released using its
+ * corresponding release function, or moved into a BPF map before
+ * program exit.
+ */
+static void *(*bpf_kptr_xchg)(void *map_value, void *ptr) = (void *) 194;
+
+/*
+ * bpf_map_lookup_percpu_elem
+ *
+ * Perform a lookup in *percpu map* for an entry associated to
+ * *key* on *cpu*.
+ *
+ * Returns
+ * Map value associated to *key* on *cpu*, or **NULL** if no entry
+ * was found or *cpu* is invalid.
+ */
+static void *(*bpf_map_lookup_percpu_elem)(void *map, const void *key, __u32 cpu) = (void *) 195;
+
+/*
+ * bpf_skc_to_mptcp_sock
+ *
+ * Dynamically cast a *sk* pointer to a *mptcp_sock* pointer.
+ *
+ * Returns
+ * *sk* if casting is valid, or **NULL** otherwise.
+ */
+static struct mptcp_sock *(*bpf_skc_to_mptcp_sock)(void *sk) = (void *) 196;
+
+/*
+ * bpf_dynptr_from_mem
+ *
+ * Get a dynptr to local memory *data*.
+ *
+ * *data* must be a ptr to a map value.
+ * The maximum *size* supported is DYNPTR_MAX_SIZE.
+ * *flags* is currently unused.
+ *
+ * Returns
+ * 0 on success, -E2BIG if the size exceeds DYNPTR_MAX_SIZE,
+ * -EINVAL if flags is not 0.
+ */
+static long (*bpf_dynptr_from_mem)(void *data, __u32 size, __u64 flags, struct bpf_dynptr *ptr) = (void *) 197;
+
+/*
+ * bpf_ringbuf_reserve_dynptr
+ *
+ * Reserve *size* bytes of payload in a ring buffer *ringbuf*
+ * through the dynptr interface. *flags* must be 0.
+ *
+ * Please note that a corresponding bpf_ringbuf_submit_dynptr or
+ * bpf_ringbuf_discard_dynptr must be called on *ptr*, even if the
+ * reservation fails. This is enforced by the verifier.
+ *
+ * Returns
+ * 0 on success, or a negative error in case of failure.
+ */
+static long (*bpf_ringbuf_reserve_dynptr)(void *ringbuf, __u32 size, __u64 flags, struct bpf_dynptr *ptr) = (void *) 198;
+
+/*
+ * bpf_ringbuf_submit_dynptr
+ *
+ * Submit reserved ring buffer sample, pointed to by *data*,
+ * through the dynptr interface. This is a no-op if the dynptr is
+ * invalid/null.
+ *
+ * For more information on *flags*, please see
+ * 'bpf_ringbuf_submit'.
+ *
+ * Returns
+ * Nothing. Always succeeds.
+ */
+static void (*bpf_ringbuf_submit_dynptr)(struct bpf_dynptr *ptr, __u64 flags) = (void *) 199;
+
+/*
+ * bpf_ringbuf_discard_dynptr
+ *
+ * Discard reserved ring buffer sample through the dynptr
+ * interface. This is a no-op if the dynptr is invalid/null.
+ *
+ * For more information on *flags*, please see
+ * 'bpf_ringbuf_discard'.
+ *
+ * Returns
+ * Nothing. Always succeeds.
+ */
+static void (*bpf_ringbuf_discard_dynptr)(struct bpf_dynptr *ptr, __u64 flags) = (void *) 200;
+
+/*
+ * bpf_dynptr_read
+ *
+ * Read *len* bytes from *src* into *dst*, starting from *offset*
+ * into *src*.
+ * *flags* is currently unused.
+ *
+ * Returns
+ * 0 on success, -E2BIG if *offset* + *len* exceeds the length
+ * of *src*'s data, -EINVAL if *src* is an invalid dynptr or if
+ * *flags* is not 0.
+ */
+static long (*bpf_dynptr_read)(void *dst, __u32 len, const struct bpf_dynptr *src, __u32 offset, __u64 flags) = (void *) 201;
+
+/*
+ * bpf_dynptr_write
+ *
+ * Write *len* bytes from *src* into *dst*, starting from *offset*
+ * into *dst*.
+ * *flags* is currently unused.
+ *
+ * Returns
+ * 0 on success, -E2BIG if *offset* + *len* exceeds the length
+ * of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
+ * is a read-only dynptr or if *flags* is not 0.
+ */
+static long (*bpf_dynptr_write)(const struct bpf_dynptr *dst, __u32 offset, void *src, __u32 len, __u64 flags) = (void *) 202;
+
+/*
+ * bpf_dynptr_data
+ *
+ * Get a pointer to the underlying dynptr data.
+ *
+ * *len* must be a statically known value. The returned data slice
+ * is invalidated whenever the dynptr is invalidated.
+ *
+ * Returns
+ * Pointer to the underlying dynptr data, NULL if the dynptr is
+ * read-only, if the dynptr is invalid, or if the offset and length
+ * is out of bounds.
+ */
+static void *(*bpf_dynptr_data)(const struct bpf_dynptr *ptr, __u32 offset, __u32 len) = (void *) 203;
+
+/*
+ * bpf_tcp_raw_gen_syncookie_ipv4
+ *
+ * Try to issue a SYN cookie for the packet with corresponding
+ * IPv4/TCP headers, *iph* and *th*, without depending on a
+ * listening socket.
+ *
+ * *iph* points to the IPv4 header.
+ *
+ * *th* points to the start of the TCP header, while *th_len*
+ * contains the length of the TCP header (at least
+ * **sizeof**\ (**struct tcphdr**)).
+ *
+ * Returns
+ * On success, lower 32 bits hold the generated SYN cookie in
+ * followed by 16 bits which hold the MSS value for that cookie,
+ * and the top 16 bits are unused.
+ *
+ * On failure, the returned value is one of the following:
+ *
+ * **-EINVAL** if *th_len* is invalid.
+ */
+static __s64 (*bpf_tcp_raw_gen_syncookie_ipv4)(struct iphdr *iph, struct tcphdr *th, __u32 th_len) = (void *) 204;
+
+/*
+ * bpf_tcp_raw_gen_syncookie_ipv6
+ *
+ * Try to issue a SYN cookie for the packet with corresponding
+ * IPv6/TCP headers, *iph* and *th*, without depending on a
+ * listening socket.
+ *
+ * *iph* points to the IPv6 header.
+ *
+ * *th* points to the start of the TCP header, while *th_len*
+ * contains the length of the TCP header (at least
+ * **sizeof**\ (**struct tcphdr**)).
+ *
+ * Returns
+ * On success, lower 32 bits hold the generated SYN cookie in
+ * followed by 16 bits which hold the MSS value for that cookie,
+ * and the top 16 bits are unused.
+ *
+ * On failure, the returned value is one of the following:
+ *
+ * **-EINVAL** if *th_len* is invalid.
+ *
+ * **-EPROTONOSUPPORT** if CONFIG_IPV6 is not builtin.
+ */
+static __s64 (*bpf_tcp_raw_gen_syncookie_ipv6)(struct ipv6hdr *iph, struct tcphdr *th, __u32 th_len) = (void *) 205;
|
|
||||||
|
/*
 * bpf_tcp_raw_check_syncookie_ipv4
 *
 * Check whether *iph* and *th* contain a valid SYN cookie ACK
 * without depending on a listening socket.
 *
 * *iph* points to the IPv4 header.
 *
 * *th* points to the TCP header.
 *
 * Returns
 * 0 if *iph* and *th* are a valid SYN cookie ACK.
 *
 * On failure, the returned value is one of the following:
 *
 * **-EACCES** if the SYN cookie is not valid.
 */
static long (*bpf_tcp_raw_check_syncookie_ipv4)(struct iphdr *iph, struct tcphdr *th) = (void *) 206;

/*
 * bpf_tcp_raw_check_syncookie_ipv6
 *
 * Check whether *iph* and *th* contain a valid SYN cookie ACK
 * without depending on a listening socket.
 *
 * *iph* points to the IPv6 header.
 *
 * *th* points to the TCP header.
 *
 * Returns
 * 0 if *iph* and *th* are a valid SYN cookie ACK.
 *
 * On failure, the returned value is one of the following:
 *
 * **-EACCES** if the SYN cookie is not valid.
 *
 * **-EPROTONOSUPPORT** if CONFIG_IPV6 is not builtin.
 */
static long (*bpf_tcp_raw_check_syncookie_ipv6)(struct ipv6hdr *iph, struct tcphdr *th) = (void *) 207;

/*
 * bpf_ktime_get_tai_ns
 *
 * A nonsettable system-wide clock derived from wall-clock time but
 * ignoring leap seconds. This clock does not experience
 * discontinuities and backwards jumps caused by NTP inserting leap
 * seconds as CLOCK_REALTIME does.
 *
 * See: **clock_gettime**\ (**CLOCK_TAI**)
 *
 * Returns
 * Current *ktime*.
 */
static __u64 (*bpf_ktime_get_tai_ns)(void) = (void *) 208;
/*
 * bpf_user_ringbuf_drain
 *
 * Drain samples from the specified user ring buffer, and invoke
 * the provided callback for each such sample:
 *
 * long (\*callback_fn)(const struct bpf_dynptr \*dynptr, void \*ctx);
 *
 * If **callback_fn** returns 0, the helper will continue to try
 * and drain the next sample, up to a maximum of
 * BPF_MAX_USER_RINGBUF_SAMPLES samples. If the return value is 1,
 * the helper will skip the rest of the samples and return. Other
 * return values are not used now, and will be rejected by the
 * verifier.
 *
 * Returns
 * The number of drained samples if no error was encountered while
 * draining samples, or 0 if no samples were present in the ring
 * buffer. If a user-space producer was epoll-waiting on this map,
 * and at least one sample was drained, they will receive an event
 * notification notifying them of available space in the ring
 * buffer. If the BPF_RB_NO_WAKEUP flag is passed to this
 * function, no wakeup notification will be sent. If the
 * BPF_RB_FORCE_WAKEUP flag is passed, a wakeup notification will
 * be sent even if no sample was drained.
 *
 * On failure, the returned value is one of the following:
 *
 * **-EBUSY** if the ring buffer is contended, and another calling
 * context was concurrently draining the ring buffer.
 *
 * **-EINVAL** if user-space is not properly tracking the ring
 * buffer due to the producer position not being aligned to 8
 * bytes, a sample not being aligned to 8 bytes, or the producer
 * position not matching the advertised length of a sample.
 *
 * **-E2BIG** if user-space has tried to publish a sample which is
 * larger than the size of the ring buffer, or which cannot fit
 * within a struct bpf_dynptr.
 */
static long (*bpf_user_ringbuf_drain)(void *map, void *callback_fn, void *ctx, __u64 flags) = (void *) 209;

/*
 * bpf_cgrp_storage_get
 *
 * Get a bpf_local_storage from the *cgroup*.
 *
 * Logically, it could be thought of as getting the value from
 * a *map* with *cgroup* as the **key**. From this
 * perspective, the usage is not much different from
 * **bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
 * helper enforces the key must be a cgroup struct and the map must also
 * be a **BPF_MAP_TYPE_CGRP_STORAGE**.
 *
 * In reality, the local-storage value is embedded directly inside of the
 * *cgroup* object itself, rather than being located in the
 * **BPF_MAP_TYPE_CGRP_STORAGE** map. When the local-storage value is
 * queried for some *map* on a *cgroup* object, the kernel will perform an
 * O(n) iteration over all of the live local-storage values for that
 * *cgroup* object until the local-storage value for the *map* is found.
 *
 * An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
 * used such that a new bpf_local_storage will be
 * created if one does not exist. *value* can be used
 * together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
 * the initial value of a bpf_local_storage. If *value* is
 * **NULL**, the new bpf_local_storage will be zero initialized.
 *
 * Returns
 * A bpf_local_storage pointer is returned on success.
 *
 * **NULL** if not found or there was an error in adding
 * a new bpf_local_storage.
 */
static void *(*bpf_cgrp_storage_get)(void *map, struct cgroup *cgroup, void *value, __u64 flags) = (void *) 210;

/*
 * bpf_cgrp_storage_delete
 *
 * Delete a bpf_local_storage from a *cgroup*.
 *
 * Returns
 * 0 on success.
 *
 * **-ENOENT** if the bpf_local_storage cannot be found.
 */
static long (*bpf_cgrp_storage_delete)(void *map, struct cgroup *cgroup) = (void *) 211;
src/bpf_helpers.h
@@ -22,12 +22,25 @@
  * To allow use of SEC() with externs (e.g., for extern .maps declarations),
  * make sure __attribute__((unused)) doesn't trigger compilation warning.
  */
+#if __GNUC__ && !__clang__
+
+/*
+ * Pragma macros are broken on GCC
+ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55578
+ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90400
+ */
+#define SEC(name) __attribute__((section(name), used))
+
+#else
+
 #define SEC(name) \
 	_Pragma("GCC diagnostic push") \
 	_Pragma("GCC diagnostic ignored \"-Wignored-attributes\"") \
 	__attribute__((section(name), used)) \
 	_Pragma("GCC diagnostic pop") \
+
+#endif
 
 /* Avoid 'linux/stddef.h' definition of '__always_inline'. */
 #undef __always_inline
 #define __always_inline inline __attribute__((always_inline))
@@ -75,6 +88,30 @@
 })
 #endif
 
+/*
+ * Compiler (optimization) barrier.
+ */
+#ifndef barrier
+#define barrier() asm volatile("" ::: "memory")
+#endif
+
+/* Variable-specific compiler (optimization) barrier. It's a no-op which makes
+ * compiler believe that there is some black box modification of a given
+ * variable and thus prevents compiler from making extra assumption about its
+ * value and potential simplifications and optimizations on this variable.
+ *
+ * E.g., compiler might often delay or even omit 32-bit to 64-bit casting of
+ * a variable, making some code patterns unverifiable. Putting barrier_var()
+ * in place will ensure that cast is performed before the barrier_var()
+ * invocation, because compiler has to pessimistically assume that embedded
+ * asm section might perform some extra operations on that variable.
+ *
+ * This is a variable-specific variant of more global barrier().
+ */
+#ifndef barrier_var
+#define barrier_var(var) asm volatile("" : "=r"(var) : "0"(var))
+#endif
+
 /*
  * Helper macro to throw a compilation error if __bpf_unreachable() gets
  * built into the resulting code. This works given BPF back end does not
@@ -123,18 +160,6 @@ bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
 }
 #endif
 
-/*
- * Helper structure used by eBPF C program
- * to describe BPF map attributes to libbpf loader
- */
-struct bpf_map_def {
-	unsigned int type;
-	unsigned int key_size;
-	unsigned int value_size;
-	unsigned int max_entries;
-	unsigned int map_flags;
-} __attribute__((deprecated("use BTF-defined maps in .maps section")));
-
 enum libbpf_pin_type {
 	LIBBPF_PIN_NONE,
 	/* PIN_BY_NAME: pin maps by name (in /sys/fs/bpf by default) */
@@ -149,6 +174,8 @@ enum libbpf_tristate {
 
 #define __kconfig __attribute__((section(".kconfig")))
 #define __ksym __attribute__((section(".ksyms")))
+#define __kptr __attribute__((btf_type_tag("kptr")))
+#define __kptr_ref __attribute__((btf_type_tag("kptr_ref")))
 
 #ifndef ___bpf_concat
 #define ___bpf_concat(a, b) a ## b
src/bpf_tracing.h
@@ -2,6 +2,8 @@
 #ifndef __BPF_TRACING_H__
 #define __BPF_TRACING_H__
 
+#include "bpf_helpers.h"
+
 /* Scan the ARCH passed in from ARCH env variable (see Makefile) */
 #if defined(__TARGET_ARCH_x86)
 #define bpf_target_x86
@@ -27,6 +29,9 @@
 #elif defined(__TARGET_ARCH_riscv)
 #define bpf_target_riscv
 #define bpf_target_defined
+#elif defined(__TARGET_ARCH_arc)
+#define bpf_target_arc
+#define bpf_target_defined
 #else
 
 /* Fall back to what the compiler says */
@@ -54,6 +59,9 @@
 #elif defined(__riscv) && __riscv_xlen == 64
 #define bpf_target_riscv
 #define bpf_target_defined
+#elif defined(__arc__)
+#define bpf_target_arc
+#define bpf_target_defined
 #endif /* no compiler target */
 
 #endif
@@ -134,7 +142,7 @@ struct pt_regs___s390 {
 #define __PT_RC_REG gprs[2]
 #define __PT_SP_REG gprs[15]
 #define __PT_IP_REG psw.addr
-#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; })
+#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1_CORE_SYSCALL(x)
 #define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___s390 *)(x), orig_gpr2)
 
 #elif defined(bpf_target_arm)
@@ -169,7 +177,7 @@ struct pt_regs___arm64 {
 #define __PT_RC_REG regs[0]
 #define __PT_SP_REG sp
 #define __PT_IP_REG pc
-#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; })
+#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1_CORE_SYSCALL(x)
 #define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___arm64 *)(x), orig_x0)
 
 #elif defined(bpf_target_mips)
@@ -229,12 +237,29 @@ struct pt_regs___arm64 {
 #define __PT_PARM6_REG a5
 #define __PT_RET_REG ra
 #define __PT_FP_REG s0
-#define __PT_RC_REG a5
+#define __PT_RC_REG a0
 #define __PT_SP_REG sp
 #define __PT_IP_REG pc
 /* riscv does not select ARCH_HAS_SYSCALL_WRAPPER. */
 #define PT_REGS_SYSCALL_REGS(ctx) ctx
 
+#elif defined(bpf_target_arc)
+
+/* arc provides struct user_pt_regs instead of struct pt_regs to userspace */
+#define __PT_REGS_CAST(x) ((const struct user_regs_struct *)(x))
+#define __PT_PARM1_REG scratch.r0
+#define __PT_PARM2_REG scratch.r1
+#define __PT_PARM3_REG scratch.r2
+#define __PT_PARM4_REG scratch.r3
+#define __PT_PARM5_REG scratch.r4
+#define __PT_RET_REG scratch.blink
+#define __PT_FP_REG __unsupported__
+#define __PT_RC_REG scratch.r0
+#define __PT_SP_REG scratch.sp
+#define __PT_IP_REG scratch.ret
+/* arc does not select ARCH_HAS_SYSCALL_WRAPPER. */
+#define PT_REGS_SYSCALL_REGS(ctx) ctx
+
 #endif
 
 #if defined(bpf_target_defined)
@@ -405,7 +430,7 @@ struct pt_regs;
  */
 #define BPF_PROG(name, args...) \
 name(unsigned long long *ctx); \
-static __attribute__((always_inline)) typeof(name(0)) \
+static __always_inline typeof(name(0)) \
 ____##name(unsigned long long *ctx, ##args); \
 typeof(name(0)) name(unsigned long long *ctx) \
 { \
@@ -414,9 +439,116 @@ typeof(name(0)) name(unsigned long long *ctx) \
 	return ____##name(___bpf_ctx_cast(args)); \
 	_Pragma("GCC diagnostic pop") \
 } \
-static __attribute__((always_inline)) typeof(name(0)) \
+static __always_inline typeof(name(0)) \
 ____##name(unsigned long long *ctx, ##args)
 
+#ifndef ___bpf_nth2
+#define ___bpf_nth2(_, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, \
+		    _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, N, ...) N
+#endif
+#ifndef ___bpf_narg2
+#define ___bpf_narg2(...) \
+	___bpf_nth2(_, ##__VA_ARGS__, 12, 12, 11, 11, 10, 10, 9, 9, 8, 8, 7, 7, \
+		    6, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 0)
+#endif
+
+#define ___bpf_treg_cnt(t) \
+	__builtin_choose_expr(sizeof(t) == 1, 1, \
+	__builtin_choose_expr(sizeof(t) == 2, 1, \
+	__builtin_choose_expr(sizeof(t) == 4, 1, \
+	__builtin_choose_expr(sizeof(t) == 8, 1, \
+	__builtin_choose_expr(sizeof(t) == 16, 2, \
+			      (void)0)))))
+
+#define ___bpf_reg_cnt0() (0)
+#define ___bpf_reg_cnt1(t, x) (___bpf_reg_cnt0() + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt2(t, x, args...) (___bpf_reg_cnt1(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt3(t, x, args...) (___bpf_reg_cnt2(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt4(t, x, args...) (___bpf_reg_cnt3(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt5(t, x, args...) (___bpf_reg_cnt4(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt6(t, x, args...) (___bpf_reg_cnt5(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt7(t, x, args...) (___bpf_reg_cnt6(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt8(t, x, args...) (___bpf_reg_cnt7(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt9(t, x, args...) (___bpf_reg_cnt8(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt10(t, x, args...) (___bpf_reg_cnt9(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt11(t, x, args...) (___bpf_reg_cnt10(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt12(t, x, args...) (___bpf_reg_cnt11(args) + ___bpf_treg_cnt(t))
+#define ___bpf_reg_cnt(args...) ___bpf_apply(___bpf_reg_cnt, ___bpf_narg2(args))(args)
+
+#define ___bpf_union_arg(t, x, n) \
+	__builtin_choose_expr(sizeof(t) == 1, ({ union { __u8 z[1]; t x; } ___t = { .z = {ctx[n]}}; ___t.x; }), \
+	__builtin_choose_expr(sizeof(t) == 2, ({ union { __u16 z[1]; t x; } ___t = { .z = {ctx[n]} }; ___t.x; }), \
+	__builtin_choose_expr(sizeof(t) == 4, ({ union { __u32 z[1]; t x; } ___t = { .z = {ctx[n]} }; ___t.x; }), \
+	__builtin_choose_expr(sizeof(t) == 8, ({ union { __u64 z[1]; t x; } ___t = {.z = {ctx[n]} }; ___t.x; }), \
+	__builtin_choose_expr(sizeof(t) == 16, ({ union { __u64 z[2]; t x; } ___t = {.z = {ctx[n], ctx[n + 1]} }; ___t.x; }), \
+			      (void)0)))))
+
+#define ___bpf_ctx_arg0(n, args...)
+#define ___bpf_ctx_arg1(n, t, x) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt1(t, x))
+#define ___bpf_ctx_arg2(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt2(t, x, args)) ___bpf_ctx_arg1(n, args)
+#define ___bpf_ctx_arg3(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt3(t, x, args)) ___bpf_ctx_arg2(n, args)
+#define ___bpf_ctx_arg4(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt4(t, x, args)) ___bpf_ctx_arg3(n, args)
+#define ___bpf_ctx_arg5(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt5(t, x, args)) ___bpf_ctx_arg4(n, args)
+#define ___bpf_ctx_arg6(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt6(t, x, args)) ___bpf_ctx_arg5(n, args)
+#define ___bpf_ctx_arg7(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt7(t, x, args)) ___bpf_ctx_arg6(n, args)
+#define ___bpf_ctx_arg8(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt8(t, x, args)) ___bpf_ctx_arg7(n, args)
+#define ___bpf_ctx_arg9(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt9(t, x, args)) ___bpf_ctx_arg8(n, args)
+#define ___bpf_ctx_arg10(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt10(t, x, args)) ___bpf_ctx_arg9(n, args)
+#define ___bpf_ctx_arg11(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt11(t, x, args)) ___bpf_ctx_arg10(n, args)
+#define ___bpf_ctx_arg12(n, t, x, args...) , ___bpf_union_arg(t, x, n - ___bpf_reg_cnt12(t, x, args)) ___bpf_ctx_arg11(n, args)
+#define ___bpf_ctx_arg(args...) ___bpf_apply(___bpf_ctx_arg, ___bpf_narg2(args))(___bpf_reg_cnt(args), args)
+
+#define ___bpf_ctx_decl0()
+#define ___bpf_ctx_decl1(t, x) , t x
+#define ___bpf_ctx_decl2(t, x, args...) , t x ___bpf_ctx_decl1(args)
+#define ___bpf_ctx_decl3(t, x, args...) , t x ___bpf_ctx_decl2(args)
+#define ___bpf_ctx_decl4(t, x, args...) , t x ___bpf_ctx_decl3(args)
+#define ___bpf_ctx_decl5(t, x, args...) , t x ___bpf_ctx_decl4(args)
+#define ___bpf_ctx_decl6(t, x, args...) , t x ___bpf_ctx_decl5(args)
+#define ___bpf_ctx_decl7(t, x, args...) , t x ___bpf_ctx_decl6(args)
+#define ___bpf_ctx_decl8(t, x, args...) , t x ___bpf_ctx_decl7(args)
+#define ___bpf_ctx_decl9(t, x, args...) , t x ___bpf_ctx_decl8(args)
+#define ___bpf_ctx_decl10(t, x, args...) , t x ___bpf_ctx_decl9(args)
+#define ___bpf_ctx_decl11(t, x, args...) , t x ___bpf_ctx_decl10(args)
+#define ___bpf_ctx_decl12(t, x, args...) , t x ___bpf_ctx_decl11(args)
+#define ___bpf_ctx_decl(args...) ___bpf_apply(___bpf_ctx_decl, ___bpf_narg2(args))(args)
+
+/*
+ * BPF_PROG2 is an enhanced version of BPF_PROG in order to handle struct
+ * arguments. Since each struct argument might take one or two u64 values
+ * in the trampoline stack, argument type size is needed to place proper number
+ * of u64 values for each argument. Therefore, BPF_PROG2 has different
+ * syntax from BPF_PROG. For example, for the following BPF_PROG syntax:
+ *
+ *   int BPF_PROG(test2, int a, int b) { ... }
+ *
+ * the corresponding BPF_PROG2 syntax is:
+ *
+ *   int BPF_PROG2(test2, int, a, int, b) { ... }
+ *
+ * where type and the corresponding argument name are separated by comma.
+ *
+ * Use BPF_PROG2 macro if one of the arguments might be a struct/union larger
+ * than 8 bytes:
+ *
+ *   int BPF_PROG2(test_struct_arg, struct bpf_testmod_struct_arg_1, a, int, b,
+ *                 int, c, int, d, struct bpf_testmod_struct_arg_2, e, int, ret)
+ *   {
+ *        // access a, b, c, d, e, and ret directly
+ *        ...
+ *   }
+ */
+#define BPF_PROG2(name, args...) \
+name(unsigned long long *ctx); \
+static __always_inline typeof(name(0)) \
+____##name(unsigned long long *ctx ___bpf_ctx_decl(args)); \
+typeof(name(0)) name(unsigned long long *ctx) \
+{ \
+	return ____##name(ctx ___bpf_ctx_arg(args)); \
+} \
+static __always_inline typeof(name(0)) \
+____##name(unsigned long long *ctx ___bpf_ctx_decl(args))
+
 struct pt_regs;
 
 #define ___bpf_kprobe_args0() ctx
@@ -440,7 +572,7 @@ struct pt_regs;
  */
 #define BPF_KPROBE(name, args...) \
 name(struct pt_regs *ctx); \
-static __attribute__((always_inline)) typeof(name(0)) \
+static __always_inline typeof(name(0)) \
 ____##name(struct pt_regs *ctx, ##args); \
 typeof(name(0)) name(struct pt_regs *ctx) \
 { \
@@ -449,7 +581,7 @@ typeof(name(0)) name(struct pt_regs *ctx) \
 	return ____##name(___bpf_kprobe_args(args)); \
 	_Pragma("GCC diagnostic pop") \
 } \
-static __attribute__((always_inline)) typeof(name(0)) \
+static __always_inline typeof(name(0)) \
 ____##name(struct pt_regs *ctx, ##args)
 
 #define ___bpf_kretprobe_args0() ctx
@@ -464,7 +596,7 @@ ____##name(struct pt_regs *ctx, ##args)
  */
 #define BPF_KRETPROBE(name, args...) \
 name(struct pt_regs *ctx); \
-static __attribute__((always_inline)) typeof(name(0)) \
+static __always_inline typeof(name(0)) \
 ____##name(struct pt_regs *ctx, ##args); \
 typeof(name(0)) name(struct pt_regs *ctx) \
 { \
@@ -475,39 +607,69 @@ typeof(name(0)) name(struct pt_regs *ctx) \
 } \
 static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
 
+/* If kernel has CONFIG_ARCH_HAS_SYSCALL_WRAPPER, read pt_regs directly */
 #define ___bpf_syscall_args0() ctx
-#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (void *)PT_REGS_PARM1_CORE_SYSCALL(regs)
-#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (void *)PT_REGS_PARM2_CORE_SYSCALL(regs)
-#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (void *)PT_REGS_PARM3_CORE_SYSCALL(regs)
-#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (void *)PT_REGS_PARM4_CORE_SYSCALL(regs)
-#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (void *)PT_REGS_PARM5_CORE_SYSCALL(regs)
+#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (void *)PT_REGS_PARM1_SYSCALL(regs)
+#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (void *)PT_REGS_PARM2_SYSCALL(regs)
+#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (void *)PT_REGS_PARM3_SYSCALL(regs)
+#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (void *)PT_REGS_PARM4_SYSCALL(regs)
+#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (void *)PT_REGS_PARM5_SYSCALL(regs)
 #define ___bpf_syscall_args(args...) ___bpf_apply(___bpf_syscall_args, ___bpf_narg(args))(args)
 
+/* If kernel doesn't have CONFIG_ARCH_HAS_SYSCALL_WRAPPER, we have to BPF_CORE_READ from pt_regs */
+#define ___bpf_syswrap_args0() ctx
+#define ___bpf_syswrap_args1(x) ___bpf_syswrap_args0(), (void *)PT_REGS_PARM1_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args2(x, args...) ___bpf_syswrap_args1(args), (void *)PT_REGS_PARM2_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args3(x, args...) ___bpf_syswrap_args2(args), (void *)PT_REGS_PARM3_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args4(x, args...) ___bpf_syswrap_args3(args), (void *)PT_REGS_PARM4_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args5(x, args...) ___bpf_syswrap_args4(args), (void *)PT_REGS_PARM5_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args(args...) ___bpf_apply(___bpf_syswrap_args, ___bpf_narg(args))(args)
+
 /*
- * BPF_KPROBE_SYSCALL is a variant of BPF_KPROBE, which is intended for
+ * BPF_KSYSCALL is a variant of BPF_KPROBE, which is intended for
  * tracing syscall functions, like __x64_sys_close. It hides the underlying
  * platform-specific low-level way of getting syscall input arguments from
  * struct pt_regs, and provides a familiar typed and named function arguments
  * syntax and semantics of accessing syscall input parameters.
  *
- * Original struct pt_regs* context is preserved as 'ctx' argument. This might
+ * Original struct pt_regs * context is preserved as 'ctx' argument. This might
  * be necessary when using BPF helpers like bpf_perf_event_output().
  *
- * This macro relies on BPF CO-RE support.
+ * At the moment BPF_KSYSCALL does not transparently handle all the calling
+ * convention quirks for the following syscalls:
+ *
+ * - mmap(): __ARCH_WANT_SYS_OLD_MMAP.
+ * - clone(): CONFIG_CLONE_BACKWARDS, CONFIG_CLONE_BACKWARDS2 and
+ *   CONFIG_CLONE_BACKWARDS3.
+ * - socket-related syscalls: __ARCH_WANT_SYS_SOCKETCALL.
+ * - compat syscalls.
+ *
+ * This may or may not change in the future. User needs to take extra measures
+ * to handle such quirks explicitly, if necessary.
+ *
+ * This macro relies on BPF CO-RE support and virtual __kconfig externs.
  */
-#define BPF_KPROBE_SYSCALL(name, args...) \
+#define BPF_KSYSCALL(name, args...) \
 name(struct pt_regs *ctx); \
-static __attribute__((always_inline)) typeof(name(0)) \
+extern _Bool LINUX_HAS_SYSCALL_WRAPPER __kconfig; \
+static __always_inline typeof(name(0)) \
 ____##name(struct pt_regs *ctx, ##args); \
 typeof(name(0)) name(struct pt_regs *ctx) \
 { \
-	struct pt_regs *regs = PT_REGS_SYSCALL_REGS(ctx); \
+	struct pt_regs *regs = LINUX_HAS_SYSCALL_WRAPPER \
+			       ? (struct pt_regs *)PT_REGS_PARM1(ctx) \
+			       : ctx; \
 	_Pragma("GCC diagnostic push") \
 	_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
-	return ____##name(___bpf_syscall_args(args)); \
+	if (LINUX_HAS_SYSCALL_WRAPPER) \
+		return ____##name(___bpf_syswrap_args(args)); \
+	else \
+		return ____##name(___bpf_syscall_args(args)); \
 	_Pragma("GCC diagnostic pop") \
 } \
-static __attribute__((always_inline)) typeof(name(0)) \
+static __always_inline typeof(name(0)) \
 ____##name(struct pt_regs *ctx, ##args)
+
+#define BPF_KPROBE_SYSCALL BPF_KSYSCALL
 
 #endif
src/btf.h (142 changed lines)

@@ -116,24 +116,15 @@ LIBBPF_API struct btf *btf__parse_raw_split(const char *path, struct btf *base_b
 
 LIBBPF_API struct btf *btf__load_vmlinux_btf(void);
 LIBBPF_API struct btf *btf__load_module_btf(const char *module_name, struct btf *vmlinux_btf);
-LIBBPF_API struct btf *libbpf_find_kernel_btf(void);
 
 LIBBPF_API struct btf *btf__load_from_kernel_by_id(__u32 id);
 LIBBPF_API struct btf *btf__load_from_kernel_by_id_split(__u32 id, struct btf *base_btf);
-LIBBPF_DEPRECATED_SINCE(0, 6, "use btf__load_from_kernel_by_id instead")
-LIBBPF_API int btf__get_from_id(__u32 id, struct btf **btf);
 
-LIBBPF_DEPRECATED_SINCE(0, 6, "intended for internal libbpf use only")
-LIBBPF_API int btf__finalize_data(struct bpf_object *obj, struct btf *btf);
-LIBBPF_DEPRECATED_SINCE(0, 6, "use btf__load_into_kernel instead")
-LIBBPF_API int btf__load(struct btf *btf);
 LIBBPF_API int btf__load_into_kernel(struct btf *btf);
 LIBBPF_API __s32 btf__find_by_name(const struct btf *btf,
 				   const char *type_name);
 LIBBPF_API __s32 btf__find_by_name_kind(const struct btf *btf,
 					const char *type_name, __u32 kind);
-LIBBPF_DEPRECATED_SINCE(0, 7, "use btf__type_cnt() instead; note that btf__get_nr_types() == btf__type_cnt() - 1")
-LIBBPF_API __u32 btf__get_nr_types(const struct btf *btf);
 LIBBPF_API __u32 btf__type_cnt(const struct btf *btf);
 LIBBPF_API const struct btf *btf__base_btf(const struct btf *btf);
 LIBBPF_API const struct btf_type *btf__type_by_id(const struct btf *btf,
@@ -150,29 +141,10 @@ LIBBPF_API void btf__set_fd(struct btf *btf, int fd);
 LIBBPF_API const void *btf__raw_data(const struct btf *btf, __u32 *size);
 LIBBPF_API const char *btf__name_by_offset(const struct btf *btf, __u32 offset);
 LIBBPF_API const char *btf__str_by_offset(const struct btf *btf, __u32 offset);
-LIBBPF_DEPRECATED_SINCE(0, 7, "this API is not necessary when BTF-defined maps are used")
-LIBBPF_API int btf__get_map_kv_tids(const struct btf *btf, const char *map_name,
-				    __u32 expected_key_size,
-				    __u32 expected_value_size,
-				    __u32 *key_type_id, __u32 *value_type_id);
 
 LIBBPF_API struct btf_ext *btf_ext__new(const __u8 *data, __u32 size);
 LIBBPF_API void btf_ext__free(struct btf_ext *btf_ext);
 LIBBPF_API const void *btf_ext__raw_data(const struct btf_ext *btf_ext, __u32 *size);
-LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions")
-int btf_ext__reloc_func_info(const struct btf *btf,
-			     const struct btf_ext *btf_ext,
-			     const char *sec_name, __u32 insns_cnt,
-			     void **func_info, __u32 *cnt);
-LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_line_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions")
-int btf_ext__reloc_line_info(const struct btf *btf,
-			     const struct btf_ext *btf_ext,
-			     const char *sec_name, __u32 insns_cnt,
-			     void **line_info, __u32 *cnt);
-LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info is deprecated; write custom func_info parsing to fetch rec_size")
-__u32 btf_ext__func_info_rec_size(const struct btf_ext *btf_ext);
-LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_line_info is deprecated; write custom line_info parsing to fetch rec_size")
-__u32 btf_ext__line_info_rec_size(const struct btf_ext *btf_ext);
 
 LIBBPF_API int btf__find_str(struct btf *btf, const char *s);
 LIBBPF_API int btf__add_str(struct btf *btf, const char *s);
@@ -215,6 +187,8 @@ LIBBPF_API int btf__add_field(struct btf *btf, const char *name, int field_type_
 /* enum construction APIs */
 LIBBPF_API int btf__add_enum(struct btf *btf, const char *name, __u32 bytes_sz);
 LIBBPF_API int btf__add_enum_value(struct btf *btf, const char *name, __s64 value);
+LIBBPF_API int btf__add_enum64(struct btf *btf, const char *name, __u32 bytes_sz, bool is_signed);
+LIBBPF_API int btf__add_enum64_value(struct btf *btf, const char *name, __u64 value);
 
 enum btf_fwd_kind {
 	BTF_FWD_STRUCT = 0,
@@ -257,22 +231,12 @@ struct btf_dedup_opts {
 
 LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
 
-LIBBPF_API int btf__dedup_v0_6_0(struct btf *btf, const struct btf_dedup_opts *opts);
-
-LIBBPF_DEPRECATED_SINCE(0, 7, "use btf__dedup() instead")
-LIBBPF_API int btf__dedup_deprecated(struct btf *btf, struct btf_ext *btf_ext, const void *opts);
-#define btf__dedup(...) ___libbpf_overload(___btf_dedup, __VA_ARGS__)
-#define ___btf_dedup3(btf, btf_ext, opts) btf__dedup_deprecated(btf, btf_ext, opts)
-#define ___btf_dedup2(btf, opts) btf__dedup(btf, opts)
-
 struct btf_dump;
 
 struct btf_dump_opts {
-	union {
-		size_t sz;
-		void *ctx; /* DEPRECATED: will be gone in v1.0 */
-	};
+	size_t sz;
 };
+#define btf_dump_opts__last_field sz
 
 typedef void (*btf_dump_printf_fn_t)(void *ctx, const char *fmt, va_list args);
 
@@ -281,51 +245,6 @@ LIBBPF_API struct btf_dump *btf_dump__new(const struct btf *btf,
 					  void *ctx,
 					  const struct btf_dump_opts *opts);
 
-LIBBPF_API struct btf_dump *btf_dump__new_v0_6_0(const struct btf *btf,
-						 btf_dump_printf_fn_t printf_fn,
-						 void *ctx,
-						 const struct btf_dump_opts *opts);
-
-LIBBPF_API struct btf_dump *btf_dump__new_deprecated(const struct btf *btf,
-						     const struct btf_ext *btf_ext,
-						     const struct btf_dump_opts *opts,
-						     btf_dump_printf_fn_t printf_fn);
-
-/* Choose either btf_dump__new() or btf_dump__new_deprecated() based on the
- * type of 4th argument. If it's btf_dump's print callback, use deprecated
- * API; otherwise, choose the new btf_dump__new(). ___libbpf_override()
- * doesn't work here because both variants have 4 input arguments.
- *
- * (void *) casts are necessary to avoid compilation warnings about type
- * mismatches, because even though __builtin_choose_expr() only ever evaluates
- * one side the other side still has to satisfy type constraints (this is
- * compiler implementation limitation which might be lifted eventually,
- * according to the documentation). So passing struct btf_ext in place of
- * btf_dump_printf_fn_t would be generating compilation warning. Casting to
- * void * avoids this issue.
- *
- * Also, two type compatibility checks for a function and function pointer are
- * required because passing function reference into btf_dump__new() as
- * btf_dump__new(..., my_callback, ...) and as btf_dump__new(...,
- * &my_callback, ...) (not explicit ampersand in the latter case) actually
- * differs as far as __builtin_types_compatible_p() is concerned. Thus two
- * checks are combined to detect callback argument.
- *
- * The rest works just like in case of ___libbpf_override() usage with symbol
- * versioning.
- *
- * C++ compilers don't support __builtin_types_compatible_p(), so at least
- * don't screw up compilation for them and let C++ users pick btf_dump__new
- * vs btf_dump__new_deprecated explicitly.
- */
-#ifndef __cplusplus
-#define btf_dump__new(a1, a2, a3, a4) __builtin_choose_expr( \
-	__builtin_types_compatible_p(typeof(a4), btf_dump_printf_fn_t) || \
-	__builtin_types_compatible_p(typeof(a4), void(void *, const char *, va_list)), \
-	btf_dump__new_deprecated((void *)a1, (void *)a2, (void *)a3, (void *)a4), \
-	btf_dump__new((void *)a1, (void *)a2, (void *)a3, (void *)a4))
-#endif
-
 LIBBPF_API void btf_dump__free(struct btf_dump *d);
 
 LIBBPF_API int btf_dump__dump_type(struct btf_dump *d, __u32 id);
@@ -393,9 +312,10 @@ btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
 #ifndef BTF_KIND_FLOAT
 #define BTF_KIND_FLOAT		16	/* Floating point */
 #endif
-/* The kernel header switched to enums, so these two were never #defined */
+/* The kernel header switched to enums, so the following were never #defined */
 #define BTF_KIND_DECL_TAG	17	/* Decl Tag */
 #define BTF_KIND_TYPE_TAG	18	/* Type Tag */
+#define BTF_KIND_ENUM64		19	/* Enum for up-to 64bit values */
 
 static inline __u16 btf_kind(const struct btf_type *t)
 {
@@ -454,6 +374,11 @@ static inline bool btf_is_enum(const struct btf_type *t)
 	return btf_kind(t) == BTF_KIND_ENUM;
 }
 
+static inline bool btf_is_enum64(const struct btf_type *t)
+{
+	return btf_kind(t) == BTF_KIND_ENUM64;
+}
+
 static inline bool btf_is_fwd(const struct btf_type *t)
 {
 	return btf_kind(t) == BTF_KIND_FWD;
@@ -524,6 +449,18 @@ static inline bool btf_is_type_tag(const struct btf_type *t)
 	return btf_kind(t) == BTF_KIND_TYPE_TAG;
 }
 
+static inline bool btf_is_any_enum(const struct btf_type *t)
+{
+	return btf_is_enum(t) || btf_is_enum64(t);
+}
+
+static inline bool btf_kind_core_compat(const struct btf_type *t1,
+					const struct btf_type *t2)
+{
+	return btf_kind(t1) == btf_kind(t2) ||
+	       (btf_is_any_enum(t1) && btf_is_any_enum(t2));
+}
+
 static inline __u8 btf_int_encoding(const struct btf_type *t)
 {
 	return BTF_INT_ENCODING(*(__u32 *)(t + 1));
@@ -549,6 +486,39 @@ static inline struct btf_enum *btf_enum(const struct btf_type *t)
 	return (struct btf_enum *)(t + 1);
 }
 
+struct btf_enum64;
+
+static inline struct btf_enum64 *btf_enum64(const struct btf_type *t)
+{
+	return (struct btf_enum64 *)(t + 1);
+}
+
+static inline __u64 btf_enum64_value(const struct btf_enum64 *e)
+{
+	/* struct btf_enum64 is introduced in Linux 6.0, which is very
+	 * bleeding-edge. Here we are avoiding relying on struct btf_enum64
+	 * definition coming from kernel UAPI headers to support wider range
+	 * of system-wide kernel headers.
+	 *
+	 * Given this header can be also included from C++ applications, that
+	 * further restricts C tricks we can use (like using compatible
+	 * anonymous struct). So just treat struct btf_enum64 as
+	 * a three-element array of u32 and access second (lo32) and third
+	 * (hi32) elements directly.
+	 *
+	 * For reference, here is a struct btf_enum64 definition:
+	 *
+	 * const struct btf_enum64 {
+	 *	__u32	name_off;
+	 *	__u32	val_lo32;
+	 *	__u32	val_hi32;
+	 * };
+	 */
+	const __u32 *e64 = (const __u32 *)e;
+
+	return ((__u64)e64[2] << 32) | e64[1];
+}
+
 static inline struct btf_member *btf_members(const struct btf_type *t)
 {
 	return (struct btf_member *)(t + 1);
src/btf_dump.c (402 changed lines)

@@ -13,6 +13,7 @@
 #include <ctype.h>
 #include <endian.h>
 #include <errno.h>
+#include <limits.h>
 #include <linux/err.h>
 #include <linux/btf.h>
 #include <linux/kernel.h>
@@ -117,14 +118,14 @@ struct btf_dump {
 	struct btf_dump_data *typed_dump;
 };
 
-static size_t str_hash_fn(const void *key, void *ctx)
+static size_t str_hash_fn(long key, void *ctx)
 {
-	return str_hash(key);
+	return str_hash((void *)key);
 }
 
-static bool str_equal_fn(const void *a, const void *b, void *ctx)
+static bool str_equal_fn(long a, long b, void *ctx)
 {
-	return strcmp(a, b) == 0;
+	return strcmp((void *)a, (void *)b) == 0;
 }
 
 static const char *btf_name_of(const struct btf_dump *d, __u32 name_off)
@@ -144,15 +145,17 @@ static void btf_dump_printf(const struct btf_dump *d, const char *fmt, ...)
 static int btf_dump_mark_referenced(struct btf_dump *d);
 static int btf_dump_resize(struct btf_dump *d);
 
-DEFAULT_VERSION(btf_dump__new_v0_6_0, btf_dump__new, LIBBPF_0.6.0)
-struct btf_dump *btf_dump__new_v0_6_0(const struct btf *btf,
-				      btf_dump_printf_fn_t printf_fn,
-				      void *ctx,
-				      const struct btf_dump_opts *opts)
+struct btf_dump *btf_dump__new(const struct btf *btf,
+			       btf_dump_printf_fn_t printf_fn,
+			       void *ctx,
+			       const struct btf_dump_opts *opts)
 {
 	struct btf_dump *d;
 	int err;
 
+	if (!OPTS_VALID(opts, btf_dump_opts))
+		return libbpf_err_ptr(-EINVAL);
+
 	if (!printf_fn)
 		return libbpf_err_ptr(-EINVAL);
 
@@ -188,17 +191,6 @@ err:
 	return libbpf_err_ptr(err);
 }
 
-COMPAT_VERSION(btf_dump__new_deprecated, btf_dump__new, LIBBPF_0.0.4)
-struct btf_dump *btf_dump__new_deprecated(const struct btf *btf,
-					  const struct btf_ext *btf_ext,
-					  const struct btf_dump_opts *opts,
-					  btf_dump_printf_fn_t printf_fn)
-{
-	if (!printf_fn)
-		return libbpf_err_ptr(-EINVAL);
-	return btf_dump__new_v0_6_0(btf, printf_fn, opts ? opts->ctx : NULL, opts);
-}
-
 static int btf_dump_resize(struct btf_dump *d)
 {
 	int err, last_id = btf__type_cnt(d->btf) - 1;
@@ -228,6 +220,17 @@ static int btf_dump_resize(struct btf_dump *d)
 	return 0;
 }
 
+static void btf_dump_free_names(struct hashmap *map)
+{
+	size_t bkt;
+	struct hashmap_entry *cur;
+
+	hashmap__for_each_entry(map, cur, bkt)
+		free((void *)cur->pkey);
+
+	hashmap__free(map);
+}
+
 void btf_dump__free(struct btf_dump *d)
 {
 	int i;
@@ -246,8 +249,8 @@ void btf_dump__free(struct btf_dump *d)
 	free(d->cached_names);
 	free(d->emit_queue);
 	free(d->decl_stack);
-	hashmap__free(d->type_names);
-	hashmap__free(d->ident_names);
+	btf_dump_free_names(d->type_names);
+	btf_dump_free_names(d->ident_names);
 
 	free(d);
 }
@@ -318,6 +321,7 @@ static int btf_dump_mark_referenced(struct btf_dump *d)
 		switch (btf_kind(t)) {
 		case BTF_KIND_INT:
 		case BTF_KIND_ENUM:
+		case BTF_KIND_ENUM64:
 		case BTF_KIND_FWD:
 		case BTF_KIND_FLOAT:
 			break;
@@ -538,6 +542,7 @@ static int btf_dump_order_type(struct btf_dump *d, __u32 id, bool through_ptr)
 		return 1;
 	}
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 	case BTF_KIND_FWD:
 		/*
 		 * non-anonymous or non-referenced enums are top-level
@@ -739,6 +744,7 @@ static void btf_dump_emit_type(struct btf_dump *d, __u32 id, __u32 cont_id)
 		tstate->emit_state = EMITTED;
 		break;
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 		if (top_level_def) {
 			btf_dump_emit_enum_def(d, id, t, 0);
 			btf_dump_printf(d, ";\n\n");
@@ -828,14 +834,9 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
 				 const struct btf_type *t)
 {
 	const struct btf_member *m;
-	int align, i, bit_sz;
+	int max_align = 1, align, i, bit_sz;
 	__u16 vlen;
 
-	align = btf__align_of(btf, id);
-	/* size of a non-packed struct has to be a multiple of its alignment*/
-	if (align && t->size % align)
-		return true;
-
 	m = btf_members(t);
 	vlen = btf_vlen(t);
 	/* all non-bitfield fields have to be naturally aligned */
@@ -844,8 +845,11 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
 		bit_sz = btf_member_bitfield_size(t, i);
 		if (align && bit_sz == 0 && m->offset % (8 * align) != 0)
 			return true;
+		max_align = max(align, max_align);
 	}
+	/* size of a non-packed struct has to be a multiple of its alignment */
+	if (t->size % max_align != 0)
+		return true;
 	/*
 	 * if original struct was marked as packed, but its layout is
 	 * naturally aligned, we'll detect that it's not packed
@@ -853,44 +857,97 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
 	return false;
 }
 
-static int chip_away_bits(int total, int at_most)
-{
-	return total % at_most ? : at_most;
-}
-
 static void btf_dump_emit_bit_padding(const struct btf_dump *d,
-				      int cur_off, int m_off, int m_bit_sz,
-				      int align, int lvl)
+				      int cur_off, int next_off, int next_align,
+				      bool in_bitfield, int lvl)
 {
-	int off_diff = m_off - cur_off;
-	int ptr_bits = d->ptr_sz * 8;
+	const struct {
+		const char *name;
+		int bits;
+	} pads[] = {
+		{"long", d->ptr_sz * 8}, {"int", 32}, {"short", 16}, {"char", 8}
+	};
+	int new_off, pad_bits, bits, i;
+	const char *pad_type;
 
-	if (off_diff <= 0)
-		/* no gap */
-		return;
-	if (m_bit_sz == 0 && off_diff < align * 8)
-		/* natural padding will take care of a gap */
-		return;
+	if (cur_off >= next_off)
+		return; /* no gap */
 
-	while (off_diff > 0) {
-		const char *pad_type;
-		int pad_bits;
-
-		if (ptr_bits > 32 && off_diff > 32) {
-			pad_type = "long";
-			pad_bits = chip_away_bits(off_diff, ptr_bits);
-		} else if (off_diff > 16) {
-			pad_type = "int";
-			pad_bits = chip_away_bits(off_diff, 32);
-		} else if (off_diff > 8) {
-			pad_type = "short";
-			pad_bits = chip_away_bits(off_diff, 16);
-		} else {
-			pad_type = "char";
-			pad_bits = chip_away_bits(off_diff, 8);
-		}
-		btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, pad_bits);
-		off_diff -= pad_bits;
+	/* For filling out padding we want to take advantage of
+	 * natural alignment rules to minimize unnecessary explicit
+	 * padding. First, we find the largest type (among long, int,
+	 * short, or char) that can be used to force naturally aligned
+	 * boundary. Once determined, we'll use such type to fill in
+	 * the remaining padding gap. In some cases we can rely on
+	 * compiler filling some gaps, but sometimes we need to force
+	 * alignment to close natural alignment with markers like
+	 * `long: 0` (this is always the case for bitfields). Note
+	 * that even if struct itself has, let's say 4-byte alignment
+	 * (i.e., it only uses up to int-aligned types), using `long:
+	 * X;` explicit padding doesn't actually change struct's
+	 * overall alignment requirements, but compiler does take into
+	 * account that type's (long, in this example) natural
+	 * alignment requirements when adding implicit padding. We use
+	 * this fact heavily and don't worry about ruining correct
+	 * struct alignment requirement.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pads); i++) {
+		pad_bits = pads[i].bits;
+		pad_type = pads[i].name;
+
+		new_off = roundup(cur_off, pad_bits);
+		if (new_off <= next_off)
+			break;
+	}
+
+	if (new_off > cur_off && new_off <= next_off) {
+		/* We need explicit `<type>: 0` aligning mark if next
+		 * field is right on alignment offset and its
+		 * alignment requirement is less strict than <type>'s
+		 * alignment (so compiler won't naturally align to the
+		 * offset we expect), or if subsequent `<type>: X`,
+		 * will actually completely fit in the remaining hole,
+		 * making compiler basically ignore `<type>: X`
+		 * completely.
+		 */
+		if (in_bitfield ||
+		    (new_off == next_off && roundup(cur_off, next_align * 8) != new_off) ||
+		    (new_off != next_off && next_off - new_off <= new_off - cur_off))
+			/* but for bitfields we'll emit explicit bit count */
+			btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type,
+					in_bitfield ? new_off - cur_off : 0);
+		cur_off = new_off;
+	}
+
+	/* Now we know we start at naturally aligned offset for a chosen
+	 * padding type (long, int, short, or char), and so the rest is just
+	 * a straightforward filling of remaining padding gap with full
+	 * `<type>: sizeof(<type>);` markers, except for the last one, which
+	 * might need smaller than sizeof(<type>) padding.
+	 */
+	while (cur_off != next_off) {
+		bits = min(next_off - cur_off, pad_bits);
+		if (bits == pad_bits) {
+			btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, pad_bits);
+			cur_off += bits;
+			continue;
+		}
+		/* For the remainder padding that doesn't cover entire
+		 * pad_type bit length, we pick the smallest necessary type.
+		 * This is pure aesthetics, we could have just used `long`,
+		 * but having smallest necessary one communicates better the
+		 * scale of the padding gap.
+		 */
+		for (i = ARRAY_SIZE(pads) - 1; i >= 0; i--) {
+			pad_type = pads[i].name;
+			pad_bits = pads[i].bits;
+			if (pad_bits < bits)
+				continue;
+
+			btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, bits);
+			cur_off += bits;
+			break;
+		}
 	}
 }
@@ -910,9 +967,11 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
 {
 	const struct btf_member *m = btf_members(t);
 	bool is_struct = btf_is_struct(t);
-	int align, i, packed, off = 0;
+	bool packed, prev_bitfield = false;
+	int align, i, off = 0;
 	__u16 vlen = btf_vlen(t);
 
+	align = btf__align_of(d->btf, id);
 	packed = is_struct ? btf_is_struct_packed(d->btf, id, t) : 0;
 
 	btf_dump_printf(d, "%s%s%s {",
@@ -922,37 +981,47 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
 
 	for (i = 0; i < vlen; i++, m++) {
 		const char *fname;
-		int m_off, m_sz;
+		int m_off, m_sz, m_align;
+		bool in_bitfield;
 
 		fname = btf_name_of(d, m->name_off);
 		m_sz = btf_member_bitfield_size(t, i);
 		m_off = btf_member_bit_offset(t, i);
-		align = packed ? 1 : btf__align_of(d->btf, m->type);
+		m_align = packed ? 1 : btf__align_of(d->btf, m->type);
 
-		btf_dump_emit_bit_padding(d, off, m_off, m_sz, align, lvl + 1);
+		in_bitfield = prev_bitfield && m_sz != 0;
+
+		btf_dump_emit_bit_padding(d, off, m_off, m_align, in_bitfield, lvl + 1);
 		btf_dump_printf(d, "\n%s", pfx(lvl + 1));
 		btf_dump_emit_type_decl(d, m->type, fname, lvl + 1);
 
 		if (m_sz) {
 			btf_dump_printf(d, ": %d", m_sz);
 			off = m_off + m_sz;
+			prev_bitfield = true;
 		} else {
 			m_sz = max((__s64)0, btf__resolve_size(d->btf, m->type));
 			off = m_off + m_sz * 8;
+			prev_bitfield = false;
 		}
+
 		btf_dump_printf(d, ";");
 	}
 
 	/* pad at the end, if necessary */
-	if (is_struct) {
-		align = packed ? 1 : btf__align_of(d->btf, id);
-		btf_dump_emit_bit_padding(d, off, t->size * 8, 0, align,
-					  lvl + 1);
-	}
+	if (is_struct)
+		btf_dump_emit_bit_padding(d, off, t->size * 8, align, false, lvl + 1);
 
-	if (vlen)
+	/*
+	 * Keep `struct empty {}` on a single line,
+	 * only print newline when there are regular or padding fields.
+	 */
+	if (vlen || t->size) {
 		btf_dump_printf(d, "\n");
-	btf_dump_printf(d, "%s}", pfx(lvl));
+		btf_dump_printf(d, "%s}", pfx(lvl));
+	} else {
+		btf_dump_printf(d, "}");
+	}
 	if (packed)
 		btf_dump_printf(d, " __attribute__((packed))");
 }
@@ -989,38 +1058,118 @@ static void btf_dump_emit_enum_fwd(struct btf_dump *d, __u32 id,
 	btf_dump_printf(d, "enum %s", btf_dump_type_name(d, id));
 }
 
+static void btf_dump_emit_enum32_val(struct btf_dump *d,
+				     const struct btf_type *t,
+				     int lvl, __u16 vlen)
+{
+	const struct btf_enum *v = btf_enum(t);
+	bool is_signed = btf_kflag(t);
+	const char *fmt_str;
+	const char *name;
+	size_t dup_cnt;
+	int i;
+
+	for (i = 0; i < vlen; i++, v++) {
+		name = btf_name_of(d, v->name_off);
+		/* enumerators share namespace with typedef idents */
+		dup_cnt = btf_dump_name_dups(d, d->ident_names, name);
+		if (dup_cnt > 1) {
+			fmt_str = is_signed ? "\n%s%s___%zd = %d," : "\n%s%s___%zd = %u,";
+			btf_dump_printf(d, fmt_str, pfx(lvl + 1), name, dup_cnt, v->val);
+		} else {
+			fmt_str = is_signed ? "\n%s%s = %d," : "\n%s%s = %u,";
+			btf_dump_printf(d, fmt_str, pfx(lvl + 1), name, v->val);
+		}
+	}
+}
+
|
static void btf_dump_emit_enum64_val(struct btf_dump *d,
|
||||||
|
const struct btf_type *t,
|
||||||
|
int lvl, __u16 vlen)
|
||||||
|
{
|
||||||
|
const struct btf_enum64 *v = btf_enum64(t);
|
||||||
|
bool is_signed = btf_kflag(t);
|
||||||
|
const char *fmt_str;
|
||||||
|
const char *name;
|
||||||
|
size_t dup_cnt;
|
||||||
|
__u64 val;
|
||||||
|
int i;
|
||||||
|
|
||||||
|
for (i = 0; i < vlen; i++, v++) {
|
||||||
|
name = btf_name_of(d, v->name_off);
|
||||||
|
dup_cnt = btf_dump_name_dups(d, d->ident_names, name);
|
||||||
|
val = btf_enum64_value(v);
|
||||||
|
if (dup_cnt > 1) {
|
||||||
|
fmt_str = is_signed ? "\n%s%s___%zd = %lldLL,"
|
||||||
|
: "\n%s%s___%zd = %lluULL,";
|
||||||
|
btf_dump_printf(d, fmt_str,
|
||||||
|
pfx(lvl + 1), name, dup_cnt,
|
||||||
|
(unsigned long long)val);
|
||||||
|
} else {
|
||||||
|
fmt_str = is_signed ? "\n%s%s = %lldLL,"
|
||||||
|
: "\n%s%s = %lluULL,";
|
||||||
|
btf_dump_printf(d, fmt_str,
|
||||||
|
pfx(lvl + 1), name,
|
||||||
|
(unsigned long long)val);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
static void btf_dump_emit_enum_def(struct btf_dump *d, __u32 id,
|
static void btf_dump_emit_enum_def(struct btf_dump *d, __u32 id,
|
||||||
const struct btf_type *t,
|
const struct btf_type *t,
|
||||||
int lvl)
|
int lvl)
|
||||||
{
|
{
|
||||||
const struct btf_enum *v = btf_enum(t);
|
|
||||||
__u16 vlen = btf_vlen(t);
|
__u16 vlen = btf_vlen(t);
|
||||||
const char *name;
|
|
||||||
size_t dup_cnt;
|
|
||||||
int i;
|
|
||||||
|
|
||||||
btf_dump_printf(d, "enum%s%s",
|
btf_dump_printf(d, "enum%s%s",
|
||||||
t->name_off ? " " : "",
|
t->name_off ? " " : "",
|
||||||
btf_dump_type_name(d, id));
|
btf_dump_type_name(d, id));
|
||||||
|
|
||||||
if (vlen) {
|
if (!vlen)
|
||||||
btf_dump_printf(d, " {");
|
return;
|
||||||
for (i = 0; i < vlen; i++, v++) {
|
|
||||||
name = btf_name_of(d, v->name_off);
|
btf_dump_printf(d, " {");
|
||||||
/* enumerators share namespace with typedef idents */
|
if (btf_is_enum(t))
|
||||||
dup_cnt = btf_dump_name_dups(d, d->ident_names, name);
|
btf_dump_emit_enum32_val(d, t, lvl, vlen);
|
||||||
if (dup_cnt > 1) {
|
else
|
||||||
btf_dump_printf(d, "\n%s%s___%zu = %u,",
|
btf_dump_emit_enum64_val(d, t, lvl, vlen);
|
||||||
pfx(lvl + 1), name, dup_cnt,
|
btf_dump_printf(d, "\n%s}", pfx(lvl));
|
||||||
(__u32)v->val);
|
|
||||||
} else {
|
/* special case enums with special sizes */
|
||||||
btf_dump_printf(d, "\n%s%s = %u,",
|
if (t->size == 1) {
|
||||||
pfx(lvl + 1), name,
|
/* one-byte enums can be forced with mode(byte) attribute */
|
||||||
(__u32)v->val);
|
btf_dump_printf(d, " __attribute__((mode(byte)))");
|
||||||
|
} else if (t->size == 8 && d->ptr_sz == 8) {
|
||||||
|
/* enum can be 8-byte sized if one of the enumerator values
|
||||||
|
* doesn't fit in 32-bit integer, or by adding mode(word)
|
||||||
|
* attribute (but probably only on 64-bit architectures); do
|
||||||
|
* our best here to try to satisfy the contract without adding
|
||||||
|
* unnecessary attributes
|
||||||
|
*/
|
||||||
|
bool needs_word_mode;
|
||||||
|
|
||||||
|
if (btf_is_enum(t)) {
|
||||||
|
/* enum can't represent 64-bit values, so we need word mode */
|
||||||
|
needs_word_mode = true;
|
||||||
|
} else {
|
||||||
|
/* enum64 needs mode(word) if none of its values has
|
||||||
|
* non-zero upper 32-bits (which means that all values
|
||||||
|
* fit in 32-bit integers and won't cause compiler to
|
||||||
|
* bump enum to be 64-bit naturally
|
||||||
|
*/
|
||||||
|
int i;
|
||||||
|
|
||||||
|
needs_word_mode = true;
|
||||||
|
for (i = 0; i < vlen; i++) {
|
||||||
|
if (btf_enum64(t)[i].val_hi32 != 0) {
|
||||||
|
needs_word_mode = false;
|
||||||
|
break;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
btf_dump_printf(d, "\n%s}", pfx(lvl));
|
if (needs_word_mode)
|
||||||
|
btf_dump_printf(d, " __attribute__((mode(word)))");
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static void btf_dump_emit_fwd_def(struct btf_dump *d, __u32 id,
|
static void btf_dump_emit_fwd_def(struct btf_dump *d, __u32 id,
|
||||||
@@ -1178,6 +1327,7 @@ skip_mod:
 		break;
 	case BTF_KIND_INT:
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 	case BTF_KIND_FWD:
 	case BTF_KIND_STRUCT:
 	case BTF_KIND_UNION:
@@ -1312,6 +1462,7 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
 		btf_dump_emit_struct_fwd(d, id, t);
 		break;
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 		btf_dump_emit_mods(d, decls);
 		/* inline anonymous enum */
 		if (t->name_off == 0 && !d->skip_anon_defs)
@@ -1481,11 +1632,22 @@ static void btf_dump_emit_type_cast(struct btf_dump *d, __u32 id,
 static size_t btf_dump_name_dups(struct btf_dump *d, struct hashmap *name_map,
 				 const char *orig_name)
 {
+	char *old_name, *new_name;
 	size_t dup_cnt = 0;
+	int err;
+
+	new_name = strdup(orig_name);
+	if (!new_name)
+		return 1;
 
-	hashmap__find(name_map, orig_name, (void **)&dup_cnt);
+	(void)hashmap__find(name_map, orig_name, &dup_cnt);
 	dup_cnt++;
-	hashmap__set(name_map, orig_name, (void *)dup_cnt, NULL, NULL);
+
+	err = hashmap__set(name_map, new_name, dup_cnt, &old_name, NULL);
+	if (err)
+		free(new_name);
+
+	free(old_name);
 
 	return dup_cnt;
 }
@@ -1505,6 +1667,11 @@ static const char *btf_dump_resolve_name(struct btf_dump *d, __u32 id,
 	if (s->name_resolved)
 		return *cached_name ? *cached_name : orig_name;
 
+	if (btf_is_fwd(t) || (btf_is_enum(t) && btf_vlen(t) == 0)) {
+		s->name_resolved = 1;
+		return orig_name;
+	}
+
 	dup_cnt = btf_dump_name_dups(d, name_map, orig_name);
 	if (dup_cnt > 1) {
 		const size_t max_len = 256;
@@ -1919,7 +2086,7 @@ static int btf_dump_struct_data(struct btf_dump *d,
 {
 	const struct btf_member *m = btf_members(t);
 	__u16 n = btf_vlen(t);
-	int i, err;
+	int i, err = 0;
 
 	/* note that we increment depth before calling btf_dump_print() below;
 	 * this is intentional. btf_dump_data_newline() will not print a
@@ -1983,7 +2150,8 @@ static int btf_dump_get_enum_value(struct btf_dump *d,
 				   __u32 id,
 				   __s64 *value)
 {
-	/* handle unaligned enum value */
+	bool is_signed = btf_kflag(t);
+
 	if (!ptr_is_aligned(d->btf, id, data)) {
 		__u64 val;
 		int err;
@@ -2000,13 +2168,13 @@ static int btf_dump_get_enum_value(struct btf_dump *d,
 		*value = *(__s64 *)data;
 		return 0;
 	case 4:
-		*value = *(__s32 *)data;
+		*value = is_signed ? (__s64)*(__s32 *)data : *(__u32 *)data;
 		return 0;
 	case 2:
-		*value = *(__s16 *)data;
+		*value = is_signed ? *(__s16 *)data : *(__u16 *)data;
 		return 0;
 	case 1:
-		*value = *(__s8 *)data;
+		*value = is_signed ? *(__s8 *)data : *(__u8 *)data;
 		return 0;
 	default:
 		pr_warn("unexpected size %d for enum, id:[%u]\n", t->size, id);
@@ -2019,7 +2187,7 @@ static int btf_dump_enum_data(struct btf_dump *d,
 			      __u32 id,
 			      const void *data)
 {
-	const struct btf_enum *e;
+	bool is_signed;
 	__s64 value;
 	int i, err;
 
@@ -2027,14 +2195,31 @@ static int btf_dump_enum_data(struct btf_dump *d,
 	if (err)
 		return err;
 
-	for (i = 0, e = btf_enum(t); i < btf_vlen(t); i++, e++) {
-		if (value != e->val)
-			continue;
-		btf_dump_type_values(d, "%s", btf_name_of(d, e->name_off));
-		return 0;
-	}
-
-	btf_dump_type_values(d, "%d", value);
+	is_signed = btf_kflag(t);
+	if (btf_is_enum(t)) {
+		const struct btf_enum *e;
+
+		for (i = 0, e = btf_enum(t); i < btf_vlen(t); i++, e++) {
+			if (value != e->val)
+				continue;
+			btf_dump_type_values(d, "%s", btf_name_of(d, e->name_off));
+			return 0;
+		}
+
+		btf_dump_type_values(d, is_signed ? "%d" : "%u", value);
+	} else {
+		const struct btf_enum64 *e;
+
+		for (i = 0, e = btf_enum64(t); i < btf_vlen(t); i++, e++) {
+			if (value != btf_enum64_value(e))
+				continue;
+			btf_dump_type_values(d, "%s", btf_name_of(d, e->name_off));
+			return 0;
+		}
+
+		btf_dump_type_values(d, is_signed ? "%lldLL" : "%lluULL",
+				     (unsigned long long)value);
+	}
 	return 0;
 }
@@ -2094,6 +2279,7 @@ static int btf_dump_type_data_check_overflow(struct btf_dump *d,
 	case BTF_KIND_FLOAT:
 	case BTF_KIND_PTR:
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 		if (data + bits_offset / 8 + size > d->typed_dump->data_end)
 			return -E2BIG;
 		break;
@@ -2198,6 +2384,7 @@ static int btf_dump_type_data_check_zero(struct btf_dump *d,
 		return -ENODATA;
 	}
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 		err = btf_dump_get_enum_value(d, t, data, id, &value);
 		if (err)
 			return err;
@@ -2270,6 +2457,7 @@ static int btf_dump_dump_type_data(struct btf_dump *d,
 		err = btf_dump_struct_data(d, t, id, data);
 		break;
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 		/* handle bitfield and int enum values */
 		if (bit_sz) {
 			__u64 print_num;
@@ -2320,7 +2508,7 @@ int btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
 	d->typed_dump->indent_lvl = OPTS_GET(opts, indent_level, 0);
 
 	/* default indent string is a tab */
-	if (!opts->indent_str)
+	if (!OPTS_GET(opts, indent_str, NULL))
 		d->typed_dump->indent_str[0] = '\t';
 	else
 		libbpf_strlcpy(d->typed_dump->indent_str, opts->indent_str,
@@ -533,7 +533,7 @@ void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
 	gen->attach_kind = kind;
 	ret = snprintf(gen->attach_target, sizeof(gen->attach_target), "%s%s",
 		       prefix, attach_name);
-	if (ret == sizeof(gen->attach_target))
+	if (ret >= sizeof(gen->attach_target))
 		gen->error = -ENOSPC;
 }
@@ -1043,18 +1043,27 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue,
 	value = add_data(gen, pvalue, value_size);
 	key = add_data(gen, &zero, sizeof(zero));
 
-	/* if (map_desc[map_idx].initial_value)
-	 *	copy_from_user(value, initial_value, value_size);
+	/* if (map_desc[map_idx].initial_value) {
+	 *	if (ctx->flags & BPF_SKEL_KERNEL)
+	 *		bpf_probe_read_kernel(value, value_size, initial_value);
+	 *	else
+	 *		bpf_copy_from_user(value, value_size, initial_value);
+	 * }
 	 */
 	emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_6,
 			      sizeof(struct bpf_loader_ctx) +
 			      sizeof(struct bpf_map_desc) * map_idx +
 			      offsetof(struct bpf_map_desc, initial_value)));
-	emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 4));
+	emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 8));
 	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
 					 0, 0, 0, value));
 	emit(gen, BPF_MOV64_IMM(BPF_REG_2, value_size));
+	emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6,
+			      offsetof(struct bpf_loader_ctx, flags)));
+	emit(gen, BPF_JMP_IMM(BPF_JSET, BPF_REG_0, BPF_SKEL_KERNEL, 2));
 	emit(gen, BPF_EMIT_CALL(BPF_FUNC_copy_from_user));
+	emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 1));
+	emit(gen, BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel));
 
 	map_update_attr = add_data(gen, &attr, attr_size);
 	move_blob2blob(gen, attr_field(map_update_attr, map_fd), 4,
@@ -128,7 +128,7 @@ static int hashmap_grow(struct hashmap *map)
 }
 
 static bool hashmap_find_entry(const struct hashmap *map,
-			       const void *key, size_t hash,
+			       const long key, size_t hash,
 			       struct hashmap_entry ***pprev,
 			       struct hashmap_entry **entry)
 {
@@ -151,18 +151,18 @@ static bool hashmap_find_entry(const struct hashmap *map,
 	return false;
 }
 
-int hashmap__insert(struct hashmap *map, const void *key, void *value,
-		    enum hashmap_insert_strategy strategy,
-		    const void **old_key, void **old_value)
+int hashmap_insert(struct hashmap *map, long key, long value,
+		   enum hashmap_insert_strategy strategy,
+		   long *old_key, long *old_value)
 {
 	struct hashmap_entry *entry;
 	size_t h;
 	int err;
 
 	if (old_key)
-		*old_key = NULL;
+		*old_key = 0;
 	if (old_value)
-		*old_value = NULL;
+		*old_value = 0;
 
 	h = hash_bits(map->hash_fn(key, map->ctx), map->cap_bits);
 	if (strategy != HASHMAP_APPEND &&
@@ -203,7 +203,7 @@ int hashmap_insert(struct hashmap *map, long key, long value,
 	return 0;
 }
 
-bool hashmap__find(const struct hashmap *map, const void *key, void **value)
+bool hashmap_find(const struct hashmap *map, long key, long *value)
 {
 	struct hashmap_entry *entry;
 	size_t h;
@@ -217,8 +217,8 @@ bool hashmap_find(const struct hashmap *map, long key, long *value)
 	return true;
 }
 
-bool hashmap__delete(struct hashmap *map, const void *key,
-		     const void **old_key, void **old_value)
+bool hashmap_delete(struct hashmap *map, long key,
+		    long *old_key, long *old_value)
 {
 	struct hashmap_entry **pprev, *entry;
 	size_t h;
@@ -40,12 +40,32 @@ static inline size_t str_hash(const char *s)
 	return h;
 }
 
-typedef size_t (*hashmap_hash_fn)(const void *key, void *ctx);
-typedef bool (*hashmap_equal_fn)(const void *key1, const void *key2, void *ctx);
+typedef size_t (*hashmap_hash_fn)(long key, void *ctx);
+typedef bool (*hashmap_equal_fn)(long key1, long key2, void *ctx);
+
+/*
+ * Hashmap interface is polymorphic, keys and values could be either
+ * long-sized integers or pointers, this is achieved as follows:
+ * - interface functions that operate on keys and values are hidden
+ *   behind auxiliary macros, e.g. hashmap_insert <-> hashmap__insert;
+ * - these auxiliary macros cast the key and value parameters as
+ *   long or long *, so the user does not have to specify the casts explicitly;
+ * - for pointer parameters (e.g. old_key) the size of the pointed
+ *   type is verified by hashmap_cast_ptr using _Static_assert;
+ * - when iterating using hashmap__for_each_* forms
+ *   hashmap_entry->key should be used for integer keys and
+ *   hashmap_entry->pkey should be used for pointer keys,
+ *   same goes for values.
+ */
 
 struct hashmap_entry {
-	const void *key;
-	void *value;
+	union {
+		long key;
+		const void *pkey;
+	};
+	union {
+		long value;
+		void *pvalue;
+	};
 	struct hashmap_entry *next;
 };
 
@@ -102,6 +122,13 @@ enum hashmap_insert_strategy {
 	HASHMAP_APPEND,
 };
 
+#define hashmap_cast_ptr(p) ({								\
+	_Static_assert((__builtin_constant_p((p)) ? (p) == NULL : 0) ||			\
+		       sizeof(*(p)) == sizeof(long),					\
+		       #p " pointee should be a long-sized integer or a pointer");	\
+	(long *)(p);									\
+})
+
 /*
  * hashmap__insert() adds key/value entry w/ various semantics, depending on
  * provided strategy value. If a given key/value pair replaced already
@@ -109,42 +136,38 @@ enum hashmap_insert_strategy {
  * through old_key and old_value to allow calling code do proper memory
  * management.
  */
-int hashmap__insert(struct hashmap *map, const void *key, void *value,
-		    enum hashmap_insert_strategy strategy,
-		    const void **old_key, void **old_value);
-
-static inline int hashmap__add(struct hashmap *map,
-			       const void *key, void *value)
-{
-	return hashmap__insert(map, key, value, HASHMAP_ADD, NULL, NULL);
-}
-
-static inline int hashmap__set(struct hashmap *map,
-			       const void *key, void *value,
-			       const void **old_key, void **old_value)
-{
-	return hashmap__insert(map, key, value, HASHMAP_SET,
-			       old_key, old_value);
-}
-
-static inline int hashmap__update(struct hashmap *map,
-				  const void *key, void *value,
-				  const void **old_key, void **old_value)
-{
-	return hashmap__insert(map, key, value, HASHMAP_UPDATE,
-			       old_key, old_value);
-}
-
-static inline int hashmap__append(struct hashmap *map,
-				  const void *key, void *value)
-{
-	return hashmap__insert(map, key, value, HASHMAP_APPEND, NULL, NULL);
-}
-
-bool hashmap__delete(struct hashmap *map, const void *key,
-		     const void **old_key, void **old_value);
-
-bool hashmap__find(const struct hashmap *map, const void *key, void **value);
+int hashmap_insert(struct hashmap *map, long key, long value,
+		   enum hashmap_insert_strategy strategy,
+		   long *old_key, long *old_value);
+
+#define hashmap__insert(map, key, value, strategy, old_key, old_value)	\
+	hashmap_insert((map), (long)(key), (long)(value), (strategy),	\
+		       hashmap_cast_ptr(old_key),			\
+		       hashmap_cast_ptr(old_value))
+
+#define hashmap__add(map, key, value) \
+	hashmap__insert((map), (key), (value), HASHMAP_ADD, NULL, NULL)
+
+#define hashmap__set(map, key, value, old_key, old_value) \
+	hashmap__insert((map), (key), (value), HASHMAP_SET, (old_key), (old_value))
+
+#define hashmap__update(map, key, value, old_key, old_value) \
+	hashmap__insert((map), (key), (value), HASHMAP_UPDATE, (old_key), (old_value))
+
+#define hashmap__append(map, key, value) \
+	hashmap__insert((map), (key), (value), HASHMAP_APPEND, NULL, NULL)
+
+bool hashmap_delete(struct hashmap *map, long key, long *old_key, long *old_value);
+
+#define hashmap__delete(map, key, old_key, old_value)		\
+	hashmap_delete((map), (long)(key),			\
+		       hashmap_cast_ptr(old_key),		\
+		       hashmap_cast_ptr(old_value))
+
+bool hashmap_find(const struct hashmap *map, long key, long *value);
+
+#define hashmap__find(map, key, value) \
+	hashmap_find((map), (long)(key), hashmap_cast_ptr(value))
 
 /*
  * hashmap__for_each_entry - iterate over all entries in hashmap
4720	src/libbpf.c
1104	src/libbpf.h
161	src/libbpf.map
@@ -1,29 +1,14 @@
 LIBBPF_0.0.1 {
 	global:
 		bpf_btf_get_fd_by_id;
-		bpf_create_map;
-		bpf_create_map_in_map;
-		bpf_create_map_in_map_node;
-		bpf_create_map_name;
-		bpf_create_map_node;
-		bpf_create_map_xattr;
-		bpf_load_btf;
-		bpf_load_program;
-		bpf_load_program_xattr;
 		bpf_map__btf_key_type_id;
 		bpf_map__btf_value_type_id;
-		bpf_map__def;
 		bpf_map__fd;
-		bpf_map__is_offload_neutral;
 		bpf_map__name;
-		bpf_map__next;
 		bpf_map__pin;
-		bpf_map__prev;
-		bpf_map__priv;
 		bpf_map__reuse_fd;
 		bpf_map__set_ifindex;
 		bpf_map__set_inner_map_fd;
-		bpf_map__set_priv;
 		bpf_map__unpin;
 		bpf_map_delete_elem;
 		bpf_map_get_fd_by_id;
@@ -38,79 +23,37 @@ LIBBPF_0.0.1 {
 		bpf_object__btf_fd;
 		bpf_object__close;
 		bpf_object__find_map_by_name;
-		bpf_object__find_map_by_offset;
-		bpf_object__find_program_by_title;
 		bpf_object__kversion;
 		bpf_object__load;
 		bpf_object__name;
-		bpf_object__next;
 		bpf_object__open;
-		bpf_object__open_buffer;
-		bpf_object__open_xattr;
 		bpf_object__pin;
 		bpf_object__pin_maps;
 		bpf_object__pin_programs;
-		bpf_object__priv;
-		bpf_object__set_priv;
-		bpf_object__unload;
 		bpf_object__unpin_maps;
 		bpf_object__unpin_programs;
-		bpf_perf_event_read_simple;
 		bpf_prog_attach;
 		bpf_prog_detach;
 		bpf_prog_detach2;
 		bpf_prog_get_fd_by_id;
 		bpf_prog_get_next_id;
-		bpf_prog_load;
-		bpf_prog_load_xattr;
 		bpf_prog_query;
-		bpf_prog_test_run;
-		bpf_prog_test_run_xattr;
 		bpf_program__fd;
-		bpf_program__is_kprobe;
-		bpf_program__is_perf_event;
-		bpf_program__is_raw_tracepoint;
-		bpf_program__is_sched_act;
-		bpf_program__is_sched_cls;
-		bpf_program__is_socket_filter;
-		bpf_program__is_tracepoint;
-		bpf_program__is_xdp;
-		bpf_program__load;
-		bpf_program__next;
-		bpf_program__nth_fd;
 		bpf_program__pin;
-		bpf_program__pin_instance;
-		bpf_program__prev;
-		bpf_program__priv;
 		bpf_program__set_expected_attach_type;
 		bpf_program__set_ifindex;
-		bpf_program__set_kprobe;
-		bpf_program__set_perf_event;
-		bpf_program__set_prep;
-		bpf_program__set_priv;
-		bpf_program__set_raw_tracepoint;
-		bpf_program__set_sched_act;
-		bpf_program__set_sched_cls;
-		bpf_program__set_socket_filter;
-		bpf_program__set_tracepoint;
 		bpf_program__set_type;
-		bpf_program__set_xdp;
-		bpf_program__title;
 		bpf_program__unload;
 		bpf_program__unpin;
-		bpf_program__unpin_instance;
 		bpf_prog_linfo__free;
 		bpf_prog_linfo__new;
 		bpf_prog_linfo__lfind_addr_func;
 		bpf_prog_linfo__lfind;
 		bpf_raw_tracepoint_open;
-		bpf_set_link_xdp_fd;
 		bpf_task_fd_query;
-		bpf_verify_program;
 		btf__fd;
 		btf__find_by_name;
 		btf__free;
-		btf__get_from_id;
 		btf__name_by_offset;
 		btf__new;
 		btf__resolve_size;
@@ -127,48 +70,24 @@ LIBBPF_0.0.1 {
 
 LIBBPF_0.0.2 {
 	global:
-		bpf_probe_helper;
-		bpf_probe_map_type;
-		bpf_probe_prog_type;
-		bpf_map__resize;
 		bpf_map_lookup_elem_flags;
 		bpf_object__btf;
 		bpf_object__find_map_fd_by_name;
-		bpf_get_link_xdp_id;
-		btf__dedup;
-		btf__get_map_kv_tids;
-		btf__get_nr_types;
 		btf__get_raw_data;
-		btf__load;
 		btf_ext__free;
-		btf_ext__func_info_rec_size;
 		btf_ext__get_raw_data;
-		btf_ext__line_info_rec_size;
 		btf_ext__new;
-		btf_ext__reloc_func_info;
-		btf_ext__reloc_line_info;
-		xsk_umem__create;
-		xsk_socket__create;
-		xsk_umem__delete;
-		xsk_socket__delete;
-		xsk_umem__fd;
-		xsk_socket__fd;
-		bpf_program__get_prog_info_linear;
-		bpf_program__bpil_addr_to_offs;
-		bpf_program__bpil_offs_to_addr;
 } LIBBPF_0.0.1;
 
 LIBBPF_0.0.3 {
 	global:
 		bpf_map__is_internal;
 		bpf_map_freeze;
-		btf__finalize_data;
 } LIBBPF_0.0.2;
 
 LIBBPF_0.0.4 {
 	global:
 		bpf_link__destroy;
-		bpf_object__load_xattr;
 		bpf_program__attach_kprobe;
 		bpf_program__attach_perf_event;
 		bpf_program__attach_raw_tracepoint;
@@ -176,14 +95,10 @@ LIBBPF_0.0.4 {
 		bpf_program__attach_uprobe;
 		btf_dump__dump_type;
 		btf_dump__free;
-		btf_dump__new;
 		btf__parse_elf;
 		libbpf_num_possible_cpus;
 		perf_buffer__free;
-		perf_buffer__new;
-		perf_buffer__new_raw;
 		perf_buffer__poll;
-		xsk_umem__create;
 } LIBBPF_0.0.3;
 
 LIBBPF_0.0.5 {
@@ -193,7 +108,6 @@ LIBBPF_0.0.5 {
 
 LIBBPF_0.0.6 {
 	global:
-		bpf_get_link_xdp_info;
 		bpf_map__get_pin_path;
 		bpf_map__is_pinned;
 		bpf_map__set_pin_path;
@@ -202,9 +116,6 @@ LIBBPF_0.0.6 {
 		bpf_program__attach_trace;
 		bpf_program__get_expected_attach_type;
 		bpf_program__get_type;
-		bpf_program__is_tracing;
-		bpf_program__set_tracing;
-		bpf_program__size;
 		btf__find_by_name_kind;
 		libbpf_find_vmlinux_btf_id;
 } LIBBPF_0.0.5;
@@ -224,14 +135,8 @@ LIBBPF_0.0.7 {
 		bpf_object__detach_skeleton;
 		bpf_object__load_skeleton;
 		bpf_object__open_skeleton;
-		bpf_probe_large_insn_limit;
-		bpf_prog_attach_xattr;
 		bpf_program__attach;
 		bpf_program__name;
-		bpf_program__is_extension;
-		bpf_program__is_struct_ops;
-		bpf_program__set_extension;
-		bpf_program__set_struct_ops;
 		btf__align_of;
 		libbpf_find_kernel_btf;
 } LIBBPF_0.0.6;
@@ -250,10 +155,7 @@ LIBBPF_0.0.8 {
 		bpf_prog_attach_opts;
 		bpf_program__attach_cgroup;
 		bpf_program__attach_lsm;
-		bpf_program__is_lsm;
 		bpf_program__set_attach_target;
-		bpf_program__set_lsm;
-		bpf_set_link_xdp_fd_opts;
 } LIBBPF_0.0.7;
 
 LIBBPF_0.0.9 {
@@ -291,9 +193,7 @@ LIBBPF_0.1.0 {
 		bpf_map__value_size;
 		bpf_program__attach_xdp;
 		bpf_program__autoload;
-		bpf_program__is_sk_lookup;
 		bpf_program__set_autoload;
-		bpf_program__set_sk_lookup;
 		btf__parse;
 		btf__parse_raw;
 		btf__pointer_size;
@@ -336,7 +236,6 @@ LIBBPF_0.2.0 {
 		perf_buffer__buffer_fd;
 		perf_buffer__epoll_fd;
 		perf_buffer__consume_buffer;
-		xsk_socket__create_shared;
 } LIBBPF_0.1.0;
|
} LIBBPF_0.1.0;
|
||||||
|
|
||||||
LIBBPF_0.3.0 {
|
LIBBPF_0.3.0 {
|
||||||
@ -348,8 +247,6 @@ LIBBPF_0.3.0 {
|
|||||||
btf__new_empty_split;
|
btf__new_empty_split;
|
||||||
btf__new_split;
|
btf__new_split;
|
||||||
ring_buffer__epoll_fd;
|
ring_buffer__epoll_fd;
|
||||||
xsk_setup_xdp_prog;
|
|
||||||
xsk_socket__update_xskmap;
|
|
||||||
} LIBBPF_0.2.0;
|
} LIBBPF_0.2.0;
|
||||||
|
|
||||||
LIBBPF_0.4.0 {
|
LIBBPF_0.4.0 {
|
||||||
@ -397,7 +294,6 @@ LIBBPF_0.6.0 {
|
|||||||
bpf_object__next_program;
|
bpf_object__next_program;
|
||||||
bpf_object__prev_map;
|
bpf_object__prev_map;
|
||||||
bpf_object__prev_program;
|
bpf_object__prev_program;
|
||||||
bpf_prog_load_deprecated;
|
|
||||||
bpf_prog_load;
|
bpf_prog_load;
|
||||||
bpf_program__flags;
|
bpf_program__flags;
|
||||||
bpf_program__insn_cnt;
|
bpf_program__insn_cnt;
|
||||||
@ -407,18 +303,14 @@ LIBBPF_0.6.0 {
|
|||||||
btf__add_decl_tag;
|
btf__add_decl_tag;
|
||||||
btf__add_type_tag;
|
btf__add_type_tag;
|
||||||
btf__dedup;
|
btf__dedup;
|
||||||
btf__dedup_deprecated;
|
|
||||||
btf__raw_data;
|
btf__raw_data;
|
||||||
btf__type_cnt;
|
btf__type_cnt;
|
||||||
btf_dump__new;
|
btf_dump__new;
|
||||||
btf_dump__new_deprecated;
|
|
||||||
libbpf_major_version;
|
libbpf_major_version;
|
||||||
libbpf_minor_version;
|
libbpf_minor_version;
|
||||||
libbpf_version_string;
|
libbpf_version_string;
|
||||||
perf_buffer__new;
|
perf_buffer__new;
|
||||||
perf_buffer__new_deprecated;
|
|
||||||
perf_buffer__new_raw;
|
perf_buffer__new_raw;
|
||||||
perf_buffer__new_raw_deprecated;
|
|
||||||
} LIBBPF_0.5.0;
|
} LIBBPF_0.5.0;
|
||||||
|
|
||||||
LIBBPF_0.7.0 {
|
LIBBPF_0.7.0 {
|
||||||
@ -434,8 +326,59 @@ LIBBPF_0.7.0 {
|
|||||||
bpf_xdp_detach;
|
bpf_xdp_detach;
|
||||||
bpf_xdp_query;
|
bpf_xdp_query;
|
||||||
bpf_xdp_query_id;
|
bpf_xdp_query_id;
|
||||||
|
btf_ext__raw_data;
|
||||||
libbpf_probe_bpf_helper;
|
libbpf_probe_bpf_helper;
|
||||||
libbpf_probe_bpf_map_type;
|
libbpf_probe_bpf_map_type;
|
||||||
libbpf_probe_bpf_prog_type;
|
libbpf_probe_bpf_prog_type;
|
||||||
libbpf_set_memlock_rlim_max;
|
libbpf_set_memlock_rlim;
|
||||||
} LIBBPF_0.6.0;
|
} LIBBPF_0.6.0;
|
||||||
|
|
||||||
|
LIBBPF_0.8.0 {
|
||||||
|
global:
|
||||||
|
bpf_map__autocreate;
|
||||||
|
bpf_map__get_next_key;
|
||||||
|
bpf_map__delete_elem;
|
||||||
|
bpf_map__lookup_and_delete_elem;
|
||||||
|
bpf_map__lookup_elem;
|
||||||
|
bpf_map__set_autocreate;
|
||||||
|
bpf_map__update_elem;
|
||||||
|
bpf_map_delete_elem_flags;
|
||||||
|
bpf_object__destroy_subskeleton;
|
||||||
|
bpf_object__open_subskeleton;
|
||||||
|
bpf_program__attach_kprobe_multi_opts;
|
||||||
|
bpf_program__attach_trace_opts;
|
||||||
|
bpf_program__attach_usdt;
|
||||||
|
bpf_program__set_insns;
|
||||||
|
libbpf_register_prog_handler;
|
||||||
|
libbpf_unregister_prog_handler;
|
||||||
|
} LIBBPF_0.7.0;
|
||||||
|
|
||||||
|
LIBBPF_1.0.0 {
|
||||||
|
global:
|
||||||
|
bpf_obj_get_opts;
|
||||||
|
bpf_prog_query_opts;
|
||||||
|
bpf_program__attach_ksyscall;
|
||||||
|
bpf_program__autoattach;
|
||||||
|
bpf_program__set_autoattach;
|
||||||
|
btf__add_enum64;
|
||||||
|
btf__add_enum64_value;
|
||||||
|
libbpf_bpf_attach_type_str;
|
||||||
|
libbpf_bpf_link_type_str;
|
||||||
|
libbpf_bpf_map_type_str;
|
||||||
|
libbpf_bpf_prog_type_str;
|
||||||
|
perf_buffer__buffer;
|
||||||
|
} LIBBPF_0.8.0;
|
||||||
|
|
||||||
|
LIBBPF_1.1.0 {
|
||||||
|
global:
|
||||||
|
bpf_btf_get_fd_by_id_opts;
|
||||||
|
bpf_link_get_fd_by_id_opts;
|
||||||
|
bpf_map_get_fd_by_id_opts;
|
||||||
|
bpf_prog_get_fd_by_id_opts;
|
||||||
|
user_ring_buffer__discard;
|
||||||
|
user_ring_buffer__free;
|
||||||
|
user_ring_buffer__new;
|
||||||
|
user_ring_buffer__reserve;
|
||||||
|
user_ring_buffer__reserve_blocking;
|
||||||
|
user_ring_buffer__submit;
|
||||||
|
} LIBBPF_1.0.0;
|
||||||
|
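The file patched above is a GNU ld symbol version script: each `NAME { global: ... }` block introduces a version node, and the trailing `} OLDER;` makes the new node inherit from the previous one. A minimal standalone sketch of the pattern, with hypothetical library and symbol names (this is a config fragment for the linker's `--version-script` option, not libbpf's actual map):

```
MYLIB_1.0 {
	global:
		mylib_init;
		mylib_query;
	local:
		*;
};

MYLIB_1.1 {
	global:
		mylib_query_opts;
} MYLIB_1.0;
```

Removing a symbol from such a script, as this commit does for the `xsk_*` and `*_deprecated` entries, drops it from the shared library's exported ABI.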
@@ -30,20 +30,10 @@
 /* Add checks for other versions below when planning deprecation of API symbols
  * with the LIBBPF_DEPRECATED_SINCE macro.
  */
-#if __LIBBPF_CURRENT_VERSION_GEQ(0, 6)
-#define __LIBBPF_MARK_DEPRECATED_0_6(X) X
+#if __LIBBPF_CURRENT_VERSION_GEQ(1, 0)
+#define __LIBBPF_MARK_DEPRECATED_1_0(X) X
 #else
-#define __LIBBPF_MARK_DEPRECATED_0_6(X)
-#endif
-#if __LIBBPF_CURRENT_VERSION_GEQ(0, 7)
-#define __LIBBPF_MARK_DEPRECATED_0_7(X) X
-#else
-#define __LIBBPF_MARK_DEPRECATED_0_7(X)
-#endif
-#if __LIBBPF_CURRENT_VERSION_GEQ(0, 8)
-#define __LIBBPF_MARK_DEPRECATED_0_8(X) X
-#else
-#define __LIBBPF_MARK_DEPRECATED_0_8(X)
+#define __LIBBPF_MARK_DEPRECATED_1_0(X)
 #endif
 
 /* This set of internal macros allows to do "function overloading" based on
@@ -39,14 +39,14 @@ static const char *libbpf_strerror_table[NR_ERRNO] = {
 
 int libbpf_strerror(int err, char *buf, size_t size)
 {
+	int ret;
+
 	if (!buf || !size)
 		return libbpf_err(-EINVAL);
 
 	err = err > 0 ? err : -err;
 
 	if (err < __LIBBPF_ERRNO__START) {
-		int ret;
-
 		ret = strerror_r(err, buf, size);
 		buf[size - 1] = '\0';
 		return libbpf_err_errno(ret);
@@ -56,12 +56,20 @@ int libbpf_strerror(int err, char *buf, size_t size)
 		const char *msg;
 
 		msg = libbpf_strerror_table[ERRNO_OFFSET(err)];
-		snprintf(buf, size, "%s", msg);
+		ret = snprintf(buf, size, "%s", msg);
 		buf[size - 1] = '\0';
+		/* The length of the buf and msg is positive.
+		 * A negative number may be returned only when the
+		 * size exceeds INT_MAX. Not likely to appear.
+		 */
+		if (ret >= size)
+			return libbpf_err(-ERANGE);
 		return 0;
 	}
 
-	snprintf(buf, size, "Unknown libbpf error %d", err);
+	ret = snprintf(buf, size, "Unknown libbpf error %d", err);
 	buf[size - 1] = '\0';
+	if (ret >= size)
+		return libbpf_err(-ERANGE);
 	return libbpf_err(-ENOENT);
 }
@@ -15,7 +15,6 @@
 #include <linux/err.h>
 #include <fcntl.h>
 #include <unistd.h>
-#include "libbpf_legacy.h"
 #include "relo_core.h"
 
 /* make sure libbpf doesn't use kernel-only integer typedefs */
@@ -103,6 +102,17 @@
 #define str_has_pfx(str, pfx) \
 	(strncmp(str, pfx, __builtin_constant_p(pfx) ? sizeof(pfx) - 1 : strlen(pfx)) == 0)
 
+/* suffix check */
+static inline bool str_has_sfx(const char *str, const char *sfx)
+{
+	size_t str_len = strlen(str);
+	size_t sfx_len = strlen(sfx);
+
+	if (sfx_len > str_len)
+		return false;
+	return strcmp(str + str_len - sfx_len, sfx) == 0;
+}
+
 /* Symbol versioning is different between static and shared library.
  * Properly versioned symbols are needed for shared library, but
  * only the symbol of the new version is needed for static library.
|
|||||||
#ifndef __has_builtin
|
#ifndef __has_builtin
|
||||||
#define __has_builtin(x) 0
|
#define __has_builtin(x) 0
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
|
struct bpf_link {
|
||||||
|
int (*detach)(struct bpf_link *link);
|
||||||
|
void (*dealloc)(struct bpf_link *link);
|
||||||
|
char *pin_path; /* NULL, if not pinned */
|
||||||
|
int fd; /* hook FD, -1 if not applicable */
|
||||||
|
bool disconnected;
|
||||||
|
};
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Re-implement glibc's reallocarray() for libbpf internal-only use.
|
* Re-implement glibc's reallocarray() for libbpf internal-only use.
|
||||||
* reallocarray(), unfortunately, is not available in all versions of glibc,
|
* reallocarray(), unfortunately, is not available in all versions of glibc,
|
||||||
@ -329,6 +348,12 @@ enum kern_feature_id {
|
|||||||
FEAT_BTF_TYPE_TAG,
|
FEAT_BTF_TYPE_TAG,
|
||||||
/* memcg-based accounting for BPF maps and progs */
|
/* memcg-based accounting for BPF maps and progs */
|
||||||
FEAT_MEMCG_ACCOUNT,
|
FEAT_MEMCG_ACCOUNT,
|
||||||
|
/* BPF cookie (bpf_get_attach_cookie() BPF helper) support */
|
||||||
|
FEAT_BPF_COOKIE,
|
||||||
|
/* BTF_KIND_ENUM64 support and BTF_KIND_ENUM kflag support */
|
||||||
|
FEAT_BTF_ENUM64,
|
||||||
|
/* Kernel uses syscall wrapper (CONFIG_ARCH_HAS_SYSCALL_WRAPPER) */
|
||||||
|
FEAT_SYSCALL_WRAPPER,
|
||||||
__FEAT_CNT,
|
__FEAT_CNT,
|
||||||
};
|
};
|
||||||
|
|
||||||
@ -354,6 +379,13 @@ struct btf_ext_info {
|
|||||||
void *info;
|
void *info;
|
||||||
__u32 rec_size;
|
__u32 rec_size;
|
||||||
__u32 len;
|
__u32 len;
|
||||||
|
/* optional (maintained internally by libbpf) mapping between .BTF.ext
|
||||||
|
* section and corresponding ELF section. This is used to join
|
||||||
|
* information like CO-RE relocation records with corresponding BPF
|
||||||
|
* programs defined in ELF sections
|
||||||
|
*/
|
||||||
|
__u32 *sec_idxs;
|
||||||
|
int sec_cnt;
|
||||||
};
|
};
|
||||||
|
|
||||||
#define for_each_btf_ext_sec(seg, sec) \
|
#define for_each_btf_ext_sec(seg, sec) \
|
||||||
@ -447,7 +479,10 @@ int btf_ext_visit_str_offs(struct btf_ext *btf_ext, str_off_visit_fn visit, void
|
|||||||
__s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
|
__s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
|
||||||
__u32 kind);
|
__u32 kind);
|
||||||
|
|
||||||
extern enum libbpf_strict_mode libbpf_mode;
|
typedef int (*kallsyms_cb_t)(unsigned long long sym_addr, char sym_type,
|
||||||
|
const char *sym_name, void *ctx);
|
||||||
|
|
||||||
|
int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *arg);
|
||||||
|
|
||||||
/* handle direct returned errors */
|
/* handle direct returned errors */
|
||||||
static inline int libbpf_err(int ret)
|
static inline int libbpf_err(int ret)
|
||||||
@ -462,12 +497,8 @@ static inline int libbpf_err(int ret)
|
|||||||
*/
|
*/
|
||||||
static inline int libbpf_err_errno(int ret)
|
static inline int libbpf_err_errno(int ret)
|
||||||
{
|
{
|
||||||
if (libbpf_mode & LIBBPF_STRICT_DIRECT_ERRS)
|
/* errno is already assumed to be set on error */
|
||||||
/* errno is already assumed to be set on error */
|
return ret < 0 ? -errno : ret;
|
||||||
return ret < 0 ? -errno : ret;
|
|
||||||
|
|
||||||
/* legacy: on error return -1 directly and don't touch errno */
|
|
||||||
return ret;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/* handle error for pointer-returning APIs, err is assumed to be < 0 always */
|
/* handle error for pointer-returning APIs, err is assumed to be < 0 always */
|
||||||
@ -475,12 +506,7 @@ static inline void *libbpf_err_ptr(int err)
|
|||||||
{
|
{
|
||||||
/* set errno on error, this doesn't break anything */
|
/* set errno on error, this doesn't break anything */
|
||||||
errno = -err;
|
errno = -err;
|
||||||
|
return NULL;
|
||||||
if (libbpf_mode & LIBBPF_STRICT_CLEAN_PTRS)
|
|
||||||
return NULL;
|
|
||||||
|
|
||||||
/* legacy: encode err as ptr */
|
|
||||||
return ERR_PTR(err);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/* handle pointer-returning APIs' error handling */
|
/* handle pointer-returning APIs' error handling */
|
||||||
@ -490,11 +516,7 @@ static inline void *libbpf_ptr(void *ret)
|
|||||||
if (IS_ERR(ret))
|
if (IS_ERR(ret))
|
||||||
errno = -PTR_ERR(ret);
|
errno = -PTR_ERR(ret);
|
||||||
|
|
||||||
if (libbpf_mode & LIBBPF_STRICT_CLEAN_PTRS)
|
return IS_ERR(ret) ? NULL : ret;
|
||||||
return IS_ERR(ret) ? NULL : ret;
|
|
||||||
|
|
||||||
/* legacy: pass-through original pointer */
|
|
||||||
return ret;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline bool str_is_empty(const char *s)
|
static inline bool str_is_empty(const char *s)
|
||||||
@ -529,4 +551,29 @@ static inline int ensure_good_fd(int fd)
|
|||||||
return fd;
|
return fd;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* The following two functions are exposed to bpftool */
|
||||||
|
int bpf_core_add_cands(struct bpf_core_cand *local_cand,
|
||||||
|
size_t local_essent_len,
|
||||||
|
const struct btf *targ_btf,
|
||||||
|
const char *targ_btf_name,
|
||||||
|
int targ_start_id,
|
||||||
|
struct bpf_core_cand_list *cands);
|
||||||
|
void bpf_core_free_cands(struct bpf_core_cand_list *cands);
|
||||||
|
|
||||||
|
struct usdt_manager *usdt_manager_new(struct bpf_object *obj);
|
||||||
|
void usdt_manager_free(struct usdt_manager *man);
|
||||||
|
struct bpf_link * usdt_manager_attach_usdt(struct usdt_manager *man,
|
||||||
|
const struct bpf_program *prog,
|
||||||
|
pid_t pid, const char *path,
|
||||||
|
const char *usdt_provider, const char *usdt_name,
|
||||||
|
__u64 usdt_cookie);
|
||||||
|
|
||||||
|
static inline bool is_pow_of_2(size_t x)
|
||||||
|
{
|
||||||
|
return x && (x & (x - 1)) == 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
#define PROG_LOAD_ATTEMPTS 5
|
||||||
|
int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts);
|
||||||
|
|
||||||
#endif /* __LIBBPF_LIBBPF_INTERNAL_H */
|
#endif /* __LIBBPF_LIBBPF_INTERNAL_H */
|
||||||
|
@@ -20,6 +20,11 @@
 extern "C" {
 #endif
 
+/* As of libbpf 1.0 libbpf_set_strict_mode() and enum libbpf_struct_mode have
+ * no effect. But they are left in libbpf_legacy.h so that applications that
+ * prepared for libbpf 1.0 before final release by using
+ * libbpf_set_strict_mode() still work with libbpf 1.0+ without any changes.
+ */
 enum libbpf_strict_mode {
 	/* Turn on all supported strict features of libbpf to simulate libbpf
 	 * v1.0 behavior.
@@ -54,6 +59,10 @@ enum libbpf_strict_mode {
 	 *
 	 * Note, in this mode the program pin path will be based on the
 	 * function name instead of section name.
+	 *
+	 * Additionally, routines in the .text section are always considered
+	 * sub-programs. Legacy behavior allows for a single routine in .text
+	 * to be a program.
 	 */
 	LIBBPF_STRICT_SEC_NAME = 0x04,
 	/*
@@ -67,8 +76,8 @@ enum libbpf_strict_mode {
 	 * first BPF program or map creation operation. This is done only if
 	 * kernel is too old to support memcg-based memory accounting for BPF
 	 * subsystem. By default, RLIMIT_MEMLOCK limit is set to RLIM_INFINITY,
-	 * but it can be overriden with libbpf_set_memlock_rlim_max() API.
-	 * Note that libbpf_set_memlock_rlim_max() needs to be called before
+	 * but it can be overriden with libbpf_set_memlock_rlim() API.
+	 * Note that libbpf_set_memlock_rlim() needs to be called before
 	 * the very first bpf_prog_load(), bpf_map_create() or bpf_object__load()
 	 * operation.
 	 */
@@ -84,6 +93,25 @@ enum libbpf_strict_mode {
 
 LIBBPF_API int libbpf_set_strict_mode(enum libbpf_strict_mode mode);
 
+/**
+ * @brief **libbpf_get_error()** extracts the error code from the passed
+ * pointer
+ * @param ptr pointer returned from libbpf API function
+ * @return error code; or 0 if no error occured
+ *
+ * Note, as of libbpf 1.0 this function is not necessary and not recommended
+ * to be used. Libbpf doesn't return error code embedded into the pointer
+ * itself. Instead, NULL is returned on error and error code is passed through
+ * thread-local errno variable. **libbpf_get_error()** is just returning -errno
+ * value if it receives NULL, which is correct only if errno hasn't been
+ * modified between libbpf API call and corresponding **libbpf_get_error()**
+ * call. Prefer to check return for NULL and use errno directly.
+ *
+ * This API is left in libbpf 1.0 to allow applications that were 1.0-ready
+ * before final libbpf 1.0 without needing to change them.
+ */
+LIBBPF_API long libbpf_get_error(const void *ptr);
+
 #define DECLARE_LIBBPF_OPTS LIBBPF_OPTS
 
 /* "Discouraged" APIs which don't follow consistent libbpf naming patterns.
@@ -97,6 +125,8 @@ struct bpf_map;
 struct btf;
 struct btf_ext;
 
+LIBBPF_API struct btf *libbpf_find_kernel_btf(void);
+
 LIBBPF_API enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog);
 LIBBPF_API enum bpf_attach_type bpf_program__get_expected_attach_type(const struct bpf_program *prog);
 LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map);
@@ -17,47 +17,14 @@
 #include "libbpf.h"
 #include "libbpf_internal.h"
 
-static bool grep(const char *buffer, const char *pattern)
-{
-	return !!strstr(buffer, pattern);
-}
-
-static int get_vendor_id(int ifindex)
-{
-	char ifname[IF_NAMESIZE], path[64], buf[8];
-	ssize_t len;
-	int fd;
-
-	if (!if_indextoname(ifindex, ifname))
-		return -1;
-
-	snprintf(path, sizeof(path), "/sys/class/net/%s/device/vendor", ifname);
-
-	fd = open(path, O_RDONLY | O_CLOEXEC);
-	if (fd < 0)
-		return -1;
-
-	len = read(fd, buf, sizeof(buf));
-	close(fd);
-	if (len < 0)
-		return -1;
-	if (len >= (ssize_t)sizeof(buf))
-		return -1;
-	buf[len] = '\0';
-
-	return strtol(buf, NULL, 0);
-}
-
 static int probe_prog_load(enum bpf_prog_type prog_type,
 			   const struct bpf_insn *insns, size_t insns_cnt,
-			   char *log_buf, size_t log_buf_sz,
-			   __u32 ifindex)
+			   char *log_buf, size_t log_buf_sz)
 {
 	LIBBPF_OPTS(bpf_prog_load_opts, opts,
 		.log_buf = log_buf,
 		.log_size = log_buf_sz,
 		.log_level = log_buf ? 1 : 0,
-		.prog_ifindex = ifindex,
 	);
 	int fd, err, exp_err = 0;
 	const char *exp_msg = NULL;
@@ -161,31 +128,10 @@ int libbpf_probe_bpf_prog_type(enum bpf_prog_type prog_type, const void *opts)
 	if (opts)
 		return libbpf_err(-EINVAL);
 
-	ret = probe_prog_load(prog_type, insns, insn_cnt, NULL, 0, 0);
+	ret = probe_prog_load(prog_type, insns, insn_cnt, NULL, 0);
 	return libbpf_err(ret);
 }
 
-bool bpf_probe_prog_type(enum bpf_prog_type prog_type, __u32 ifindex)
-{
-	struct bpf_insn insns[2] = {
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN()
-	};
-
-	/* prefer libbpf_probe_bpf_prog_type() unless offload is requested */
-	if (ifindex == 0)
-		return libbpf_probe_bpf_prog_type(prog_type, NULL) == 1;
-
-	if (ifindex && prog_type == BPF_PROG_TYPE_SCHED_CLS)
-		/* nfp returns -EINVAL on exit(0) with TC offload */
-		insns[0].imm = 2;
-
-	errno = 0;
-	probe_prog_load(prog_type, insns, ARRAY_SIZE(insns), NULL, 0, ifindex);
-
-	return errno != EINVAL && errno != EOPNOTSUPP;
-}
-
 int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
 			 const char *str_sec, size_t str_len)
 {
@@ -242,14 +188,12 @@ static int load_local_storage_btf(void)
 				     strs, sizeof(strs));
 }
 
-static int probe_map_create(enum bpf_map_type map_type, __u32 ifindex)
+static int probe_map_create(enum bpf_map_type map_type)
 {
 	LIBBPF_OPTS(bpf_map_create_opts, opts);
 	int key_size, value_size, max_entries;
 	__u32 btf_key_type_id = 0, btf_value_type_id = 0;
-	int fd = -1, btf_fd = -1, fd_inner = -1, exp_err = 0, err;
-
-	opts.map_ifindex = ifindex;
+	int fd = -1, btf_fd = -1, fd_inner = -1, exp_err = 0, err = 0;
 
 	key_size = sizeof(__u32);
 	value_size = sizeof(__u32);
@@ -277,6 +221,7 @@ static int probe_map_create(enum bpf_map_type map_type)
 	case BPF_MAP_TYPE_SK_STORAGE:
 	case BPF_MAP_TYPE_INODE_STORAGE:
 	case BPF_MAP_TYPE_TASK_STORAGE:
+	case BPF_MAP_TYPE_CGRP_STORAGE:
 		btf_key_type_id = 1;
 		btf_value_type_id = 3;
 		value_size = 8;
@@ -287,9 +232,10 @@ static int probe_map_create(enum bpf_map_type map_type)
 			return btf_fd;
 		break;
 	case BPF_MAP_TYPE_RINGBUF:
+	case BPF_MAP_TYPE_USER_RINGBUF:
 		key_size = 0;
 		value_size = 0;
-		max_entries = 4096;
+		max_entries = sysconf(_SC_PAGE_SIZE);
 		break;
 	case BPF_MAP_TYPE_STRUCT_OPS:
 		/* we'll get -ENOTSUPP for invalid BTF type ID for struct_ops */
@@ -326,12 +272,6 @@ static int probe_map_create(enum bpf_map_type map_type)
 
 	if (map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS ||
 	    map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
-		/* TODO: probe for device, once libbpf has a function to create
-		 * map-in-map for offload
-		 */
-		if (ifindex)
-			goto cleanup;
-
 		fd_inner = bpf_map_create(BPF_MAP_TYPE_HASH, NULL,
 					  sizeof(__u32), sizeof(__u32), 1, NULL);
 		if (fd_inner < 0)
@@ -370,15 +310,10 @@ int libbpf_probe_bpf_map_type(enum bpf_map_type map_type, const void *opts)
 	if (opts)
 		return libbpf_err(-EINVAL);
 
-	ret = probe_map_create(map_type, 0);
+	ret = probe_map_create(map_type);
 	return libbpf_err(ret);
 }
 
-bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex)
-{
-	return probe_map_create(map_type, ifindex) == 1;
-}
-
 int libbpf_probe_bpf_helper(enum bpf_prog_type prog_type, enum bpf_func_id helper_id,
 			    const void *opts)
 {
@@ -407,7 +342,7 @@ int libbpf_probe_bpf_helper(enum bpf_prog_type prog_type, enum bpf_func_id helpe
 	}
 
 	buf[0] = '\0';
-	ret = probe_prog_load(prog_type, insns, insn_cnt, buf, sizeof(buf), 0);
+	ret = probe_prog_load(prog_type, insns, insn_cnt, buf, sizeof(buf));
 	if (ret < 0)
 		return libbpf_err(ret);
 
@@ -427,51 +362,3 @@ int libbpf_probe_bpf_helper(enum bpf_prog_type prog_type, enum bpf_func_id helpe
 		return 0;
 	return 1; /* assume supported */
 }
-
-bool bpf_probe_helper(enum bpf_func_id id, enum bpf_prog_type prog_type,
-		      __u32 ifindex)
-{
-	struct bpf_insn insns[2] = {
-		BPF_EMIT_CALL(id),
-		BPF_EXIT_INSN()
-	};
-	char buf[4096] = {};
-	bool res;
-
-	probe_prog_load(prog_type, insns, ARRAY_SIZE(insns), buf, sizeof(buf), ifindex);
-	res = !grep(buf, "invalid func ") && !grep(buf, "unknown func ");
-
-	if (ifindex) {
-		switch (get_vendor_id(ifindex)) {
-		case 0x19ee: /* Netronome specific */
-			res = res && !grep(buf, "not supported by FW") &&
-			      !grep(buf, "unsupported function id");
-			break;
-		default:
-			break;
-		}
-	}
-
-	return res;
-}
-
-/*
- * Probe for availability of kernel commit (5.3):
- *
- * c04c0d2b968a ("bpf: increase complexity limit and maximum program size")
- */
-bool bpf_probe_large_insn_limit(__u32 ifindex)
-{
-	struct bpf_insn insns[BPF_MAXINSNS + 1];
-	int i;
-
-	for (i = 0; i < BPF_MAXINSNS; i++)
-		insns[i] = BPF_MOV64_IMM(BPF_REG_0, 1);
-	insns[BPF_MAXINSNS] = BPF_EXIT_INSN();
-
-	errno = 0;
-	probe_prog_load(BPF_PROG_TYPE_SCHED_CLS, insns, ARRAY_SIZE(insns), NULL, 0,
-			ifindex);
-
-	return errno != E2BIG && errno != EINVAL;
-}
@@ -3,7 +3,7 @@
 #ifndef __LIBBPF_VERSION_H
 #define __LIBBPF_VERSION_H
 
-#define LIBBPF_MAJOR_VERSION 0
-#define LIBBPF_MINOR_VERSION 7
+#define LIBBPF_MAJOR_VERSION 1
+#define LIBBPF_MINOR_VERSION 1
 
 #endif /* __LIBBPF_VERSION_H */
@@ -697,11 +697,6 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
 		return err;
 	}
 
-static bool is_pow_of_2(size_t x)
-{
-	return x && (x & (x - 1)) == 0;
-}
-
 static int linker_sanity_check_elf(struct src_obj *obj)
 {
 	struct src_sec *sec;
@@ -1340,6 +1335,7 @@ recur:
 	case BTF_KIND_STRUCT:
 	case BTF_KIND_UNION:
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 	case BTF_KIND_FWD:
 	case BTF_KIND_FUNC:
 	case BTF_KIND_VAR:
@@ -1362,6 +1358,7 @@ recur:
 	case BTF_KIND_INT:
 	case BTF_KIND_FLOAT:
 	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
 		/* ignore encoding for int and enum values for enum */
 		if (t1->size != t2->size) {
 			pr_warn("global '%s': incompatible %s '%s' size %u and %u\n",
128  src/netlink.c
@@ -27,6 +27,14 @@ typedef int (*libbpf_dump_nlmsg_t)(void *cookie, void *msg, struct nlattr **tb);
 typedef int (*__dump_nlmsg_t)(struct nlmsghdr *nlmsg, libbpf_dump_nlmsg_t,
 			      void *cookie);

+struct xdp_link_info {
+	__u32 prog_id;
+	__u32 drv_prog_id;
+	__u32 hw_prog_id;
+	__u32 skb_prog_id;
+	__u8 attach_mode;
+};
+
 struct xdp_id_md {
 	int ifindex;
 	__u32 flags;
@@ -87,29 +95,75 @@ enum {
 	NL_DONE,
 };

+static int netlink_recvmsg(int sock, struct msghdr *mhdr, int flags)
+{
+	int len;
+
+	do {
+		len = recvmsg(sock, mhdr, flags);
+	} while (len < 0 && (errno == EINTR || errno == EAGAIN));
+
+	if (len < 0)
+		return -errno;
+	return len;
+}
+
+static int alloc_iov(struct iovec *iov, int len)
+{
+	void *nbuf;
+
+	nbuf = realloc(iov->iov_base, len);
+	if (!nbuf)
+		return -ENOMEM;
+
+	iov->iov_base = nbuf;
+	iov->iov_len = len;
+	return 0;
+}
+
 static int libbpf_netlink_recv(int sock, __u32 nl_pid, int seq,
 			       __dump_nlmsg_t _fn, libbpf_dump_nlmsg_t fn,
 			       void *cookie)
 {
+	struct iovec iov = {};
+	struct msghdr mhdr = {
+		.msg_iov = &iov,
+		.msg_iovlen = 1,
+	};
 	bool multipart = true;
 	struct nlmsgerr *err;
 	struct nlmsghdr *nh;
-	char buf[4096];
 	int len, ret;

+	ret = alloc_iov(&iov, 4096);
+	if (ret)
+		goto done;
+
 	while (multipart) {
 start:
 		multipart = false;
-		len = recv(sock, buf, sizeof(buf), 0);
+		len = netlink_recvmsg(sock, &mhdr, MSG_PEEK | MSG_TRUNC);
 		if (len < 0) {
-			ret = -errno;
+			ret = len;
+			goto done;
+		}
+
+		if (len > iov.iov_len) {
+			ret = alloc_iov(&iov, len);
+			if (ret)
+				goto done;
+		}
+
+		len = netlink_recvmsg(sock, &mhdr, 0);
+		if (len < 0) {
+			ret = len;
 			goto done;
 		}

 		if (len == 0)
 			break;

-		for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
+		for (nh = (struct nlmsghdr *)iov.iov_base; NLMSG_OK(nh, len);
 		     nh = NLMSG_NEXT(nh, len)) {
 			if (nh->nlmsg_pid != nl_pid) {
 				ret = -LIBBPF_ERRNO__WRNGPID;
@@ -130,7 +184,8 @@ start:
 				libbpf_nla_dump_errormsg(nh);
 				goto done;
 			case NLMSG_DONE:
-				return 0;
+				ret = 0;
+				goto done;
 			default:
 				break;
 			}
@@ -142,15 +197,17 @@ start:
 				case NL_NEXT:
 					goto start;
 				case NL_DONE:
-					return 0;
+					ret = 0;
+					goto done;
 				default:
-					return ret;
+					goto done;
 				}
 			}
 		}
 	}
 	ret = 0;
 done:
+	free(iov.iov_base);
 	return ret;
 }
@@ -239,31 +296,6 @@ int bpf_xdp_detach(int ifindex, __u32 flags, const struct bpf_xdp_attach_opts *o
 	return bpf_xdp_attach(ifindex, -1, flags, opts);
 }

-int bpf_set_link_xdp_fd_opts(int ifindex, int fd, __u32 flags,
-			     const struct bpf_xdp_set_link_opts *opts)
-{
-	int old_fd = -1, ret;
-
-	if (!OPTS_VALID(opts, bpf_xdp_set_link_opts))
-		return libbpf_err(-EINVAL);
-
-	if (OPTS_HAS(opts, old_fd)) {
-		old_fd = OPTS_GET(opts, old_fd, -1);
-		flags |= XDP_FLAGS_REPLACE;
-	}
-
-	ret = __bpf_set_link_xdp_fd_replace(ifindex, fd, old_fd, flags);
-	return libbpf_err(ret);
-}
-
-int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags)
-{
-	int ret;
-
-	ret = __bpf_set_link_xdp_fd_replace(ifindex, fd, 0, flags);
-	return libbpf_err(ret);
-}
-
 static int __dump_link_nlmsg(struct nlmsghdr *nlh,
 			     libbpf_dump_nlmsg_t dump_link_nlmsg, void *cookie)
 {
@@ -364,30 +396,6 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)
 	return 0;
 }

-int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
-			  size_t info_size, __u32 flags)
-{
-	LIBBPF_OPTS(bpf_xdp_query_opts, opts);
-	size_t sz;
-	int err;
-
-	if (!info_size)
-		return libbpf_err(-EINVAL);
-
-	err = bpf_xdp_query(ifindex, flags, &opts);
-	if (err)
-		return libbpf_err(err);
-
-	/* struct xdp_link_info field layout matches struct bpf_xdp_query_opts
-	 * layout after sz field
-	 */
-	sz = min(info_size, offsetofend(struct xdp_link_info, attach_mode));
-	memcpy(info, &opts.prog_id, sz);
-	memset((void *)info + sz, 0, info_size - sz);
-
-	return 0;
-}
-
 int bpf_xdp_query_id(int ifindex, int flags, __u32 *prog_id)
 {
 	LIBBPF_OPTS(bpf_xdp_query_opts, opts);
@@ -414,11 +422,6 @@ int bpf_xdp_query_id(int ifindex, int flags, __u32 *prog_id)
 }

-int bpf_get_link_xdp_id(int ifindex, __u32 *prog_id, __u32 flags)
-{
-	return bpf_xdp_query_id(ifindex, flags, prog_id);
-}
-
 typedef int (*qdisc_config_t)(struct libbpf_nla_req *req);

 static int clsact_config(struct libbpf_nla_req *req)
@@ -584,11 +587,12 @@ static int get_tc_info(struct nlmsghdr *nh, libbpf_dump_nlmsg_t fn,

 static int tc_add_fd_and_name(struct libbpf_nla_req *req, int fd)
 {
-	struct bpf_prog_info info = {};
+	struct bpf_prog_info info;
 	__u32 info_len = sizeof(info);
 	char name[256];
 	int len, ret;

+	memset(&info, 0, info_len);
 	ret = bpf_obj_get_info_by_fd(fd, &info, &info_len);
 	if (ret < 0)
 		return ret;
@@ -32,7 +32,7 @@ static struct nlattr *nla_next(const struct nlattr *nla, int *remaining)

 static int nla_ok(const struct nlattr *nla, int remaining)
 {
-	return remaining >= sizeof(*nla) &&
+	return remaining >= (int)sizeof(*nla) &&
 	       nla->nla_len >= sizeof(*nla) &&
 	       nla->nla_len <= remaining;
 }
658  src/relo_core.c
@@ -95,6 +95,7 @@ static const char *core_relo_kind_str(enum bpf_core_relo_kind kind)
 	case BPF_CORE_TYPE_ID_LOCAL: return "local_type_id";
 	case BPF_CORE_TYPE_ID_TARGET: return "target_type_id";
 	case BPF_CORE_TYPE_EXISTS: return "type_exists";
+	case BPF_CORE_TYPE_MATCHES: return "type_matches";
 	case BPF_CORE_TYPE_SIZE: return "type_size";
 	case BPF_CORE_ENUMVAL_EXISTS: return "enumval_exists";
 	case BPF_CORE_ENUMVAL_VALUE: return "enumval_value";
@@ -123,6 +124,7 @@ static bool core_relo_is_type_based(enum bpf_core_relo_kind kind)
 	case BPF_CORE_TYPE_ID_LOCAL:
 	case BPF_CORE_TYPE_ID_TARGET:
 	case BPF_CORE_TYPE_EXISTS:
+	case BPF_CORE_TYPE_MATCHES:
 	case BPF_CORE_TYPE_SIZE:
 		return true;
 	default:
@@ -141,6 +143,86 @@ static bool core_relo_is_enumval_based(enum bpf_core_relo_kind kind)
 	}
 }

+int __bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
+				const struct btf *targ_btf, __u32 targ_id, int level)
+{
+	const struct btf_type *local_type, *targ_type;
+	int depth = 32; /* max recursion depth */
+
+	/* caller made sure that names match (ignoring flavor suffix) */
+	local_type = btf_type_by_id(local_btf, local_id);
+	targ_type = btf_type_by_id(targ_btf, targ_id);
+	if (!btf_kind_core_compat(local_type, targ_type))
+		return 0;
+
+recur:
+	depth--;
+	if (depth < 0)
+		return -EINVAL;
+
+	local_type = skip_mods_and_typedefs(local_btf, local_id, &local_id);
+	targ_type = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
+	if (!local_type || !targ_type)
+		return -EINVAL;
+
+	if (!btf_kind_core_compat(local_type, targ_type))
+		return 0;
+
+	switch (btf_kind(local_type)) {
+	case BTF_KIND_UNKN:
+	case BTF_KIND_STRUCT:
+	case BTF_KIND_UNION:
+	case BTF_KIND_ENUM:
+	case BTF_KIND_FWD:
+	case BTF_KIND_ENUM64:
+		return 1;
+	case BTF_KIND_INT:
+		/* just reject deprecated bitfield-like integers; all other
+		 * integers are by default compatible between each other
+		 */
+		return btf_int_offset(local_type) == 0 && btf_int_offset(targ_type) == 0;
+	case BTF_KIND_PTR:
+		local_id = local_type->type;
+		targ_id = targ_type->type;
+		goto recur;
+	case BTF_KIND_ARRAY:
+		local_id = btf_array(local_type)->type;
+		targ_id = btf_array(targ_type)->type;
+		goto recur;
+	case BTF_KIND_FUNC_PROTO: {
+		struct btf_param *local_p = btf_params(local_type);
+		struct btf_param *targ_p = btf_params(targ_type);
+		__u16 local_vlen = btf_vlen(local_type);
+		__u16 targ_vlen = btf_vlen(targ_type);
+		int i, err;
+
+		if (local_vlen != targ_vlen)
+			return 0;
+
+		for (i = 0; i < local_vlen; i++, local_p++, targ_p++) {
+			if (level <= 0)
+				return -EINVAL;
+
+			skip_mods_and_typedefs(local_btf, local_p->type, &local_id);
+			skip_mods_and_typedefs(targ_btf, targ_p->type, &targ_id);
+			err = __bpf_core_types_are_compat(local_btf, local_id, targ_btf, targ_id,
+							  level - 1);
+			if (err <= 0)
+				return err;
+		}
+
+		/* tail recurse for return type check */
+		skip_mods_and_typedefs(local_btf, local_type->type, &local_id);
+		skip_mods_and_typedefs(targ_btf, targ_type->type, &targ_id);
+		goto recur;
+	}
+	default:
+		pr_warn("unexpected kind %s relocated, local [%d], target [%d]\n",
+			btf_kind_str(local_type), local_id, targ_id);
+		return 0;
+	}
+}
+
 /*
  * Turn bpf_core_relo into a low- and high-level spec representation,
  * validating correctness along the way, as well as calculating resulting
@@ -167,40 +249,39 @@ static bool core_relo_is_enumval_based(enum bpf_core_relo_kind kind)
  * just a parsed access string representation): [0, 1, 2, 3].
  *
  * High-level spec will capture only 3 points:
- *   - intial zero-index access by pointer (&s->... is the same as &s[0]...);
+ *   - initial zero-index access by pointer (&s->... is the same as &s[0]...);
  *   - field 'a' access (corresponds to '2' in low-level spec);
  *   - array element #3 access (corresponds to '3' in low-level spec).
  *
- * Type-based relocations (TYPE_EXISTS/TYPE_SIZE,
+ * Type-based relocations (TYPE_EXISTS/TYPE_MATCHES/TYPE_SIZE,
  * TYPE_ID_LOCAL/TYPE_ID_TARGET) don't capture any field information. Their
  * spec and raw_spec are kept empty.
  *
  * Enum value-based relocations (ENUMVAL_EXISTS/ENUMVAL_VALUE) use access
  * string to specify enumerator's value index that need to be relocated.
  */
-static int bpf_core_parse_spec(const char *prog_name, const struct btf *btf,
-			       __u32 type_id,
-			       const char *spec_str,
-			       enum bpf_core_relo_kind relo_kind,
-			       struct bpf_core_spec *spec)
+int bpf_core_parse_spec(const char *prog_name, const struct btf *btf,
+			const struct bpf_core_relo *relo,
+			struct bpf_core_spec *spec)
 {
 	int access_idx, parsed_len, i;
 	struct bpf_core_accessor *acc;
 	const struct btf_type *t;
-	const char *name;
-	__u32 id;
+	const char *name, *spec_str;
+	__u32 id, name_off;
 	__s64 sz;

+	spec_str = btf__name_by_offset(btf, relo->access_str_off);
 	if (str_is_empty(spec_str) || *spec_str == ':')
 		return -EINVAL;

 	memset(spec, 0, sizeof(*spec));
 	spec->btf = btf;
-	spec->root_type_id = type_id;
-	spec->relo_kind = relo_kind;
+	spec->root_type_id = relo->type_id;
+	spec->relo_kind = relo->kind;

 	/* type-based relocations don't have a field access string */
-	if (core_relo_is_type_based(relo_kind)) {
+	if (core_relo_is_type_based(relo->kind)) {
 		if (strcmp(spec_str, "0"))
 			return -EINVAL;
 		return 0;
@@ -221,7 +302,7 @@ static int bpf_core_parse_spec(const char *prog_name, const struct btf *btf,
 	if (spec->raw_len == 0)
 		return -EINVAL;

-	t = skip_mods_and_typedefs(btf, type_id, &id);
+	t = skip_mods_and_typedefs(btf, relo->type_id, &id);
 	if (!t)
 		return -EINVAL;

@@ -231,16 +312,18 @@ static int bpf_core_parse_spec(const char *prog_name, const struct btf *btf,
 	acc->idx = access_idx;
 	spec->len++;

-	if (core_relo_is_enumval_based(relo_kind)) {
-		if (!btf_is_enum(t) || spec->raw_len > 1 || access_idx >= btf_vlen(t))
+	if (core_relo_is_enumval_based(relo->kind)) {
+		if (!btf_is_any_enum(t) || spec->raw_len > 1 || access_idx >= btf_vlen(t))
 			return -EINVAL;

 		/* record enumerator name in a first accessor */
-		acc->name = btf__name_by_offset(btf, btf_enum(t)[access_idx].name_off);
+		name_off = btf_is_enum(t) ? btf_enum(t)[access_idx].name_off
+					  : btf_enum64(t)[access_idx].name_off;
+		acc->name = btf__name_by_offset(btf, name_off);
 		return 0;
 	}

-	if (!core_relo_is_field_based(relo_kind))
+	if (!core_relo_is_field_based(relo->kind))
 		return -EINVAL;

 	sz = btf__resolve_size(btf, id);
@@ -301,7 +384,7 @@ static int bpf_core_parse_spec(const char *prog_name, const struct btf *btf,
 			spec->bit_offset += access_idx * sz * 8;
 		} else {
 			pr_warn("prog '%s': relo for [%u] %s (at idx %d) captures type [%d] of unexpected kind %s\n",
-				prog_name, type_id, spec_str, i, id, btf_kind_str(t));
+				prog_name, relo->type_id, spec_str, i, id, btf_kind_str(t));
 			return -EINVAL;
 		}
 	}
@@ -341,7 +424,7 @@ recur:

 	if (btf_is_composite(local_type) && btf_is_composite(targ_type))
 		return 1;
-	if (btf_kind(local_type) != btf_kind(targ_type))
+	if (!btf_kind_core_compat(local_type, targ_type))
 		return 0;

 	switch (btf_kind(local_type)) {
@@ -349,6 +432,7 @@ recur:
 	case BTF_KIND_FLOAT:
 		return 1;
 	case BTF_KIND_FWD:
+	case BTF_KIND_ENUM64:
 	case BTF_KIND_ENUM: {
 		const char *local_name, *targ_name;
 		size_t local_len, targ_len;
@@ -478,6 +562,7 @@ static int bpf_core_spec_match(struct bpf_core_spec *local_spec,
 	const struct bpf_core_accessor *local_acc;
 	struct bpf_core_accessor *targ_acc;
 	int i, sz, matched;
+	__u32 name_off;

 	memset(targ_spec, 0, sizeof(*targ_spec));
 	targ_spec->btf = targ_btf;
@@ -485,9 +570,14 @@ static int bpf_core_spec_match(struct bpf_core_spec *local_spec,
 	targ_spec->relo_kind = local_spec->relo_kind;

 	if (core_relo_is_type_based(local_spec->relo_kind)) {
-		return bpf_core_types_are_compat(local_spec->btf,
-						 local_spec->root_type_id,
-						 targ_btf, targ_id);
+		if (local_spec->relo_kind == BPF_CORE_TYPE_MATCHES)
+			return bpf_core_types_match(local_spec->btf,
+						    local_spec->root_type_id,
+						    targ_btf, targ_id);
+		else
+			return bpf_core_types_are_compat(local_spec->btf,
+							 local_spec->root_type_id,
+							 targ_btf, targ_id);
 	}

 	local_acc = &local_spec->spec[0];
@@ -495,18 +585,22 @@ static int bpf_core_spec_match(struct bpf_core_spec *local_spec,

 	if (core_relo_is_enumval_based(local_spec->relo_kind)) {
 		size_t local_essent_len, targ_essent_len;
-		const struct btf_enum *e;
 		const char *targ_name;

 		/* has to resolve to an enum */
 		targ_type = skip_mods_and_typedefs(targ_spec->btf, targ_id, &targ_id);
-		if (!btf_is_enum(targ_type))
+		if (!btf_is_any_enum(targ_type))
 			return 0;

 		local_essent_len = bpf_core_essential_name_len(local_acc->name);

-		for (i = 0, e = btf_enum(targ_type); i < btf_vlen(targ_type); i++, e++) {
-			targ_name = btf__name_by_offset(targ_spec->btf, e->name_off);
+		for (i = 0; i < btf_vlen(targ_type); i++) {
+			if (btf_is_enum(targ_type))
+				name_off = btf_enum(targ_type)[i].name_off;
+			else
+				name_off = btf_enum64(targ_type)[i].name_off;
+
+			targ_name = btf__name_by_offset(targ_spec->btf, name_off);
 			targ_essent_len = bpf_core_essential_name_len(targ_name);
 			if (targ_essent_len != local_essent_len)
 				continue;
@@ -584,7 +678,7 @@ static int bpf_core_spec_match(struct bpf_core_spec *local_spec,
 static int bpf_core_calc_field_relo(const char *prog_name,
 				    const struct bpf_core_relo *relo,
 				    const struct bpf_core_spec *spec,
-				    __u32 *val, __u32 *field_sz, __u32 *type_id,
+				    __u64 *val, __u32 *field_sz, __u32 *type_id,
 				    bool *validate)
 {
 	const struct bpf_core_accessor *acc;
@@ -681,8 +775,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 		*val = byte_sz;
 		break;
 	case BPF_CORE_FIELD_SIGNED:
-		/* enums will be assumed unsigned */
-		*val = btf_is_enum(mt) ||
+		*val = (btf_is_any_enum(mt) && BTF_INFO_KFLAG(mt->info)) ||
 		       (btf_int_encoding(mt) & BTF_INT_SIGNED);
 		if (validate)
 			*validate = true; /* signedness is never ambiguous */
@@ -709,7 +802,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,

 static int bpf_core_calc_type_relo(const struct bpf_core_relo *relo,
 				   const struct bpf_core_spec *spec,
-				   __u32 *val, bool *validate)
+				   __u64 *val, bool *validate)
 {
 	__s64 sz;

@@ -733,6 +826,7 @@ static int bpf_core_calc_type_relo(const struct bpf_core_relo *relo,
 		*validate = false;
 		break;
 	case BPF_CORE_TYPE_EXISTS:
+	case BPF_CORE_TYPE_MATCHES:
 		*val = 1;
 		break;
 	case BPF_CORE_TYPE_SIZE:
@@ -752,10 +846,9 @@ static int bpf_core_calc_type_relo(const struct bpf_core_relo *relo,

 static int bpf_core_calc_enumval_relo(const struct bpf_core_relo *relo,
 				      const struct bpf_core_spec *spec,
-				      __u32 *val)
+				      __u64 *val)
 {
 	const struct btf_type *t;
-	const struct btf_enum *e;

 	switch (relo->kind) {
 	case BPF_CORE_ENUMVAL_EXISTS:
@@ -765,8 +858,10 @@ static int bpf_core_calc_enumval_relo(const struct bpf_core_relo *relo,
 		if (!spec)
 			return -EUCLEAN; /* request instruction poisoning */
 		t = btf_type_by_id(spec->btf, spec->spec[0].type_id);
-		e = btf_enum(t) + spec->spec[0].idx;
-		*val = e->val;
+		if (btf_is_enum(t))
+			*val = btf_enum(t)[spec->spec[0].idx].val;
+		else
+			*val = btf_enum64_value(btf_enum64(t) + spec->spec[0].idx);
 		break;
 	default:
 		return -EOPNOTSUPP;
@@ -775,31 +870,6 @@ static int bpf_core_calc_enumval_relo(const struct bpf_core_relo *relo,
 	return 0;
 }

-struct bpf_core_relo_res
-{
-	/* expected value in the instruction, unless validate == false */
-	__u32 orig_val;
-	/* new value that needs to be patched up to */
-	__u32 new_val;
-	/* relocation unsuccessful, poison instruction, but don't fail load */
-	bool poison;
-	/* some relocations can't be validated against orig_val */
-	bool validate;
-	/* for field byte offset relocations or the forms:
-	 *     *(T *)(rX + <off>) = rY
-	 *     rX = *(T *)(rY + <off>),
-	 * we remember original and resolved field size to adjust direct
-	 * memory loads of pointers and integers; this is necessary for 32-bit
-	 * host kernel architectures, but also allows to automatically
-	 * relocate fields that were resized from, e.g., u32 to u64, etc.
-	 */
-	bool fail_memsz_adjust;
-	__u32 orig_sz;
-	__u32 orig_type_id;
-	__u32 new_sz;
-	__u32 new_type_id;
-};
-
 /* Calculate original and target relocation values, given local and target
  * specs and relocation kind. These values are calculated for each candidate.
  * If there are multiple candidates, resulting values should all be consistent
@@ -951,11 +1021,11 @@ static int insn_bytes_to_bpf_size(__u32 sz)
 * 5. *(T *)(rX + <off>) = rY, where T is one of {u8, u16, u32, u64};
 * 6. *(T *)(rX + <off>) = <imm>, where T is one of {u8, u16, u32, u64}.
 */
-static int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
-			       int insn_idx, const struct bpf_core_relo *relo,
-			       int relo_idx, const struct bpf_core_relo_res *res)
+int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
+			int insn_idx, const struct bpf_core_relo *relo,
+			int relo_idx, const struct bpf_core_relo_res *res)
 {
-	__u32 orig_val, new_val;
+	__u64 orig_val, new_val;
 	__u8 class;

 	class = BPF_CLASS(insn->code);
@@ -980,28 +1050,30 @@ poison:
 		if (BPF_SRC(insn->code) != BPF_K)
 			return -EINVAL;
 		if (res->validate && insn->imm != orig_val) {
-			pr_warn("prog '%s': relo #%d: unexpected insn #%d (ALU/ALU64) value: got %u, exp %u -> %u\n",
+			pr_warn("prog '%s': relo #%d: unexpected insn #%d (ALU/ALU64) value: got %u, exp %llu -> %llu\n",
 				prog_name, relo_idx,
-				insn_idx, insn->imm, orig_val, new_val);
+				insn_idx, insn->imm, (unsigned long long)orig_val,
+				(unsigned long long)new_val);
 			return -EINVAL;
 		}
 		orig_val = insn->imm;
 		insn->imm = new_val;
-		pr_debug("prog '%s': relo #%d: patched insn #%d (ALU/ALU64) imm %u -> %u\n",
+		pr_debug("prog '%s': relo #%d: patched insn #%d (ALU/ALU64) imm %llu -> %llu\n",
 			 prog_name, relo_idx, insn_idx,
-			 orig_val, new_val);
+			 (unsigned long long)orig_val, (unsigned long long)new_val);
 		break;
 	case BPF_LDX:
 	case BPF_ST:
 	case BPF_STX:
 		if (res->validate && insn->off != orig_val) {
-			pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDX/ST/STX) value: got %u, exp %u -> %u\n",
-				prog_name, relo_idx, insn_idx, insn->off, orig_val, new_val);
+			pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDX/ST/STX) value: got %u, exp %llu -> %llu\n",
+				prog_name, relo_idx, insn_idx, insn->off, (unsigned long long)orig_val,
+				(unsigned long long)new_val);
 			return -EINVAL;
 		}
 		if (new_val > SHRT_MAX) {
-			pr_warn("prog '%s': relo #%d: insn #%d (LDX/ST/STX) value too big: %u\n",
-				prog_name, relo_idx, insn_idx, new_val);
+			pr_warn("prog '%s': relo #%d: insn #%d (LDX/ST/STX) value too big: %llu\n",
+				prog_name, relo_idx, insn_idx, (unsigned long long)new_val);
 			return -ERANGE;
 		}
 		if (res->fail_memsz_adjust) {
@@ -1013,8 +1085,9 @@ poison:

 		orig_val = insn->off;
 		insn->off = new_val;
-		pr_debug("prog '%s': relo #%d: patched insn #%d (LDX/ST/STX) off %u -> %u\n",
-			 prog_name, relo_idx, insn_idx, orig_val, new_val);
+		pr_debug("prog '%s': relo #%d: patched insn #%d (LDX/ST/STX) off %llu -> %llu\n",
+			 prog_name, relo_idx, insn_idx, (unsigned long long)orig_val,
+			 (unsigned long long)new_val);

 		if (res->new_sz != res->orig_sz) {
 			int insn_bytes_sz, insn_bpf_sz;
@@ -1050,20 +1123,20 @@ poison:
 			return -EINVAL;
 		}

-		imm = insn[0].imm + ((__u64)insn[1].imm << 32);
+		imm = (__u32)insn[0].imm | ((__u64)insn[1].imm << 32);
 		if (res->validate && imm != orig_val) {
-			pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDIMM64) value: got %llu, exp %u -> %u\n",
+			pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDIMM64) value: got %llu, exp %llu -> %llu\n",
 				prog_name, relo_idx,
 				insn_idx, (unsigned long long)imm,
-				orig_val, new_val);
+				(unsigned long long)orig_val, (unsigned long long)new_val);
 			return -EINVAL;
 		}

 		insn[0].imm = new_val;
-		insn[1].imm = 0; /* currently only 32-bit values are supported */
-		pr_debug("prog '%s': relo #%d: patched insn #%d (LDIMM64) imm64 %llu -> %u\n",
+		insn[1].imm = new_val >> 32;
+		pr_debug("prog '%s': relo #%d: patched insn #%d (LDIMM64) imm64 %llu -> %llu\n",
 			 prog_name, relo_idx, insn_idx,
-			 (unsigned long long)imm, new_val);
+			 (unsigned long long)imm, (unsigned long long)new_val);
 		break;
 	}
 	default:
@ -1080,55 +1153,82 @@ poison:
|
|||||||
* [<type-id>] (<type-name>) + <raw-spec> => <offset>@<spec>,
|
* [<type-id>] (<type-name>) + <raw-spec> => <offset>@<spec>,
|
||||||
* where <spec> is a C-syntax view of recorded field access, e.g.: x.a[3].b
|
* where <spec> is a C-syntax view of recorded field access, e.g.: x.a[3].b
|
||||||
*/
|
*/
|
||||||
static void bpf_core_dump_spec(const char *prog_name, int level, const struct bpf_core_spec *spec)
|
int bpf_core_format_spec(char *buf, size_t buf_sz, const struct bpf_core_spec *spec)
|
||||||
{
|
{
|
||||||
const struct btf_type *t;
|
const struct btf_type *t;
|
||||||
const struct btf_enum *e;
|
|
||||||
const char *s;
|
const char *s;
|
||||||
__u32 type_id;
|
__u32 type_id;
|
||||||
int i;
|
int i, len = 0;
|
||||||
|
|
||||||
|
#define append_buf(fmt, args...) \
|
||||||
|
({ \
|
||||||
|
int r; \
|
||||||
|
r = snprintf(buf, buf_sz, fmt, ##args); \
|
||||||
|
len += r; \
|
||||||
|
if (r >= buf_sz) \
|
||||||
|
r = buf_sz; \
|
||||||
|
buf += r; \
|
||||||
|
buf_sz -= r; \
|
||||||
|
})
|
||||||
|
|
||||||
type_id = spec->root_type_id;
|
type_id = spec->root_type_id;
|
||||||
t = btf_type_by_id(spec->btf, type_id);
|
t = btf_type_by_id(spec->btf, type_id);
|
||||||
s = btf__name_by_offset(spec->btf, t->name_off);
|
s = btf__name_by_offset(spec->btf, t->name_off);
|
||||||
|
|
||||||
libbpf_print(level, "[%u] %s %s", type_id, btf_kind_str(t), str_is_empty(s) ? "<anon>" : s);
|
append_buf("<%s> [%u] %s %s",
|
||||||
|
core_relo_kind_str(spec->relo_kind),
|
||||||
|
type_id, btf_kind_str(t), str_is_empty(s) ? "<anon>" : s);
|
||||||
|
|
||||||
if (core_relo_is_type_based(spec->relo_kind))
|
if (core_relo_is_type_based(spec->relo_kind))
|
||||||
return;
|
return len;
|
||||||
|
|
||||||
if (core_relo_is_enumval_based(spec->relo_kind)) {
|
if (core_relo_is_enumval_based(spec->relo_kind)) {
|
||||||
t = skip_mods_and_typedefs(spec->btf, type_id, NULL);
|
t = skip_mods_and_typedefs(spec->btf, type_id, NULL);
|
||||||
e = btf_enum(t) + spec->raw_spec[0];
|
if (btf_is_enum(t)) {
|
||||||
s = btf__name_by_offset(spec->btf, e->name_off);
|
const struct btf_enum *e;
|
||||||
|
const char *fmt_str;
|
||||||
|
|
||||||
libbpf_print(level, "::%s = %u", s, e->val);
|
e = btf_enum(t) + spec->raw_spec[0];
|
||||||
return;
|
s = btf__name_by_offset(spec->btf, e->name_off);
|
||||||
|
fmt_str = BTF_INFO_KFLAG(t->info) ? "::%s = %d" : "::%s = %u";
|
||||||
|
append_buf(fmt_str, s, e->val);
|
||||||
|
} else {
|
||||||
|
const struct btf_enum64 *e;
|
||||||
|
const char *fmt_str;
|
||||||
|
|
||||||
|
e = btf_enum64(t) + spec->raw_spec[0];
|
||||||
|
s = btf__name_by_offset(spec->btf, e->name_off);
|
||||||
|
fmt_str = BTF_INFO_KFLAG(t->info) ? "::%s = %lld" : "::%s = %llu";
|
||||||
|
append_buf(fmt_str, s, (unsigned long long)btf_enum64_value(e));
|
||||||
|
}
|
||||||
|
return len;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (core_relo_is_field_based(spec->relo_kind)) {
|
if (core_relo_is_field_based(spec->relo_kind)) {
|
||||||
for (i = 0; i < spec->len; i++) {
|
for (i = 0; i < spec->len; i++) {
|
||||||
if (spec->spec[i].name)
|
if (spec->spec[i].name)
|
||||||
libbpf_print(level, ".%s", spec->spec[i].name);
|
append_buf(".%s", spec->spec[i].name);
|
||||||
else if (i > 0 || spec->spec[i].idx > 0)
|
else if (i > 0 || spec->spec[i].idx > 0)
|
||||||
libbpf_print(level, "[%u]", spec->spec[i].idx);
|
append_buf("[%u]", spec->spec[i].idx);
|
||||||
}
|
}
|
||||||
|
|
||||||
libbpf_print(level, " (");
|
append_buf(" (");
|
||||||
for (i = 0; i < spec->raw_len; i++)
|
for (i = 0; i < spec->raw_len; i++)
|
||||||
libbpf_print(level, "%s%d", i == 0 ? "" : ":", spec->raw_spec[i]);
|
append_buf("%s%d", i == 0 ? "" : ":", spec->raw_spec[i]);
|
||||||
|
|
||||||
if (spec->bit_offset % 8)
|
if (spec->bit_offset % 8)
|
||||||
libbpf_print(level, " @ offset %u.%u)",
|
append_buf(" @ offset %u.%u)", spec->bit_offset / 8, spec->bit_offset % 8);
|
||||||
spec->bit_offset / 8, spec->bit_offset % 8);
|
|
||||||
else
|
else
|
||||||
libbpf_print(level, " @ offset %u)", spec->bit_offset / 8);
|
append_buf(" @ offset %u)", spec->bit_offset / 8);
|
||||||
return;
|
return len;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
return len;
|
||||||
|
#undef append_buf
|
||||||
}
|
}
|
||||||
 
 /*
- * CO-RE relocate single instruction.
+ * Calculate CO-RE relocation target result.
  *
  * The outline and important points of the algorithm:
  * 1. For given local type, find corresponding candidate target types.
@@ -1159,11 +1259,11 @@ static void bpf_core_dump_spec(const char *prog_name, int level, const struct bp
  * 3. It is supported and expected that there might be multiple flavors
  *    matching the spec. As long as all the specs resolve to the same set of
  *    offsets across all candidates, there is no error. If there is any
- *    ambiguity, CO-RE relocation will fail. This is necessary to accomodate
- *    imprefection of BTF deduplication, which can cause slight duplication of
+ *    ambiguity, CO-RE relocation will fail. This is necessary to accommodate
+ *    imperfection of BTF deduplication, which can cause slight duplication of
  *    the same BTF type, if some directly or indirectly referenced (by
  *    pointer) type gets resolved to different actual types in different
- *    object files. If such situation occurs, deduplicated BTF will end up
+ *    object files. If such a situation occurs, deduplicated BTF will end up
  *    with two (or more) structurally identical types, which differ only in
  *    types they refer to through pointer. This should be OK in most cases and
  *    is not an error.
@@ -1177,22 +1277,22 @@ static void bpf_core_dump_spec(const char *prog_name, int level, const struct bp
  *    between multiple relocations for the same type ID and is updated as some
  *    of the candidates are pruned due to structural incompatibility.
  */
-int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
-			     int insn_idx,
-			     const struct bpf_core_relo *relo,
-			     int relo_idx,
-			     const struct btf *local_btf,
-			     struct bpf_core_cand_list *cands,
-			     struct bpf_core_spec *specs_scratch)
+int bpf_core_calc_relo_insn(const char *prog_name,
+			    const struct bpf_core_relo *relo,
+			    int relo_idx,
+			    const struct btf *local_btf,
+			    struct bpf_core_cand_list *cands,
+			    struct bpf_core_spec *specs_scratch,
+			    struct bpf_core_relo_res *targ_res)
 {
 	struct bpf_core_spec *local_spec = &specs_scratch[0];
 	struct bpf_core_spec *cand_spec = &specs_scratch[1];
 	struct bpf_core_spec *targ_spec = &specs_scratch[2];
-	struct bpf_core_relo_res cand_res, targ_res;
+	struct bpf_core_relo_res cand_res;
 	const struct btf_type *local_type;
 	const char *local_name;
 	__u32 local_id;
-	const char *spec_str;
+	char spec_buf[256];
 	int i, j, err;
 
 	local_id = relo->type_id;
@@ -1201,38 +1301,34 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 	if (!local_name)
 		return -EINVAL;
 
-	spec_str = btf__name_by_offset(local_btf, relo->access_str_off);
-	if (str_is_empty(spec_str))
-		return -EINVAL;
-
-	err = bpf_core_parse_spec(prog_name, local_btf, local_id, spec_str,
-				  relo->kind, local_spec);
+	err = bpf_core_parse_spec(prog_name, local_btf, relo, local_spec);
 	if (err) {
+		const char *spec_str;
+
+		spec_str = btf__name_by_offset(local_btf, relo->access_str_off);
 		pr_warn("prog '%s': relo #%d: parsing [%d] %s %s + %s failed: %d\n",
 			prog_name, relo_idx, local_id, btf_kind_str(local_type),
 			str_is_empty(local_name) ? "<anon>" : local_name,
-			spec_str, err);
+			spec_str ?: "<?>", err);
 		return -EINVAL;
 	}
 
-	pr_debug("prog '%s': relo #%d: kind <%s> (%d), spec is ", prog_name,
-		 relo_idx, core_relo_kind_str(relo->kind), relo->kind);
-	bpf_core_dump_spec(prog_name, LIBBPF_DEBUG, local_spec);
-	libbpf_print(LIBBPF_DEBUG, "\n");
+	bpf_core_format_spec(spec_buf, sizeof(spec_buf), local_spec);
+	pr_debug("prog '%s': relo #%d: %s\n", prog_name, relo_idx, spec_buf);
 
 	/* TYPE_ID_LOCAL relo is special and doesn't need candidate search */
 	if (relo->kind == BPF_CORE_TYPE_ID_LOCAL) {
 		/* bpf_insn's imm value could get out of sync during linking */
-		memset(&targ_res, 0, sizeof(targ_res));
-		targ_res.validate = false;
-		targ_res.poison = false;
-		targ_res.orig_val = local_spec->root_type_id;
-		targ_res.new_val = local_spec->root_type_id;
-		goto patch_insn;
+		memset(targ_res, 0, sizeof(*targ_res));
+		targ_res->validate = false;
+		targ_res->poison = false;
+		targ_res->orig_val = local_spec->root_type_id;
+		targ_res->new_val = local_spec->root_type_id;
+		return 0;
 	}
 
 	/* libbpf doesn't support candidate search for anonymous types */
-	if (str_is_empty(spec_str)) {
+	if (str_is_empty(local_name)) {
 		pr_warn("prog '%s': relo #%d: <%s> (%d) relocation doesn't support anonymous types\n",
 			prog_name, relo_idx, core_relo_kind_str(relo->kind), relo->kind);
 		return -EOPNOTSUPP;
@@ -1242,17 +1338,15 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 		err = bpf_core_spec_match(local_spec, cands->cands[i].btf,
 					  cands->cands[i].id, cand_spec);
 		if (err < 0) {
-			pr_warn("prog '%s': relo #%d: error matching candidate #%d ",
-				prog_name, relo_idx, i);
-			bpf_core_dump_spec(prog_name, LIBBPF_WARN, cand_spec);
-			libbpf_print(LIBBPF_WARN, ": %d\n", err);
+			bpf_core_format_spec(spec_buf, sizeof(spec_buf), cand_spec);
+			pr_warn("prog '%s': relo #%d: error matching candidate #%d %s: %d\n ",
+				prog_name, relo_idx, i, spec_buf, err);
 			return err;
 		}
 
-		pr_debug("prog '%s': relo #%d: %s candidate #%d ", prog_name,
-			 relo_idx, err == 0 ? "non-matching" : "matching", i);
-		bpf_core_dump_spec(prog_name, LIBBPF_DEBUG, cand_spec);
-		libbpf_print(LIBBPF_DEBUG, "\n");
+		bpf_core_format_spec(spec_buf, sizeof(spec_buf), cand_spec);
+		pr_debug("prog '%s': relo #%d: %s candidate #%d %s\n", prog_name,
+			 relo_idx, err == 0 ? "non-matching" : "matching", i, spec_buf);
 
 		if (err == 0)
 			continue;
@@ -1262,7 +1356,7 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 			return err;
 
 		if (j == 0) {
-			targ_res = cand_res;
+			*targ_res = cand_res;
 			*targ_spec = *cand_spec;
 		} else if (cand_spec->bit_offset != targ_spec->bit_offset) {
 			/* if there are many field relo candidates, they
@@ -1272,15 +1366,18 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 				prog_name, relo_idx, cand_spec->bit_offset,
 				targ_spec->bit_offset);
 			return -EINVAL;
-		} else if (cand_res.poison != targ_res.poison || cand_res.new_val != targ_res.new_val) {
+		} else if (cand_res.poison != targ_res->poison ||
+			   cand_res.new_val != targ_res->new_val) {
 			/* all candidates should result in the same relocation
 			 * decision and value, otherwise it's dangerous to
 			 * proceed due to ambiguity
 			 */
-			pr_warn("prog '%s': relo #%d: relocation decision ambiguity: %s %u != %s %u\n",
+			pr_warn("prog '%s': relo #%d: relocation decision ambiguity: %s %llu != %s %llu\n",
 				prog_name, relo_idx,
-				cand_res.poison ? "failure" : "success", cand_res.new_val,
-				targ_res.poison ? "failure" : "success", targ_res.new_val);
+				cand_res.poison ? "failure" : "success",
+				(unsigned long long)cand_res.new_val,
+				targ_res->poison ? "failure" : "success",
+				(unsigned long long)targ_res->new_val);
 			return -EINVAL;
 		}
 
@@ -1314,19 +1411,280 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 			prog_name, relo_idx);
 
 		/* calculate single target relo result explicitly */
-		err = bpf_core_calc_relo(prog_name, relo, relo_idx, local_spec, NULL, &targ_res);
+		err = bpf_core_calc_relo(prog_name, relo, relo_idx, local_spec, NULL, targ_res);
 		if (err)
 			return err;
 	}
 
-patch_insn:
-	/* bpf_core_patch_insn() should know how to handle missing targ_spec */
-	err = bpf_core_patch_insn(prog_name, insn, insn_idx, relo, relo_idx, &targ_res);
-	if (err) {
-		pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
-			prog_name, relo_idx, relo->insn_off / 8, err);
-		return -EINVAL;
-	}
-
 	return 0;
 }
 
+static bool bpf_core_names_match(const struct btf *local_btf, size_t local_name_off,
+				 const struct btf *targ_btf, size_t targ_name_off)
+{
+	const char *local_n, *targ_n;
+	size_t local_len, targ_len;
+
+	local_n = btf__name_by_offset(local_btf, local_name_off);
+	targ_n = btf__name_by_offset(targ_btf, targ_name_off);
+
+	if (str_is_empty(targ_n))
+		return str_is_empty(local_n);
+
+	targ_len = bpf_core_essential_name_len(targ_n);
+	local_len = bpf_core_essential_name_len(local_n);
+
+	return targ_len == local_len && strncmp(local_n, targ_n, local_len) == 0;
+}
+
+static int bpf_core_enums_match(const struct btf *local_btf, const struct btf_type *local_t,
+				const struct btf *targ_btf, const struct btf_type *targ_t)
+{
+	__u16 local_vlen = btf_vlen(local_t);
+	__u16 targ_vlen = btf_vlen(targ_t);
+	int i, j;
+
+	if (local_t->size != targ_t->size)
+		return 0;
+
+	if (local_vlen > targ_vlen)
+		return 0;
+
+	/* iterate over the local enum's variants and make sure each has
+	 * a symbolic name correspondent in the target
+	 */
+	for (i = 0; i < local_vlen; i++) {
+		bool matched = false;
+		__u32 local_n_off, targ_n_off;
+
+		local_n_off = btf_is_enum(local_t) ? btf_enum(local_t)[i].name_off :
+						     btf_enum64(local_t)[i].name_off;
+
+		for (j = 0; j < targ_vlen; j++) {
+			targ_n_off = btf_is_enum(targ_t) ? btf_enum(targ_t)[j].name_off :
+							   btf_enum64(targ_t)[j].name_off;
+
+			if (bpf_core_names_match(local_btf, local_n_off, targ_btf, targ_n_off)) {
+				matched = true;
+				break;
+			}
+		}
+
+		if (!matched)
+			return 0;
+	}
+	return 1;
+}
+
+static int bpf_core_composites_match(const struct btf *local_btf, const struct btf_type *local_t,
+				     const struct btf *targ_btf, const struct btf_type *targ_t,
+				     bool behind_ptr, int level)
+{
+	const struct btf_member *local_m = btf_members(local_t);
+	__u16 local_vlen = btf_vlen(local_t);
+	__u16 targ_vlen = btf_vlen(targ_t);
+	int i, j, err;
+
+	if (local_vlen > targ_vlen)
+		return 0;
+
+	/* check that all local members have a match in the target */
+	for (i = 0; i < local_vlen; i++, local_m++) {
+		const struct btf_member *targ_m = btf_members(targ_t);
+		bool matched = false;
+
+		for (j = 0; j < targ_vlen; j++, targ_m++) {
+			if (!bpf_core_names_match(local_btf, local_m->name_off,
+						  targ_btf, targ_m->name_off))
+				continue;
+
+			err = __bpf_core_types_match(local_btf, local_m->type, targ_btf,
+						     targ_m->type, behind_ptr, level - 1);
+			if (err < 0)
+				return err;
+			if (err > 0) {
+				matched = true;
+				break;
+			}
+		}
+
+		if (!matched)
+			return 0;
+	}
+	return 1;
+}
+
+/* Check that two types "match". This function assumes that root types were
+ * already checked for name match.
+ *
+ * The matching relation is defined as follows:
+ * - modifiers and typedefs are stripped (and, hence, effectively ignored)
+ * - generally speaking types need to be of same kind (struct vs. struct, union
+ *   vs. union, etc.)
+ *   - exceptions are struct/union behind a pointer which could also match a
+ *     forward declaration of a struct or union, respectively, and enum vs.
+ *     enum64 (see below)
+ * Then, depending on type:
+ * - integers:
+ *   - match if size and signedness match
+ * - arrays & pointers:
+ *   - target types are recursively matched
+ * - structs & unions:
+ *   - local members need to exist in target with the same name
+ *   - for each member we recursively check match unless it is already behind a
+ *     pointer, in which case we only check matching names and compatible kind
+ * - enums:
+ *   - local variants have to have a match in target by symbolic name (but not
+ *     numeric value)
+ *   - size has to match (but enum may match enum64 and vice versa)
+ * - function pointers:
+ *   - number and position of arguments in local type has to match target
+ *   - for each argument and the return value we recursively check match
+ */
+int __bpf_core_types_match(const struct btf *local_btf, __u32 local_id, const struct btf *targ_btf,
+			   __u32 targ_id, bool behind_ptr, int level)
+{
+	const struct btf_type *local_t, *targ_t;
+	int depth = 32; /* max recursion depth */
+	__u16 local_k, targ_k;
+
+	if (level <= 0)
+		return -EINVAL;
+
+	local_t = btf_type_by_id(local_btf, local_id);
+	targ_t = btf_type_by_id(targ_btf, targ_id);
+
+recur:
+	depth--;
+	if (depth < 0)
+		return -EINVAL;
+
+	local_t = skip_mods_and_typedefs(local_btf, local_id, &local_id);
+	targ_t = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
+	if (!local_t || !targ_t)
+		return -EINVAL;
+
+	/* While the name check happens after typedefs are skipped, root-level
+	 * typedefs would still be name-matched as that's the contract with
+	 * callers.
+	 */
+	if (!bpf_core_names_match(local_btf, local_t->name_off, targ_btf, targ_t->name_off))
+		return 0;
+
+	local_k = btf_kind(local_t);
+	targ_k = btf_kind(targ_t);
+
+	switch (local_k) {
+	case BTF_KIND_UNKN:
+		return local_k == targ_k;
+	case BTF_KIND_FWD: {
+		bool local_f = BTF_INFO_KFLAG(local_t->info);
+
+		if (behind_ptr) {
+			if (local_k == targ_k)
+				return local_f == BTF_INFO_KFLAG(targ_t->info);
+
+			/* for forward declarations kflag dictates whether the
+			 * target is a struct (0) or union (1)
+			 */
+			return (targ_k == BTF_KIND_STRUCT && !local_f) ||
+			       (targ_k == BTF_KIND_UNION && local_f);
+		} else {
+			if (local_k != targ_k)
+				return 0;
+
+			/* match if the forward declaration is for the same kind */
+			return local_f == BTF_INFO_KFLAG(targ_t->info);
+		}
+	}
+	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
+		if (!btf_is_any_enum(targ_t))
+			return 0;
+
+		return bpf_core_enums_match(local_btf, local_t, targ_btf, targ_t);
+	case BTF_KIND_STRUCT:
+	case BTF_KIND_UNION:
+		if (behind_ptr) {
+			bool targ_f = BTF_INFO_KFLAG(targ_t->info);
+
+			if (local_k == targ_k)
+				return 1;
+
+			if (targ_k != BTF_KIND_FWD)
+				return 0;
+
+			return (local_k == BTF_KIND_UNION) == targ_f;
+		} else {
+			if (local_k != targ_k)
+				return 0;
+
+			return bpf_core_composites_match(local_btf, local_t, targ_btf, targ_t,
+							 behind_ptr, level);
+		}
+	case BTF_KIND_INT: {
+		__u8 local_sgn;
+		__u8 targ_sgn;
+
+		if (local_k != targ_k)
+			return 0;
+
+		local_sgn = btf_int_encoding(local_t) & BTF_INT_SIGNED;
+		targ_sgn = btf_int_encoding(targ_t) & BTF_INT_SIGNED;
+
+		return local_t->size == targ_t->size && local_sgn == targ_sgn;
+	}
+	case BTF_KIND_PTR:
+		if (local_k != targ_k)
+			return 0;
+
+		behind_ptr = true;
+
+		local_id = local_t->type;
+		targ_id = targ_t->type;
+		goto recur;
+	case BTF_KIND_ARRAY: {
+		const struct btf_array *local_array = btf_array(local_t);
+		const struct btf_array *targ_array = btf_array(targ_t);
+
+		if (local_k != targ_k)
+			return 0;
+
+		if (local_array->nelems != targ_array->nelems)
+			return 0;
+
+		local_id = local_array->type;
+		targ_id = targ_array->type;
+		goto recur;
+	}
+	case BTF_KIND_FUNC_PROTO: {
+		struct btf_param *local_p = btf_params(local_t);
+		struct btf_param *targ_p = btf_params(targ_t);
+		__u16 local_vlen = btf_vlen(local_t);
+		__u16 targ_vlen = btf_vlen(targ_t);
+		int i, err;
+
+		if (local_k != targ_k)
+			return 0;
+
+		if (local_vlen != targ_vlen)
+			return 0;
+
+		for (i = 0; i < local_vlen; i++, local_p++, targ_p++) {
+			err = __bpf_core_types_match(local_btf, local_p->type, targ_btf,
+						     targ_p->type, behind_ptr, level - 1);
+			if (err <= 0)
+				return err;
+		}
+
+		/* tail recurse for return type check */
+		local_id = local_t->type;
+		targ_id = targ_t->type;
+		goto recur;
+	}
+	default:
+		pr_warn("unexpected kind %s relocated, local [%d], target [%d]\n",
+			btf_kind_str(local_t), local_id, targ_id);
+		return 0;
+	}
+}
@@ -44,14 +44,56 @@ struct bpf_core_spec {
 	__u32 bit_offset;
 };
 
-int bpf_core_apply_relo_insn(const char *prog_name,
-			     struct bpf_insn *insn, int insn_idx,
-			     const struct bpf_core_relo *relo, int relo_idx,
-			     const struct btf *local_btf,
-			     struct bpf_core_cand_list *cands,
-			     struct bpf_core_spec *specs_scratch);
+struct bpf_core_relo_res {
+	/* expected value in the instruction, unless validate == false */
+	__u64 orig_val;
+	/* new value that needs to be patched up to */
+	__u64 new_val;
+	/* relocation unsuccessful, poison instruction, but don't fail load */
+	bool poison;
+	/* some relocations can't be validated against orig_val */
+	bool validate;
+	/* for field byte offset relocations or the forms:
+	 *     *(T *)(rX + <off>) = rY
+	 *     rX = *(T *)(rY + <off>),
+	 * we remember original and resolved field size to adjust direct
+	 * memory loads of pointers and integers; this is necessary for 32-bit
+	 * host kernel architectures, but also allows to automatically
+	 * relocate fields that were resized from, e.g., u32 to u64, etc.
+	 */
+	bool fail_memsz_adjust;
+	__u32 orig_sz;
+	__u32 orig_type_id;
+	__u32 new_sz;
+	__u32 new_type_id;
+};
+
+int __bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
+				const struct btf *targ_btf, __u32 targ_id, int level);
 int bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
 			      const struct btf *targ_btf, __u32 targ_id);
+int __bpf_core_types_match(const struct btf *local_btf, __u32 local_id, const struct btf *targ_btf,
+			   __u32 targ_id, bool behind_ptr, int level);
+int bpf_core_types_match(const struct btf *local_btf, __u32 local_id, const struct btf *targ_btf,
+			 __u32 targ_id);
 
 size_t bpf_core_essential_name_len(const char *name);
+
+int bpf_core_calc_relo_insn(const char *prog_name,
+			    const struct bpf_core_relo *relo, int relo_idx,
+			    const struct btf *local_btf,
+			    struct bpf_core_cand_list *cands,
+			    struct bpf_core_spec *specs_scratch,
+			    struct bpf_core_relo_res *targ_res);
+
+int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
+			int insn_idx, const struct bpf_core_relo *relo,
+			int relo_idx, const struct bpf_core_relo_res *res);
+
+int bpf_core_parse_spec(const char *prog_name, const struct btf *btf,
+			const struct bpf_core_relo *relo,
			struct bpf_core_spec *spec);
+
+int bpf_core_format_spec(char *buf, size_t buf_sz, const struct bpf_core_spec *spec);
+
 #endif
|
297
src/ringbuf.c
@ -16,6 +16,7 @@
|
|||||||
#include <asm/barrier.h>
|
#include <asm/barrier.h>
|
||||||
#include <sys/mman.h>
|
#include <sys/mman.h>
|
||||||
#include <sys/epoll.h>
|
#include <sys/epoll.h>
|
||||||
|
#include <time.h>
|
||||||
|
|
||||||
#include "libbpf.h"
|
#include "libbpf.h"
|
||||||
#include "libbpf_internal.h"
|
#include "libbpf_internal.h"
|
||||||
@ -39,6 +40,23 @@ struct ring_buffer {
|
|||||||
int ring_cnt;
|
int ring_cnt;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
struct user_ring_buffer {
|
||||||
|
struct epoll_event event;
|
||||||
|
+	unsigned long *consumer_pos;
+	unsigned long *producer_pos;
+	void *data;
+	unsigned long mask;
+	size_t page_size;
+	int map_fd;
+	int epoll_fd;
+};
+
+/* 8-byte ring buffer header structure */
+struct ringbuf_hdr {
+	__u32 len;
+	__u32 pad;
+};
+
 static void ringbuf_unmap_ring(struct ring_buffer *rb, struct ring *r)
 {
 	if (r->consumer_pos) {
@@ -59,6 +77,7 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
 	__u32 len = sizeof(info);
 	struct epoll_event *e;
 	struct ring *r;
+	__u64 mmap_sz;
 	void *tmp;
 	int err;
 
@@ -97,8 +116,7 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
 	r->mask = info.max_entries - 1;
 
 	/* Map writable consumer page */
-	tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
-		   map_fd, 0);
+	tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);
 	if (tmp == MAP_FAILED) {
 		err = -errno;
 		pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
@@ -110,9 +128,13 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
 	/* Map read-only producer page and data pages. We map twice as big
 	 * data size to allow simple reading of samples that wrap around the
 	 * end of a ring buffer. See kernel implementation for details.
-	 * */
-	tmp = mmap(NULL, rb->page_size + 2 * info.max_entries, PROT_READ,
-		   MAP_SHARED, map_fd, rb->page_size);
+	 */
+	mmap_sz = rb->page_size + 2 * (__u64)info.max_entries;
+	if (mmap_sz != (__u64)(size_t)mmap_sz) {
+		pr_warn("ringbuf: ring buffer size (%u) is too big\n", info.max_entries);
+		return libbpf_err(-E2BIG);
+	}
+	tmp = mmap(NULL, (size_t)mmap_sz, PROT_READ, MAP_SHARED, map_fd, rb->page_size);
 	if (tmp == MAP_FAILED) {
 		err = -errno;
 		ringbuf_unmap_ring(rb, r);
@@ -202,7 +224,7 @@ static inline int roundup_len(__u32 len)
 	return (len + 7) / 8 * 8;
 }
 
-static int64_t ringbuf_process_ring(struct ring* r)
+static int64_t ringbuf_process_ring(struct ring *r)
 {
 	int *len_ptr, len, err;
 	/* 64-bit to avoid overflow in case of extreme application behavior */
@@ -300,3 +322,266 @@ int ring_buffer__epoll_fd(const struct ring_buffer *rb)
 {
 	return rb->epoll_fd;
 }
+
+static void user_ringbuf_unmap_ring(struct user_ring_buffer *rb)
+{
+	if (rb->consumer_pos) {
+		munmap(rb->consumer_pos, rb->page_size);
+		rb->consumer_pos = NULL;
+	}
+	if (rb->producer_pos) {
+		munmap(rb->producer_pos, rb->page_size + 2 * (rb->mask + 1));
+		rb->producer_pos = NULL;
+	}
+}
+
+void user_ring_buffer__free(struct user_ring_buffer *rb)
+{
+	if (!rb)
+		return;
+
+	user_ringbuf_unmap_ring(rb);
+
+	if (rb->epoll_fd >= 0)
+		close(rb->epoll_fd);
+
+	free(rb);
+}
+
+static int user_ringbuf_map(struct user_ring_buffer *rb, int map_fd)
+{
+	struct bpf_map_info info;
+	__u32 len = sizeof(info);
+	__u64 mmap_sz;
+	void *tmp;
+	struct epoll_event *rb_epoll;
+	int err;
+
+	memset(&info, 0, sizeof(info));
+
+	err = bpf_obj_get_info_by_fd(map_fd, &info, &len);
+	if (err) {
+		err = -errno;
+		pr_warn("user ringbuf: failed to get map info for fd=%d: %d\n", map_fd, err);
+		return err;
+	}
+
+	if (info.type != BPF_MAP_TYPE_USER_RINGBUF) {
+		pr_warn("user ringbuf: map fd=%d is not BPF_MAP_TYPE_USER_RINGBUF\n", map_fd);
+		return -EINVAL;
+	}
+
+	rb->map_fd = map_fd;
+	rb->mask = info.max_entries - 1;
+
+	/* Map read-only consumer page */
+	tmp = mmap(NULL, rb->page_size, PROT_READ, MAP_SHARED, map_fd, 0);
+	if (tmp == MAP_FAILED) {
+		err = -errno;
+		pr_warn("user ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
+			map_fd, err);
+		return err;
+	}
+	rb->consumer_pos = tmp;
+
+	/* Map read-write the producer page and data pages. We map the data
+	 * region as twice the total size of the ring buffer to allow the
+	 * simple reading and writing of samples that wrap around the end of
+	 * the buffer. See the kernel implementation for details.
+	 */
+	mmap_sz = rb->page_size + 2 * (__u64)info.max_entries;
+	if (mmap_sz != (__u64)(size_t)mmap_sz) {
+		pr_warn("user ringbuf: ring buf size (%u) is too big\n", info.max_entries);
+		return -E2BIG;
+	}
+	tmp = mmap(NULL, (size_t)mmap_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
+		   map_fd, rb->page_size);
+	if (tmp == MAP_FAILED) {
+		err = -errno;
+		pr_warn("user ringbuf: failed to mmap data pages for map fd=%d: %d\n",
+			map_fd, err);
+		return err;
+	}
+
+	rb->producer_pos = tmp;
+	rb->data = tmp + rb->page_size;
+
+	rb_epoll = &rb->event;
+	rb_epoll->events = EPOLLOUT;
+	if (epoll_ctl(rb->epoll_fd, EPOLL_CTL_ADD, map_fd, rb_epoll) < 0) {
+		err = -errno;
+		pr_warn("user ringbuf: failed to epoll add map fd=%d: %d\n", map_fd, err);
+		return err;
+	}
+
+	return 0;
+}
+
+struct user_ring_buffer *
+user_ring_buffer__new(int map_fd, const struct user_ring_buffer_opts *opts)
+{
+	struct user_ring_buffer *rb;
+	int err;
+
+	if (!OPTS_VALID(opts, user_ring_buffer_opts))
+		return errno = EINVAL, NULL;
+
+	rb = calloc(1, sizeof(*rb));
+	if (!rb)
+		return errno = ENOMEM, NULL;
+
+	rb->page_size = getpagesize();
+
+	rb->epoll_fd = epoll_create1(EPOLL_CLOEXEC);
+	if (rb->epoll_fd < 0) {
+		err = -errno;
+		pr_warn("user ringbuf: failed to create epoll instance: %d\n", err);
+		goto err_out;
+	}
+
+	err = user_ringbuf_map(rb, map_fd);
+	if (err)
+		goto err_out;
+
+	return rb;
+
+err_out:
+	user_ring_buffer__free(rb);
+	return errno = -err, NULL;
+}
+
+static void user_ringbuf_commit(struct user_ring_buffer *rb, void *sample, bool discard)
+{
+	__u32 new_len;
+	struct ringbuf_hdr *hdr;
+	uintptr_t hdr_offset;
+
+	hdr_offset = rb->mask + 1 + (sample - rb->data) - BPF_RINGBUF_HDR_SZ;
+	hdr = rb->data + (hdr_offset & rb->mask);
+
+	new_len = hdr->len & ~BPF_RINGBUF_BUSY_BIT;
+	if (discard)
+		new_len |= BPF_RINGBUF_DISCARD_BIT;
+
+	/* Synchronizes with smp_load_acquire() in __bpf_user_ringbuf_peek() in
+	 * the kernel.
+	 */
+	__atomic_exchange_n(&hdr->len, new_len, __ATOMIC_ACQ_REL);
+}
+
+void user_ring_buffer__discard(struct user_ring_buffer *rb, void *sample)
+{
+	user_ringbuf_commit(rb, sample, true);
+}
+
+void user_ring_buffer__submit(struct user_ring_buffer *rb, void *sample)
+{
+	user_ringbuf_commit(rb, sample, false);
+}
+
+void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size)
+{
+	__u32 avail_size, total_size, max_size;
+	/* 64-bit to avoid overflow in case of extreme application behavior */
+	__u64 cons_pos, prod_pos;
+	struct ringbuf_hdr *hdr;
+
+	/* The top two bits are used as special flags */
+	if (size & (BPF_RINGBUF_BUSY_BIT | BPF_RINGBUF_DISCARD_BIT))
+		return errno = E2BIG, NULL;
+
+	/* Synchronizes with smp_store_release() in __bpf_user_ringbuf_peek() in
+	 * the kernel.
+	 */
+	cons_pos = smp_load_acquire(rb->consumer_pos);
+	/* Synchronizes with smp_store_release() in user_ringbuf_commit() */
+	prod_pos = smp_load_acquire(rb->producer_pos);
+
+	max_size = rb->mask + 1;
+	avail_size = max_size - (prod_pos - cons_pos);
+	/* Round up total size to a multiple of 8. */
+	total_size = (size + BPF_RINGBUF_HDR_SZ + 7) / 8 * 8;
+
+	if (total_size > max_size)
+		return errno = E2BIG, NULL;
+
+	if (avail_size < total_size)
+		return errno = ENOSPC, NULL;
+
+	hdr = rb->data + (prod_pos & rb->mask);
+	hdr->len = size | BPF_RINGBUF_BUSY_BIT;
+	hdr->pad = 0;
+
+	/* Synchronizes with smp_load_acquire() in __bpf_user_ringbuf_peek() in
+	 * the kernel.
+	 */
+	smp_store_release(rb->producer_pos, prod_pos + total_size);
+
+	return (void *)rb->data + ((prod_pos + BPF_RINGBUF_HDR_SZ) & rb->mask);
+}
+
+static __u64 ns_elapsed_timespec(const struct timespec *start, const struct timespec *end)
+{
+	__u64 start_ns, end_ns, ns_per_s = 1000000000;
+
+	start_ns = (__u64)start->tv_sec * ns_per_s + start->tv_nsec;
+	end_ns = (__u64)end->tv_sec * ns_per_s + end->tv_nsec;
+
+	return end_ns - start_ns;
+}
+
+void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb, __u32 size, int timeout_ms)
+{
+	void *sample;
+	int err, ms_remaining = timeout_ms;
+	struct timespec start;
+
+	if (timeout_ms < 0 && timeout_ms != -1)
+		return errno = EINVAL, NULL;
+
+	if (timeout_ms != -1) {
+		err = clock_gettime(CLOCK_MONOTONIC, &start);
+		if (err)
+			return NULL;
+	}
+
+	do {
+		int cnt, ms_elapsed;
+		struct timespec curr;
+		__u64 ns_per_ms = 1000000;
+
+		sample = user_ring_buffer__reserve(rb, size);
+		if (sample)
+			return sample;
+		else if (errno != ENOSPC)
+			return NULL;
+
+		/* The kernel guarantees at least one event notification
+		 * delivery whenever at least one sample is drained from the
+		 * ring buffer in an invocation to bpf_ringbuf_drain(). Other
+		 * additional events may be delivered at any time, but only one
+		 * event is guaranteed per bpf_ringbuf_drain() invocation,
+		 * provided that a sample is drained, and the BPF program did
+		 * not pass BPF_RB_NO_WAKEUP to bpf_ringbuf_drain(). If
+		 * BPF_RB_FORCE_WAKEUP is passed to bpf_ringbuf_drain(), a
+		 * wakeup event will be delivered even if no samples are
+		 * drained.
+		 */
+		cnt = epoll_wait(rb->epoll_fd, &rb->event, 1, ms_remaining);
+		if (cnt < 0)
+			return NULL;
+
+		if (timeout_ms == -1)
+			continue;
+
+		err = clock_gettime(CLOCK_MONOTONIC, &curr);
+		if (err)
+			return NULL;
+
+		ms_elapsed = ns_elapsed_timespec(&start, &curr) / ns_per_ms;
+		ms_remaining = timeout_ms - ms_elapsed;
+	} while (ms_remaining > 0);
+
+	/* Try one more time to reserve a sample after the specified timeout has elapsed. */
+	return user_ring_buffer__reserve(rb, size);
+}
src/skel_internal.h
@@ -3,9 +3,19 @@
 #ifndef __SKEL_INTERNAL_H
 #define __SKEL_INTERNAL_H
+
+#ifdef __KERNEL__
+#include <linux/fdtable.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/slab.h>
+#include <linux/bpf.h>
+#else
 #include <unistd.h>
 #include <sys/syscall.h>
 #include <sys/mman.h>
+#include <stdlib.h>
+#include "bpf.h"
+#endif
+
 #ifndef __NR_bpf
 # if defined(__mips__) && defined(_ABIO32)
@@ -25,24 +35,23 @@
  * requested during loader program generation.
  */
 struct bpf_map_desc {
-	union {
-		/* input for the loader prog */
-		struct {
-			__aligned_u64 initial_value;
-			__u32 max_entries;
-		};
-		/* output of the loader prog */
-		struct {
-			int map_fd;
-		};
-	};
+	/* output of the loader prog */
+	int map_fd;
+	/* input for the loader prog */
+	__u32 max_entries;
+	__aligned_u64 initial_value;
 };
 struct bpf_prog_desc {
 	int prog_fd;
 };
 
+enum {
+	BPF_SKEL_KERNEL = (1ULL << 0),
+};
+
 struct bpf_loader_ctx {
-	size_t sz;
+	__u32 sz;
+	__u32 flags;
 	__u32 log_level;
 	__u32 log_size;
 	__u64 log_buf;
@@ -57,12 +66,144 @@ struct bpf_load_and_run_opts {
 	const char *errstr;
 };
 
+long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
+
 static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
 			       unsigned int size)
 {
+#ifdef __KERNEL__
+	return kern_sys_bpf(cmd, attr, size);
+#else
 	return syscall(__NR_bpf, cmd, attr, size);
+#endif
 }
 
+#ifdef __KERNEL__
+static inline int close(int fd)
+{
+	return close_fd(fd);
+}
+
+static inline void *skel_alloc(size_t size)
+{
+	struct bpf_loader_ctx *ctx = kzalloc(size, GFP_KERNEL);
+
+	if (!ctx)
+		return NULL;
+	ctx->flags |= BPF_SKEL_KERNEL;
+	return ctx;
+}
+
+static inline void skel_free(const void *p)
+{
+	kfree(p);
+}
+
+/* skel->bss/rodata maps are populated the following way:
+ *
+ * For kernel use:
+ * skel_prep_map_data() allocates kernel memory that kernel module can directly access.
+ * Generated lskel stores the pointer in skel->rodata and in skel->maps.rodata.initial_value.
+ * The loader program will perform probe_read_kernel() from maps.rodata.initial_value.
+ * skel_finalize_map_data() sets skel->rodata to point to actual value in a bpf map and
+ * does maps.rodata.initial_value = ~0ULL to signal skel_free_map_data() that kvfree
+ * is not necessary.
+ *
+ * For user space:
+ * skel_prep_map_data() mmaps anon memory into skel->rodata that can be accessed directly.
+ * Generated lskel stores the pointer in skel->rodata and in skel->maps.rodata.initial_value.
+ * The loader program will perform copy_from_user() from maps.rodata.initial_value.
+ * skel_finalize_map_data() remaps bpf array map value from the kernel memory into
+ * skel->rodata address.
+ *
+ * The "bpftool gen skeleton -L" command generates lskel.h that is suitable for
+ * both kernel and user space. The generated loader program does
+ * either bpf_probe_read_kernel() or bpf_copy_from_user() from initial_value
+ * depending on bpf_loader_ctx->flags.
+ */
+static inline void skel_free_map_data(void *p, __u64 addr, size_t sz)
+{
+	if (addr != ~0ULL)
+		kvfree(p);
+	/* When addr == ~0ULL the 'p' points to
+	 * ((struct bpf_array *)map)->value. See skel_finalize_map_data.
+	 */
+}
+
+static inline void *skel_prep_map_data(const void *val, size_t mmap_sz, size_t val_sz)
+{
+	void *addr;
+
+	addr = kvmalloc(val_sz, GFP_KERNEL);
+	if (!addr)
+		return NULL;
+	memcpy(addr, val, val_sz);
+	return addr;
+}
+
+static inline void *skel_finalize_map_data(__u64 *init_val, size_t mmap_sz, int flags, int fd)
+{
+	struct bpf_map *map;
+	void *addr = NULL;
+
+	kvfree((void *) (long) *init_val);
+	*init_val = ~0ULL;
+
+	/* At this point bpf_load_and_run() finished without error and
+	 * 'fd' is a valid bpf map FD. All sanity checks below should succeed.
+	 */
+	map = bpf_map_get(fd);
+	if (IS_ERR(map))
+		return NULL;
+	if (map->map_type != BPF_MAP_TYPE_ARRAY)
+		goto out;
+	addr = ((struct bpf_array *)map)->value;
+	/* the addr stays valid, since FD is not closed */
+out:
+	bpf_map_put(map);
+	return addr;
+}
+
+#else
+
+static inline void *skel_alloc(size_t size)
+{
+	return calloc(1, size);
+}
+
+static inline void skel_free(void *p)
+{
+	free(p);
+}
+
+static inline void skel_free_map_data(void *p, __u64 addr, size_t sz)
+{
+	munmap(p, sz);
+}
+
+static inline void *skel_prep_map_data(const void *val, size_t mmap_sz, size_t val_sz)
+{
+	void *addr;
+
+	addr = mmap(NULL, mmap_sz, PROT_READ | PROT_WRITE,
+		    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+	if (addr == (void *) -1)
+		return NULL;
+	memcpy(addr, val, val_sz);
+	return addr;
+}
+
+static inline void *skel_finalize_map_data(__u64 *init_val, size_t mmap_sz, int flags, int fd)
+{
+	void *addr;
+
+	addr = mmap((void *) (long) *init_val, mmap_sz, flags, MAP_SHARED | MAP_FIXED, fd, 0);
+	if (addr == (void *) -1)
+		return NULL;
+	return addr;
+}
+#endif
+
 static inline int skel_closenz(int fd)
 {
 	if (fd > 0)
@@ -110,6 +251,29 @@ static inline int skel_map_update_elem(int fd, const void *key,
 	return skel_sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, attr_sz);
 }
 
+static inline int skel_map_delete_elem(int fd, const void *key)
+{
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
+	union bpf_attr attr;
+
+	memset(&attr, 0, attr_sz);
+	attr.map_fd = fd;
+	attr.key = (long)key;
+
+	return skel_sys_bpf(BPF_MAP_DELETE_ELEM, &attr, attr_sz);
+}
+
+static inline int skel_map_get_fd_by_id(__u32 id)
+{
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
+	union bpf_attr attr;
+
+	memset(&attr, 0, attr_sz);
+	attr.map_id = id;
+
+	return skel_sys_bpf(BPF_MAP_GET_FD_BY_ID, &attr, attr_sz);
+}
+
 static inline int skel_raw_tracepoint_open(const char *name, int prog_fd)
 {
 	const size_t attr_sz = offsetofend(union bpf_attr, raw_tracepoint.prog_fd);
@@ -136,26 +300,34 @@ static inline int skel_link_create(int prog_fd, int target_fd,
 	return skel_sys_bpf(BPF_LINK_CREATE, &attr, attr_sz);
 }
 
+#ifdef __KERNEL__
+#define set_err
+#else
+#define set_err err = -errno
+#endif
+
 static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 {
+	const size_t prog_load_attr_sz = offsetofend(union bpf_attr, fd_array);
+	const size_t test_run_attr_sz = offsetofend(union bpf_attr, test);
 	int map_fd = -1, prog_fd = -1, key = 0, err;
 	union bpf_attr attr;
 
-	map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1);
+	err = map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1);
 	if (map_fd < 0) {
 		opts->errstr = "failed to create loader map";
-		err = -errno;
+		set_err;
 		goto out;
 	}
 
 	err = skel_map_update_elem(map_fd, &key, opts->data, 0);
 	if (err < 0) {
 		opts->errstr = "failed to update loader map";
-		err = -errno;
+		set_err;
 		goto out;
 	}
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, prog_load_attr_sz);
 	attr.prog_type = BPF_PROG_TYPE_SYSCALL;
 	attr.insns = (long) opts->insns;
 	attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
@@ -166,25 +338,27 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 	attr.log_size = opts->ctx->log_size;
 	attr.log_buf = opts->ctx->log_buf;
 	attr.prog_flags = BPF_F_SLEEPABLE;
-	prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
+	err = prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, prog_load_attr_sz);
 	if (prog_fd < 0) {
 		opts->errstr = "failed to load loader prog";
-		err = -errno;
+		set_err;
 		goto out;
 	}
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, test_run_attr_sz);
 	attr.test.prog_fd = prog_fd;
 	attr.test.ctx_in = (long) opts->ctx;
 	attr.test.ctx_size_in = opts->ctx->sz;
-	err = skel_sys_bpf(BPF_PROG_RUN, &attr, sizeof(attr));
+	err = skel_sys_bpf(BPF_PROG_RUN, &attr, test_run_attr_sz);
 	if (err < 0 || (int)attr.test.retval < 0) {
 		opts->errstr = "failed to execute loader prog";
 		if (err < 0) {
-			err = -errno;
+			set_err;
 		} else {
 			err = (int)attr.test.retval;
+#ifndef __KERNEL__
 			errno = -err;
+#endif
 		}
 		goto out;
 	}
src/strset.c (18 changed lines)
@@ -19,19 +19,19 @@ struct strset {
 	struct hashmap *strs_hash;
 };
 
-static size_t strset_hash_fn(const void *key, void *ctx)
+static size_t strset_hash_fn(long key, void *ctx)
 {
 	const struct strset *s = ctx;
-	const char *str = s->strs_data + (long)key;
+	const char *str = s->strs_data + key;
 
 	return str_hash(str);
 }
 
-static bool strset_equal_fn(const void *key1, const void *key2, void *ctx)
+static bool strset_equal_fn(long key1, long key2, void *ctx)
 {
 	const struct strset *s = ctx;
-	const char *str1 = s->strs_data + (long)key1;
-	const char *str2 = s->strs_data + (long)key2;
+	const char *str1 = s->strs_data + key1;
+	const char *str2 = s->strs_data + key2;
 
 	return strcmp(str1, str2) == 0;
 }
@@ -67,7 +67,7 @@ struct strset *strset__new(size_t max_data_sz, const char *init_data, size_t ini
 		/* hashmap__add() returns EEXIST if string with the same
 		 * content already is in the hash map
 		 */
-		err = hashmap__add(hash, (void *)off, (void *)off);
+		err = hashmap__add(hash, off, off);
 		if (err == -EEXIST)
 			continue; /* duplicate */
 		if (err)
@@ -127,7 +127,7 @@ int strset__find_str(struct strset *set, const char *s)
 	new_off = set->strs_data_len;
 	memcpy(p, s, len);
 
-	if (hashmap__find(set->strs_hash, (void *)new_off, (void **)&old_off))
+	if (hashmap__find(set->strs_hash, new_off, &old_off))
 		return old_off;
 
 	return -ENOENT;
@@ -165,8 +165,8 @@ int strset__add_str(struct strset *set, const char *s)
 	 * contents doesn't exist already (HASHMAP_ADD strategy). If such
 	 * string exists, we'll get its offset in old_off (that's old_key).
 	 */
-	err = hashmap__insert(set->strs_hash, (void *)new_off, (void *)new_off,
-			      HASHMAP_ADD, (const void **)&old_off, NULL);
+	err = hashmap__insert(set->strs_hash, new_off, new_off,
+			      HASHMAP_ADD, &old_off, NULL);
 	if (err == -EEXIST)
 		return old_off; /* duplicated string, return existing offset */
 	if (err)
247
src/usdt.bpf.h
Normal file
@ -0,0 +1,247 @@
|
|||||||
|
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
|
||||||
|
/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
|
||||||
|
#ifndef __USDT_BPF_H__
|
||||||
|
#define __USDT_BPF_H__
|
||||||
|
|
||||||
|
#include <linux/errno.h>
|
||||||
|
#include <bpf/bpf_helpers.h>
|
||||||
|
#include <bpf/bpf_tracing.h>
|
||||||
|
|
||||||
|
/* Below types and maps are internal implementation details of libbpf's USDT
|
||||||
|
* support and are subjects to change. Also, bpf_usdt_xxx() API helpers should
|
||||||
|
* be considered an unstable API as well and might be adjusted based on user
|
||||||
|
* feedback from using libbpf's USDT support in production.
|
||||||
|
*/
|
||||||
|
|
||||||
|
/* User can override BPF_USDT_MAX_SPEC_CNT to change default size of internal
|
||||||
|
* map that keeps track of USDT argument specifications. This might be
|
||||||
|
* necessary if there are a lot of USDT attachments.
|
||||||
|
*/
|
||||||
|
#ifndef BPF_USDT_MAX_SPEC_CNT
|
||||||
|
#define BPF_USDT_MAX_SPEC_CNT 256
|
||||||
|
#endif
|
||||||
|
/* User can override BPF_USDT_MAX_IP_CNT to change default size of internal
|
||||||
|
* map that keeps track of IP (memory address) mapping to USDT argument
|
||||||
|
* specification.
|
||||||
|
* Note, if kernel supports BPF cookies, this map is not used and could be
|
||||||
|
* resized all the way to 1 to save a bit of memory.
|
||||||
|
*/
|
||||||
|
#ifndef BPF_USDT_MAX_IP_CNT
|
||||||
|
#define BPF_USDT_MAX_IP_CNT (4 * BPF_USDT_MAX_SPEC_CNT)
|
||||||
|
#endif
|
||||||
|
|
||||||
|
enum __bpf_usdt_arg_type {
|
||||||
|
BPF_USDT_ARG_CONST,
|
||||||
|
BPF_USDT_ARG_REG,
|
||||||
|
BPF_USDT_ARG_REG_DEREF,
|
||||||
|
};
|
||||||
|
|
||||||
|
struct __bpf_usdt_arg_spec {
|
||||||
|
/* u64 scalar interpreted depending on arg_type, see below */
|
||||||
|
__u64 val_off;
|
||||||
|
/* arg location case, see bpf_udst_arg() for details */
|
||||||
|
enum __bpf_usdt_arg_type arg_type;
|
||||||
|
/* offset of referenced register within struct pt_regs */
|
||||||
|
short reg_off;
|
||||||
|
/* whether arg should be interpreted as signed value */
|
||||||
|
bool arg_signed;
|
||||||
|
/* number of bits that need to be cleared and, optionally,
|
||||||
|
* sign-extended to cast arguments that are 1, 2, or 4 bytes
|
||||||
|
* long into final 8-byte u64/s64 value returned to user
|
||||||
|
*/
|
||||||
|
char arg_bitshift;
|
||||||
|
};
|
||||||
|
|
||||||
|
/* should match USDT_MAX_ARG_CNT in usdt.c exactly */
|
||||||
|
#define BPF_USDT_MAX_ARG_CNT 12
|
||||||
|
struct __bpf_usdt_spec {
|
||||||
|
struct __bpf_usdt_arg_spec args[BPF_USDT_MAX_ARG_CNT];
|
||||||
|
__u64 usdt_cookie;
|
||||||
|
short arg_cnt;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct {
|
||||||
|
__uint(type, BPF_MAP_TYPE_ARRAY);
|
||||||
|
__uint(max_entries, BPF_USDT_MAX_SPEC_CNT);
|
||||||
|
__type(key, int);
|
||||||
|
__type(value, struct __bpf_usdt_spec);
|
||||||
|
} __bpf_usdt_specs SEC(".maps") __weak;
|
||||||
|
|
||||||
|
struct {
|
||||||
|
__uint(type, BPF_MAP_TYPE_HASH);
|
||||||
|
__uint(max_entries, BPF_USDT_MAX_IP_CNT);
|
||||||
|
__type(key, long);
|
||||||
|
__type(value, __u32);
|
||||||
|
} __bpf_usdt_ip_to_spec_id SEC(".maps") __weak;
|
||||||
|
|
||||||
|
extern const _Bool LINUX_HAS_BPF_COOKIE __kconfig;
|
||||||
|
|
||||||
|
static __always_inline
|
||||||
|
int __bpf_usdt_spec_id(struct pt_regs *ctx)
|
||||||
|
{
|
||||||
|
if (!LINUX_HAS_BPF_COOKIE) {
|
||||||
|
long ip = PT_REGS_IP(ctx);
|
||||||
|
int *spec_id_ptr;
|
||||||
|
|
||||||
|
spec_id_ptr = bpf_map_lookup_elem(&__bpf_usdt_ip_to_spec_id, &ip);
|
||||||
|
return spec_id_ptr ? *spec_id_ptr : -ESRCH;
|
||||||
|
}
|
||||||
	return bpf_get_attach_cookie(ctx);
}

/* Return number of USDT arguments defined for currently traced USDT. */
__weak __hidden
int bpf_usdt_arg_cnt(struct pt_regs *ctx)
{
	struct __bpf_usdt_spec *spec;
	int spec_id;

	spec_id = __bpf_usdt_spec_id(ctx);
	if (spec_id < 0)
		return -ESRCH;

	spec = bpf_map_lookup_elem(&__bpf_usdt_specs, &spec_id);
	if (!spec)
		return -ESRCH;

	return spec->arg_cnt;
}

/* Fetch USDT argument #*arg_num* (zero-indexed) and put its value into *res.
 * Returns 0 on success; negative error, otherwise.
 * On error *res is guaranteed to be set to zero.
 */
__weak __hidden
int bpf_usdt_arg(struct pt_regs *ctx, __u64 arg_num, long *res)
{
	struct __bpf_usdt_spec *spec;
	struct __bpf_usdt_arg_spec *arg_spec;
	unsigned long val;
	int err, spec_id;

	*res = 0;

	spec_id = __bpf_usdt_spec_id(ctx);
	if (spec_id < 0)
		return -ESRCH;

	spec = bpf_map_lookup_elem(&__bpf_usdt_specs, &spec_id);
	if (!spec)
		return -ESRCH;

	if (arg_num >= BPF_USDT_MAX_ARG_CNT || arg_num >= spec->arg_cnt)
		return -ENOENT;

	arg_spec = &spec->args[arg_num];
	switch (arg_spec->arg_type) {
	case BPF_USDT_ARG_CONST:
		/* Arg is just a constant ("-4@$-9" in USDT arg spec).
		 * value is recorded in arg_spec->val_off directly.
		 */
		val = arg_spec->val_off;
		break;
	case BPF_USDT_ARG_REG:
		/* Arg is in a register (e.g., "8@%rax" in USDT arg spec),
		 * so we read the contents of that register directly from
		 * struct pt_regs. To keep things simple user-space parts
		 * record offsetof(struct pt_regs, <regname>) in arg_spec->reg_off.
		 */
		err = bpf_probe_read_kernel(&val, sizeof(val), (void *)ctx + arg_spec->reg_off);
		if (err)
			return err;
		break;
	case BPF_USDT_ARG_REG_DEREF:
		/* Arg is in memory addressed by register, plus some offset
		 * (e.g., "-4@-1204(%rbp)" in USDT arg spec). Register is
		 * identified like with BPF_USDT_ARG_REG case, and the offset
		 * is in arg_spec->val_off. We first fetch register contents
		 * from pt_regs, then do another user-space probe read to
		 * fetch argument value itself.
		 */
		err = bpf_probe_read_kernel(&val, sizeof(val), (void *)ctx + arg_spec->reg_off);
		if (err)
			return err;
		err = bpf_probe_read_user(&val, sizeof(val), (void *)val + arg_spec->val_off);
		if (err)
			return err;
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
		val >>= arg_spec->arg_bitshift;
#endif
		break;
	default:
		return -EINVAL;
	}

	/* cast arg from 1, 2, or 4 bytes to final 8 byte size, clearing
	 * the unnecessary upper arg_bitshift bits, with sign extension if
	 * the argument is signed
	 */
	val <<= arg_spec->arg_bitshift;
	if (arg_spec->arg_signed)
		val = ((long)val) >> arg_spec->arg_bitshift;
	else
		val = val >> arg_spec->arg_bitshift;
	*res = val;
	return 0;
}
/* Retrieve user-specified cookie value provided during attach as
 * bpf_usdt_opts.usdt_cookie. This serves the same purpose as BPF cookie
 * returned by bpf_get_attach_cookie(). Libbpf's support for USDT is itself
 * utilizing BPF cookies internally, so user can't use BPF cookie directly
 * for USDT programs and has to use bpf_usdt_cookie() API instead.
 */
__weak __hidden
long bpf_usdt_cookie(struct pt_regs *ctx)
{
	struct __bpf_usdt_spec *spec;
	int spec_id;

	spec_id = __bpf_usdt_spec_id(ctx);
	if (spec_id < 0)
		return 0;

	spec = bpf_map_lookup_elem(&__bpf_usdt_specs, &spec_id);
	if (!spec)
		return 0;

	return spec->usdt_cookie;
}

/* we rely on ___bpf_apply() and ___bpf_narg() macros already defined in bpf_tracing.h */
#define ___bpf_usdt_args0() ctx
#define ___bpf_usdt_args1(x) ___bpf_usdt_args0(), ({ long _x; bpf_usdt_arg(ctx, 0, &_x); (void *)_x; })
#define ___bpf_usdt_args2(x, args...) ___bpf_usdt_args1(args), ({ long _x; bpf_usdt_arg(ctx, 1, &_x); (void *)_x; })
#define ___bpf_usdt_args3(x, args...) ___bpf_usdt_args2(args), ({ long _x; bpf_usdt_arg(ctx, 2, &_x); (void *)_x; })
#define ___bpf_usdt_args4(x, args...) ___bpf_usdt_args3(args), ({ long _x; bpf_usdt_arg(ctx, 3, &_x); (void *)_x; })
#define ___bpf_usdt_args5(x, args...) ___bpf_usdt_args4(args), ({ long _x; bpf_usdt_arg(ctx, 4, &_x); (void *)_x; })
#define ___bpf_usdt_args6(x, args...) ___bpf_usdt_args5(args), ({ long _x; bpf_usdt_arg(ctx, 5, &_x); (void *)_x; })
#define ___bpf_usdt_args7(x, args...) ___bpf_usdt_args6(args), ({ long _x; bpf_usdt_arg(ctx, 6, &_x); (void *)_x; })
#define ___bpf_usdt_args8(x, args...) ___bpf_usdt_args7(args), ({ long _x; bpf_usdt_arg(ctx, 7, &_x); (void *)_x; })
#define ___bpf_usdt_args9(x, args...) ___bpf_usdt_args8(args), ({ long _x; bpf_usdt_arg(ctx, 8, &_x); (void *)_x; })
#define ___bpf_usdt_args10(x, args...) ___bpf_usdt_args9(args), ({ long _x; bpf_usdt_arg(ctx, 9, &_x); (void *)_x; })
#define ___bpf_usdt_args11(x, args...) ___bpf_usdt_args10(args), ({ long _x; bpf_usdt_arg(ctx, 10, &_x); (void *)_x; })
#define ___bpf_usdt_args12(x, args...) ___bpf_usdt_args11(args), ({ long _x; bpf_usdt_arg(ctx, 11, &_x); (void *)_x; })
#define ___bpf_usdt_args(args...) ___bpf_apply(___bpf_usdt_args, ___bpf_narg(args))(args)

/*
 * BPF_USDT serves the same purpose for USDT handlers as BPF_PROG for
 * tp_btf/fentry/fexit BPF programs and BPF_KPROBE for kprobes.
 * Original struct pt_regs * context is preserved as 'ctx' argument.
 */
#define BPF_USDT(name, args...) \
name(struct pt_regs *ctx); \
static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args); \
typeof(name(0)) name(struct pt_regs *ctx) \
{ \
	_Pragma("GCC diagnostic push") \
	_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
	return ____##name(___bpf_usdt_args(args)); \
	_Pragma("GCC diagnostic pop") \
} \
static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args)

#endif /* __USDT_BPF_H__ */
 src/usdt.c | 1731 (new file)
 src/xsk.h  |  336 (deleted)
@@ -1,336 +0,0 @@
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */

/*
 * AF_XDP user-space access library.
 *
 * Copyright (c) 2018 - 2019 Intel Corporation.
 * Copyright (c) 2019 Facebook
 *
 * Author(s): Magnus Karlsson <magnus.karlsson@intel.com>
 */

#ifndef __LIBBPF_XSK_H
#define __LIBBPF_XSK_H

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <linux/if_xdp.h>

#include "libbpf.h"

#ifdef __cplusplus
extern "C" {
#endif

/* This whole API has been deprecated and moved to libxdp, which can be found
 * at https://github.com/xdp-project/xdp-tools. The APIs are exactly the same,
 * so it should just be a matter of linking with libxdp instead of libbpf for
 * this set of functionality. If not, please submit a bug report on the
 * aforementioned page.
 */

/* Load-Acquire Store-Release barriers used by the XDP socket
 * library. The following macros should *NOT* be considered part of
 * the xsk.h API, and are subject to change anytime.
 *
 * LIBRARY INTERNAL
 */

#define __XSK_READ_ONCE(x) (*(volatile typeof(x) *)&x)
#define __XSK_WRITE_ONCE(x, v) (*(volatile typeof(x) *)&x) = (v)

#if defined(__i386__) || defined(__x86_64__)
# define libbpf_smp_store_release(p, v) \
	do { \
		asm volatile("" : : : "memory"); \
		__XSK_WRITE_ONCE(*p, v); \
	} while (0)
# define libbpf_smp_load_acquire(p) \
	({ \
		typeof(*p) ___p1 = __XSK_READ_ONCE(*p); \
		asm volatile("" : : : "memory"); \
		___p1; \
	})
#elif defined(__aarch64__)
# define libbpf_smp_store_release(p, v) \
	asm volatile ("stlr %w1, %0" : "=Q" (*p) : "r" (v) : "memory")
# define libbpf_smp_load_acquire(p) \
	({ \
		typeof(*p) ___p1; \
		asm volatile ("ldar %w0, %1" \
			      : "=r" (___p1) : "Q" (*p) : "memory"); \
		___p1; \
	})
#elif defined(__riscv)
# define libbpf_smp_store_release(p, v) \
	do { \
		asm volatile ("fence rw,w" : : : "memory"); \
		__XSK_WRITE_ONCE(*p, v); \
	} while (0)
# define libbpf_smp_load_acquire(p) \
	({ \
		typeof(*p) ___p1 = __XSK_READ_ONCE(*p); \
		asm volatile ("fence r,rw" : : : "memory"); \
		___p1; \
	})
#endif

#ifndef libbpf_smp_store_release
#define libbpf_smp_store_release(p, v) \
	do { \
		__sync_synchronize(); \
		__XSK_WRITE_ONCE(*p, v); \
	} while (0)
#endif

#ifndef libbpf_smp_load_acquire
#define libbpf_smp_load_acquire(p) \
	({ \
		typeof(*p) ___p1 = __XSK_READ_ONCE(*p); \
		__sync_synchronize(); \
		___p1; \
	})
#endif

/* LIBRARY INTERNAL -- END */

/* Do not access these members directly. Use the functions below. */
#define DEFINE_XSK_RING(name) \
struct name { \
	__u32 cached_prod; \
	__u32 cached_cons; \
	__u32 mask; \
	__u32 size; \
	__u32 *producer; \
	__u32 *consumer; \
	void *ring; \
	__u32 *flags; \
}

DEFINE_XSK_RING(xsk_ring_prod);
DEFINE_XSK_RING(xsk_ring_cons);

/* For a detailed explanation on the memory barriers associated with the
 * ring, please take a look at net/xdp/xsk_queue.h.
 */

struct xsk_umem;
struct xsk_socket;

static inline __u64 *xsk_ring_prod__fill_addr(struct xsk_ring_prod *fill,
					      __u32 idx)
{
	__u64 *addrs = (__u64 *)fill->ring;

	return &addrs[idx & fill->mask];
}

static inline const __u64 *
xsk_ring_cons__comp_addr(const struct xsk_ring_cons *comp, __u32 idx)
{
	const __u64 *addrs = (const __u64 *)comp->ring;

	return &addrs[idx & comp->mask];
}

static inline struct xdp_desc *xsk_ring_prod__tx_desc(struct xsk_ring_prod *tx,
						      __u32 idx)
{
	struct xdp_desc *descs = (struct xdp_desc *)tx->ring;

	return &descs[idx & tx->mask];
}

static inline const struct xdp_desc *
xsk_ring_cons__rx_desc(const struct xsk_ring_cons *rx, __u32 idx)
{
	const struct xdp_desc *descs = (const struct xdp_desc *)rx->ring;

	return &descs[idx & rx->mask];
}

static inline int xsk_ring_prod__needs_wakeup(const struct xsk_ring_prod *r)
{
	return *r->flags & XDP_RING_NEED_WAKEUP;
}

static inline __u32 xsk_prod_nb_free(struct xsk_ring_prod *r, __u32 nb)
{
	__u32 free_entries = r->cached_cons - r->cached_prod;

	if (free_entries >= nb)
		return free_entries;

	/* Refresh the local tail pointer.
	 * cached_cons is kept r->size bigger than the real consumer pointer
	 * so that this addition can be avoided in the more frequently
	 * executed code that computes free_entries at the beginning of
	 * this function. Without this optimization it would have been
	 * free_entries = r->cached_cons - r->cached_prod + r->size.
	 */
	r->cached_cons = libbpf_smp_load_acquire(r->consumer);
	r->cached_cons += r->size;

	return r->cached_cons - r->cached_prod;
}
static inline __u32 xsk_cons_nb_avail(struct xsk_ring_cons *r, __u32 nb)
{
	__u32 entries = r->cached_prod - r->cached_cons;

	if (entries == 0) {
		r->cached_prod = libbpf_smp_load_acquire(r->producer);
		entries = r->cached_prod - r->cached_cons;
	}

	return (entries > nb) ? nb : entries;
}

static inline __u32 xsk_ring_prod__reserve(struct xsk_ring_prod *prod, __u32 nb, __u32 *idx)
{
	if (xsk_prod_nb_free(prod, nb) < nb)
		return 0;

	*idx = prod->cached_prod;
	prod->cached_prod += nb;

	return nb;
}

static inline void xsk_ring_prod__submit(struct xsk_ring_prod *prod, __u32 nb)
{
	/* Make sure everything has been written to the ring before indicating
	 * this to the kernel by writing the producer pointer.
	 */
	libbpf_smp_store_release(prod->producer, *prod->producer + nb);
}

static inline __u32 xsk_ring_cons__peek(struct xsk_ring_cons *cons, __u32 nb, __u32 *idx)
{
	__u32 entries = xsk_cons_nb_avail(cons, nb);

	if (entries > 0) {
		*idx = cons->cached_cons;
		cons->cached_cons += entries;
	}

	return entries;
}

static inline void xsk_ring_cons__cancel(struct xsk_ring_cons *cons, __u32 nb)
{
	cons->cached_cons -= nb;
}

static inline void xsk_ring_cons__release(struct xsk_ring_cons *cons, __u32 nb)
{
	/* Make sure data has been read before indicating we are done
	 * with the entries by updating the consumer pointer.
	 */
	libbpf_smp_store_release(cons->consumer, *cons->consumer + nb);
}

static inline void *xsk_umem__get_data(void *umem_area, __u64 addr)
{
	return &((char *)umem_area)[addr];
}

static inline __u64 xsk_umem__extract_addr(__u64 addr)
{
	return addr & XSK_UNALIGNED_BUF_ADDR_MASK;
}

static inline __u64 xsk_umem__extract_offset(__u64 addr)
{
	return addr >> XSK_UNALIGNED_BUF_OFFSET_SHIFT;
}

static inline __u64 xsk_umem__add_offset_to_addr(__u64 addr)
{
	return xsk_umem__extract_addr(addr) + xsk_umem__extract_offset(addr);
}

LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_umem__fd(const struct xsk_umem *umem);
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_socket__fd(const struct xsk_socket *xsk);

#define XSK_RING_CONS__DEFAULT_NUM_DESCS 2048
#define XSK_RING_PROD__DEFAULT_NUM_DESCS 2048
#define XSK_UMEM__DEFAULT_FRAME_SHIFT 12 /* 4096 bytes */
#define XSK_UMEM__DEFAULT_FRAME_SIZE (1 << XSK_UMEM__DEFAULT_FRAME_SHIFT)
#define XSK_UMEM__DEFAULT_FRAME_HEADROOM 0
#define XSK_UMEM__DEFAULT_FLAGS 0

struct xsk_umem_config {
	__u32 fill_size;
	__u32 comp_size;
	__u32 frame_size;
	__u32 frame_headroom;
	__u32 flags;
};

LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_setup_xdp_prog(int ifindex, int *xsks_map_fd);
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_socket__update_xskmap(struct xsk_socket *xsk, int xsks_map_fd);

/* Flags for the libbpf_flags field. */
#define XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD (1 << 0)

struct xsk_socket_config {
	__u32 rx_size;
	__u32 tx_size;
	__u32 libbpf_flags;
	__u32 xdp_flags;
	__u16 bind_flags;
};

/* Set config to NULL to get the default configuration. */
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_umem__create(struct xsk_umem **umem,
		     void *umem_area, __u64 size,
		     struct xsk_ring_prod *fill,
		     struct xsk_ring_cons *comp,
		     const struct xsk_umem_config *config);
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_umem__create_v0_0_2(struct xsk_umem **umem,
			    void *umem_area, __u64 size,
			    struct xsk_ring_prod *fill,
			    struct xsk_ring_cons *comp,
			    const struct xsk_umem_config *config);
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_umem__create_v0_0_4(struct xsk_umem **umem,
			    void *umem_area, __u64 size,
			    struct xsk_ring_prod *fill,
			    struct xsk_ring_cons *comp,
			    const struct xsk_umem_config *config);
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_socket__create(struct xsk_socket **xsk,
		       const char *ifname, __u32 queue_id,
		       struct xsk_umem *umem,
		       struct xsk_ring_cons *rx,
		       struct xsk_ring_prod *tx,
		       const struct xsk_socket_config *config);
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
			      const char *ifname,
			      __u32 queue_id, struct xsk_umem *umem,
			      struct xsk_ring_cons *rx,
			      struct xsk_ring_prod *tx,
			      struct xsk_ring_prod *fill,
			      struct xsk_ring_cons *comp,
			      const struct xsk_socket_config *config);

/* Returns 0 for success and -EBUSY if the umem is still in use. */
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
int xsk_umem__delete(struct xsk_umem *umem);
LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "AF_XDP support deprecated and moved to libxdp")
void xsk_socket__delete(struct xsk_socket *xsk);

#ifdef __cplusplus
} /* extern "C" */
#endif

#endif /* __LIBBPF_XSK_H */
@@ -1,35 +0,0 @@
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
    Daniel Borkmann <daniel@iogearbox.net>,
    Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH bpf-next] selftests/bpf: Fix OOB write in test_verifier
Date: Tue, 14 Dec 2021 07:18:00 +0530
Message-ID: <20211214014800.78762-1-memxor@gmail.com>

The commit referenced below added fixup_map_timer support (to create a
BPF map containing timers), but failed to increase the size of the
map_fds array, leading to an out-of-bounds write. Fix this by changing
MAX_NR_MAPS to 22.

Fixes: e60e6962c503 ("selftests/bpf: Add tests for restricted helpers")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/testing/selftests/bpf/test_verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index ad5d30bafd93..33e2ecb3bef9 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -54,7 +54,7 @@
 #define MAX_INSNS	BPF_MAXINSNS
 #define MAX_TEST_INSNS	1000000
 #define MAX_FIXUPS	8
-#define MAX_NR_MAPS	21
+#define MAX_NR_MAPS	22
 #define MAX_TEST_RUNS	8
 #define POINTER_VALUE	0xcafe4all
 #define TEST_DATA_LEN	64
--
2.34.1
@@ -1,14 +0,0 @@
#!/bin/bash
set -euxo pipefail

CFLAGS=${CFLAGS:-}

cat << EOF > main.c
#include <bpf/libbpf.h>
int main() {
	return bpf_object__open(0) < 0;
}
EOF

# static linking
${CC:-cc} ${CFLAGS} -o main -I./install/usr/include main.c ./build/libbpf.a -lelf -lz
@@ -1,106 +0,0 @@
#!/bin/bash

# This script is based on the drgn script for generating Arch Linux bootstrap
# images.
# https://github.com/osandov/drgn/blob/master/scripts/vmtest/mkrootfs.sh

set -euo pipefail

usage () {
	USAGE_STRING="usage: $0 [NAME]
       $0 -h

Build an Arch Linux root filesystem image for testing libbpf in a virtual
machine.

The image is generated as a zstd-compressed tarball.

This must be run as root, as most of the installation is done in a chroot.

Arguments:
  NAME   name of generated image file (default:
         libbpf-vmtest-rootfs-\$DATE.tar.zst)

Options:
  -h     display this help message and exit"

	case "$1" in
		out)
			echo "$USAGE_STRING"
			exit 0
			;;
		err)
			echo "$USAGE_STRING" >&2
			exit 1
			;;
	esac
}

while getopts "h" OPT; do
	case "$OPT" in
		h)
			usage out
			;;
		*)
			usage err
			;;
	esac
done
if [[ $OPTIND -eq $# ]]; then
	NAME="${!OPTIND}"
elif [[ $OPTIND -gt $# ]]; then
	NAME="libbpf-vmtest-rootfs-$(date +%Y.%m.%d).tar.zst"
else
	usage err
fi

pacman_conf=
root=
trap 'rm -rf "$pacman_conf" "$root"' EXIT
pacman_conf="$(mktemp -p "$PWD")"
cat > "$pacman_conf" << "EOF"
[options]
Architecture = x86_64
CheckSpace
SigLevel = Required DatabaseOptional
[core]
Include = /etc/pacman.d/mirrorlist
[extra]
Include = /etc/pacman.d/mirrorlist
[community]
Include = /etc/pacman.d/mirrorlist
EOF
root="$(mktemp -d -p "$PWD")"

packages=(
	busybox
	# libbpf dependencies.
	libelf
	zlib
	# selftests test_progs dependencies.
	binutils
	elfutils
	glibc
	iproute2
	# selftests test_verifier dependencies.
	libcap
)

pacstrap -C "$pacman_conf" -cGM "$root" "${packages[@]}"

# Remove unnecessary files from the chroot.

# We don't need the pacman databases anymore.
rm -rf "$root/var/lib/pacman/sync/"
# We don't need D, Fortran, or Go.
rm -f "$root/usr/lib/libgdruntime."* \
      "$root/usr/lib/libgphobos."* \
      "$root/usr/lib/libgfortran."* \
      "$root/usr/lib/libgo."*
# We don't need any documentation. (The braces must stay outside the quotes
# so that the shell actually expands them.)
rm -rf "$root"/usr/share/{doc,help,man,texinfo}

"$(dirname "$0")"/mkrootfs_tweak.sh "$root"

tar -C "$root" -c . | zstd -T0 -19 -o "$NAME"
chmod 644 "$NAME"
@@ -1,40 +0,0 @@
#!/bin/bash
# This script builds a Debian root filesystem image for testing libbpf in a
# virtual machine. Requires debootstrap >= 1.0.95 and zstd.

set -e -u -x -o pipefail

# Check whether we are root now in order to avoid confusing errors later.
if [ "$(id -u)" != 0 ]; then
	echo "$0 must run as root" >&2
	exit 1
fi

# Create a working directory and schedule its deletion.
root=$(mktemp -d -p "$PWD")
trap 'rm -r "$root"' EXIT

# Install packages.
packages=binutils,busybox,elfutils,iproute2,libcap2,libelf1,strace,zlib1g
debootstrap --include="$packages" --variant=minbase bullseye "$root"

# Remove the init scripts (tests use their own). Also remove various
# unnecessary files in order to save space.
rm -rf \
	"$root"/etc/rcS.d \
	"$root"/usr/share/{doc,info,locale,man,zoneinfo} \
	"$root"/var/cache/apt/archives/* \
	"$root"/var/lib/apt/lists/*

# Save some more space by removing coreutils - the tests use busybox. Before
# doing that, delete the buggy postrm script, which uses the rm command.
rm -f "$root/var/lib/dpkg/info/coreutils.postrm"
chroot "$root" dpkg --remove --force-remove-essential coreutils

# Apply common tweaks.
"$(dirname "$0")"/mkrootfs_tweak.sh "$root"

# Save the result.
name="libbpf-vmtest-rootfs-$(date +%Y.%m.%d).tar.zst"
rm -f "$name"
tar -C "$root" -c . | zstd -T0 -19 -o "$name"
@@ -1,61 +0,0 @@
#!/bin/bash
# This script prepares a mounted root filesystem for testing libbpf in a
# virtual machine.
set -e -u -x -o pipefail
root=$1
shift

chroot "${root}" /bin/busybox --install

cat > "$root/etc/inittab" << "EOF"
::sysinit:/etc/init.d/rcS
::ctrlaltdel:/sbin/reboot
::shutdown:/sbin/swapoff -a
::shutdown:/bin/umount -a -r
::restart:/sbin/init
EOF
chmod 644 "$root/etc/inittab"

mkdir -m 755 -p "$root/etc/init.d" "$root/etc/rcS.d"
cat > "$root/etc/rcS.d/S10-mount" << "EOF"
#!/bin/sh

set -eux

/bin/mount proc /proc -t proc

# Mount devtmpfs if not mounted. Use POSIX [ ] here: this runs under
# busybox sh, not bash.
if [ -z "$(/bin/mount -t devtmpfs)" ]; then
	/bin/mount devtmpfs /dev -t devtmpfs
fi

/bin/mount sysfs /sys -t sysfs
/bin/mount bpffs /sys/fs/bpf -t bpf
/bin/mount debugfs /sys/kernel/debug -t debugfs

echo 'Listing currently mounted file systems'
/bin/mount
EOF
chmod 755 "$root/etc/rcS.d/S10-mount"

cat > "$root/etc/rcS.d/S40-network" << "EOF"
#!/bin/sh

set -eux

ip link set lo up
EOF
chmod 755 "$root/etc/rcS.d/S40-network"

cat > "$root/etc/init.d/rcS" << "EOF"
#!/bin/sh

set -eux

for path in /etc/rcS.d/S*; do
	[ -x "$path" ] && "$path"
done
EOF
chmod 755 "$root/etc/init.d/rcS"

chmod 755 "$root"
@@ -1,74 +0,0 @@
# IBM Z self-hosted builder

libbpf CI uses an IBM-provided z15 self-hosted builder. There are no IBM Z
builds of the GitHub Actions runner, and stable qemu-user has problems with
.NET apps, so the builder runs the x86_64 runner version with qemu-user built
from the master branch.

## Configuring the builder.

### Install prerequisites.

```
$ sudo dnf install docker         # RHEL
$ sudo apt install -y docker.io   # Ubuntu
```

### Add services.

```
$ sudo cp *.service /etc/systemd/system/
$ sudo systemctl daemon-reload
```

### Create a config file.

```
$ sudo tee /etc/actions-runner-libbpf
repo=<owner>/<name>
access_token=<ghp_***>
```

The access token should have the repo scope; consult
https://docs.github.com/en/rest/reference/actions#create-a-registration-token-for-a-repository
for details.

### Autostart the x86_64 emulation support.

```
$ sudo systemctl enable --now qemu-user-static
```

### Autostart the runner.

```
$ sudo systemctl enable --now actions-runner-libbpf
```

## Rebuilding the image

In order to update the `iiilinuxibmcom/actions-runner-libbpf` image, e.g. to
get the latest OS security fixes, use the following commands:

```
$ sudo docker build \
	--pull \
	-f actions-runner-libbpf.Dockerfile \
	-t iiilinuxibmcom/actions-runner-libbpf \
	.
$ sudo systemctl restart actions-runner-libbpf
```

## Removing persistent data

The `actions-runner-libbpf` service stores various temporary data, such as
runner registration information, work directories and logs, in the
`actions-runner-libbpf` volume. In order to remove it and start from scratch,
e.g. when upgrading the runner or switching it to a different repository, use
the following commands:

```
$ sudo systemctl stop actions-runner-libbpf
$ sudo docker rm -f actions-runner-libbpf
$ sudo docker volume rm actions-runner-libbpf
```
@@ -1,50 +0,0 @@
# Self-Hosted IBM Z Github Actions Runner.

# Temporary image: amd64 dependencies.
FROM amd64/ubuntu:20.04 as ld-prefix
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install ca-certificates libicu66 libssl1.1

# Main image.
FROM s390x/ubuntu:20.04

# Packages for libbpf testing that are not installed by .github/actions/setup.
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install \
	bc \
	bison \
	cmake \
	cpu-checker \
	curl \
	flex \
	git \
	jq \
	linux-image-generic \
	qemu-system-s390x \
	rsync \
	software-properties-common \
	sudo \
	tree

# amd64 dependencies.
COPY --from=ld-prefix / /usr/x86_64-linux-gnu/
RUN ln -fs ../lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /usr/x86_64-linux-gnu/lib64/
RUN ln -fs /etc/resolv.conf /usr/x86_64-linux-gnu/etc/
ENV QEMU_LD_PREFIX=/usr/x86_64-linux-gnu

# amd64 Github Actions Runner.
ARG version=2.285.0
RUN useradd -m actions-runner
RUN echo "actions-runner ALL=(ALL) NOPASSWD: ALL" >>/etc/sudoers
RUN echo "Defaults env_keep += \"DEBIAN_FRONTEND\"" >>/etc/sudoers
RUN usermod -a -G kvm actions-runner
USER actions-runner
ENV USER=actions-runner
WORKDIR /home/actions-runner
RUN curl -L https://github.com/actions/runner/releases/download/v${version}/actions-runner-linux-x64-${version}.tar.gz | tar -xz
VOLUME /home/actions-runner

# Scripts.
COPY fs/ /
ENTRYPOINT ["/usr/bin/entrypoint"]
CMD ["/usr/bin/actions-runner"]
@@ -1,24 +0,0 @@
[Unit]
Description=Self-Hosted IBM Z Github Actions Runner
Wants=qemu-user-static
After=qemu-user-static
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
ExecStart=/usr/bin/docker run \
	--device=/dev/kvm \
	--env-file=/etc/actions-runner-libbpf \
	--init \
	--interactive \
	--name=actions-runner-libbpf \
	--rm \
	--volume=actions-runner-libbpf:/home/actions-runner \
	iiilinuxibmcom/actions-runner-libbpf
ExecStop=/bin/sh -c "docker exec actions-runner-libbpf kill -INT -- -1"
ExecStop=/bin/sh -c "docker wait actions-runner-libbpf"
ExecStop=/bin/sh -c "docker rm actions-runner-libbpf"

[Install]
WantedBy=multi-user.target
@ -1,40 +0,0 @@
#!/bin/bash

#
# Ephemeral runner startup script.
#
# Expects the following environment variables:
#
# - repo=<owner>/<name>
# - access_token=<ghp_***>
#

set -e -u

# Check the cached registration token.
token_file=registration-token.json
set +e
expires_at=$(jq --raw-output .expires_at "$token_file" 2>/dev/null)
status=$?
set -e
if [[ $status -ne 0 || $(date +%s) -ge $(date -d "$expires_at" +%s) ]]; then
    # Refresh the cached registration token.
    curl \
        -X POST \
        -H "Accept: application/vnd.github.v3+json" \
        -H "Authorization: token $access_token" \
        "https://api.github.com/repos/$repo/actions/runners/registration-token" \
        -o "$token_file"
fi

# (Re-)register the runner.
registration_token=$(jq --raw-output .token "$token_file")
./config.sh remove --token "$registration_token" || true
./config.sh \
    --url "https://github.com/$repo" \
    --token "$registration_token" \
    --labels z15 \
    --ephemeral

# Run one job.
./run.sh
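The expiry check in the startup script above boils down to comparing two epoch timestamps: the cached token's `expires_at` (read from the JSON with `jq`) against the current time. A minimal sketch of just that comparison, with an illustrative, long-expired timestamp in place of the `jq` lookup (GNU `date` is assumed for `-d`):

```shell
#!/bin/bash
# Sketch of the expiry comparison only; a real run would read expires_at
# from the cached registration-token JSON. The value here is illustrative.
expires_at="2020-01-01T00:00:00Z"
if [ "$(date +%s)" -ge "$(date -d "$expires_at" +%s)" ]; then
    echo "cached token expired, a refresh would be requested"
fi
```

On any system with a clock past 2020 this prints the "expired" message, which is exactly the branch that triggers the `curl` refresh in the script above.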
@ -1,35 +0,0 @@
#!/bin/bash

#
# Container entrypoint that waits for all spawned processes.
#

set -e -u

# /dev/kvm has host permissions, fix it.
if [ -e /dev/kvm ]; then
    sudo chown root:kvm /dev/kvm
fi

# Create a FIFO and start reading from its read end.
tempdir=$(mktemp -d "/tmp/done.XXXXXXXXXX")
trap 'rm -r "$tempdir"' EXIT
done="$tempdir/pipe"
mkfifo "$done"
cat "$done" & waiter=$!

# Start the workload. Its descendants will inherit the FIFO's write end.
status=0
if [ "$#" -eq 0 ]; then
    bash 9>"$done" || status=$?
else
    "$@" 9>"$done" || status=$?
fi

# When the workload and all of its descendants exit, the FIFO's write end will
# be closed and `cat "$done"` will exit. Wait until it happens. This is needed
# in order to handle SelfUpdater, which the workload may start in background
# before exiting.
wait "$waiter"

exit "$status"
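The FIFO trick in the entrypoint above is worth isolating: a background `cat` on the pipe's read end returns only once every process holding the write end has closed it, including background descendants that outlive the workload itself. A standalone sketch (the workload here is a stand-in `sh -c` that forks a background `sleep`):

```shell
#!/bin/bash
set -e -u

# Create a FIFO and read from it in the background; `cat` blocks until
# every write end of the pipe has been closed.
tmpdir=$(mktemp -d)
trap 'rm -r "$tmpdir"' EXIT
pipe="$tmpdir/pipe"
mkfifo "$pipe"
cat "$pipe" & waiter=$!

# The workload opens the write end as fd 9; its background child inherits
# that fd and keeps it open for one second after the workload itself exits.
sh -c '(sleep 1) & exit 0' 9>"$pipe"

wait "$waiter"   # returns only after the background sleep also finishes
echo "workload and all descendants finished"
```

This is the same mechanism the entrypoint relies on to outwait the runner's SelfUpdater: `wait` on the workload alone would return immediately, but waiting on the `cat` covers every inheritor of fd 9.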
@ -1,11 +0,0 @@
[Unit]
Description=Support for transparent execution of non-native binaries with QEMU user emulation

[Service]
Type=oneshot
# The source code for iiilinuxibmcom/qemu-user-static is at https://github.com/iii-i/qemu-user-static/tree/v6.1.0-1
# TODO: replace it with multiarch/qemu-user-static once version >6.1 is available
ExecStart=/usr/bin/docker run --rm --interactive --privileged iiilinuxibmcom/qemu-user-static --reset -p yes

[Install]
WantedBy=multi-user.target
@ -1,5 +0,0 @@
# TEMPORARY
get_stack_raw_tp # spams with kernel warnings until next bpf -> bpf-next merge
stacktrace_build_id_nmi
stacktrace_build_id
task_fd_query_rawtp
@ -1,56 +0,0 @@
# TEMPORARY
atomics # attach(add): actual -524 <= expected 0 (trampoline)
bpf_iter_setsockopt # JIT does not support calling kernel function (kfunc)
bloom_filter_map # failed to find kernel BTF type ID of '__x64_sys_getpgid': -3 (?)
bpf_tcp_ca # JIT does not support calling kernel function (kfunc)
bpf_loop # attaches to __x64_sys_nanosleep
bpf_mod_race # BPF trampoline
bpf_nf # JIT does not support calling kernel function
core_read_macros # unknown func bpf_probe_read#4 (overlapping)
d_path # failed to auto-attach program 'prog_stat': -524 (trampoline)
dummy_st_ops # test_run unexpected error: -524 (errno 524) (trampoline)
fentry_fexit # fentry attach failed: -524 (trampoline)
fentry_test # fentry_first_attach unexpected error: -524 (trampoline)
fexit_bpf2bpf # freplace_attach_trace unexpected error: -524 (trampoline)
fexit_sleep # fexit_skel_load fexit skeleton failed (trampoline)
fexit_stress # fexit attach failed prog 0 failed: -524 (trampoline)
fexit_test # fexit_first_attach unexpected error: -524 (trampoline)
get_func_args_test # trampoline
get_func_ip_test # get_func_ip_test__attach unexpected error: -524 (trampoline)
get_stack_raw_tp # user_stack corrupted user stack (no backchain userspace)
kfree_skb # attach fentry unexpected error: -524 (trampoline)
kfunc_call # 'bpf_prog_active': not found in kernel BTF (?)
ksyms_module # test_ksyms_module__open_and_load unexpected error: -9 (?)
ksyms_module_libbpf # JIT does not support calling kernel function (kfunc)
ksyms_module_lskel # test_ksyms_module_lskel__open_and_load unexpected error: -9 (?)
modify_return # modify_return attach failed: -524 (trampoline)
module_attach # skel_attach skeleton attach failed: -524 (trampoline)
netcnt # failed to load BPF skeleton 'netcnt_prog': -7 (?)
probe_user # check_kprobe_res wrong kprobe res from probe read (?)
recursion # skel_attach unexpected error: -524 (trampoline)
ringbuf # skel_load skeleton load failed (?)
sk_assign # Can't read on server: Invalid argument (?)
sk_storage_tracing # test_sk_storage_tracing__attach unexpected error: -524 (trampoline)
skc_to_unix_sock # could not attach BPF object unexpected error: -524 (trampoline)
socket_cookie # prog_attach unexpected error: -524 (trampoline)
stacktrace_build_id # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?)
tailcalls # tail_calls are not allowed in non-JITed programs with bpf-to-bpf calls (?)
task_local_storage # failed to auto-attach program 'trace_exit_creds': -524 (trampoline)
test_bpffs # bpffs test failed 255 (iterator)
test_bprm_opts # failed to auto-attach program 'secure_exec': -524 (trampoline)
test_ima # failed to auto-attach program 'ima': -524 (trampoline)
test_local_storage # failed to auto-attach program 'unlink_hook': -524 (trampoline)
test_lsm # failed to find kernel BTF type ID of '__x64_sys_setdomainname': -3 (?)
test_overhead # attach_fentry unexpected error: -524 (trampoline)
test_profiler # unknown func bpf_probe_read_str#45 (overlapping)
timer # failed to auto-attach program 'test1': -524 (trampoline)
timer_mim # failed to auto-attach program 'test1': -524 (trampoline)
trace_ext # failed to auto-attach program 'test_pkt_md_access_new': -524 (trampoline)
trace_printk # trace_printk__load unexpected error: -2 (errno 2) (?)
trace_vprintk # trace_vprintk__open_and_load unexpected error: -9 (?)
trampoline_count # prog 'prog1': failed to attach: ERROR: strerror_r(-524)=22 (trampoline)
verif_stats # trace_vprintk__open_and_load unexpected error: -9 (?)
vmlinux # failed to auto-attach program 'handle__fentry': -524 (trampoline)
xdp_adjust_tail # case-128 err 0 errno 28 retval 1 size 128 expect-size 3520 (?)
xdp_bonding # failed to auto-attach program 'trace_on_entry': -524 (trampoline)
xdp_bpf2bpf # failed to auto-attach program 'trace_on_entry': -524 (trampoline)
@ -1,63 +0,0 @@
#!/bin/bash

set -euo pipefail

source $(cd $(dirname $0) && pwd)/helpers.sh

ARCH=$(uname -m)

STATUS_FILE=/exitstatus

read_lists() {
    (for path in "$@"; do
        if [[ -s "$path" ]]; then
            cat "$path"
        fi;
    done) | cut -d'#' -f1 | tr -s ' \t\n' ','
}

test_progs() {
    if [[ "${KERNEL}" != '4.9.0' ]]; then
        travis_fold start test_progs "Testing test_progs"
        # "&& true" does not change the return code (it is not executed
        # if the Python script fails), but it prevents exiting on a
        # failure due to the "set -e".
        ./test_progs ${BLACKLIST:+-d$BLACKLIST} ${WHITELIST:+-a$WHITELIST} && true
        echo "test_progs:$?" >> "${STATUS_FILE}"
        travis_fold end test_progs
    fi

    travis_fold start test_progs-no_alu32 "Testing test_progs-no_alu32"
    ./test_progs-no_alu32 ${BLACKLIST:+-d$BLACKLIST} ${WHITELIST:+-a$WHITELIST} && true
    echo "test_progs-no_alu32:$?" >> "${STATUS_FILE}"
    travis_fold end test_progs-no_alu32
}

test_maps() {
    travis_fold start test_maps "Testing test_maps"
    ./test_maps && true
    echo "test_maps:$?" >> "${STATUS_FILE}"
    travis_fold end test_maps
}

test_verifier() {
    travis_fold start test_verifier "Testing test_verifier"
    ./test_verifier && true
    echo "test_verifier:$?" >> "${STATUS_FILE}"
    travis_fold end test_verifier
}

travis_fold end vm_init

configs_path=${PROJECT_NAME}/vmtest/configs
BLACKLIST=$(read_lists "$configs_path/blacklist/BLACKLIST-${KERNEL}" "$configs_path/blacklist/BLACKLIST-${KERNEL}.${ARCH}")
WHITELIST=$(read_lists "$configs_path/whitelist/WHITELIST-${KERNEL}" "$configs_path/whitelist/WHITELIST-${KERNEL}.${ARCH}")

cd ${PROJECT_NAME}/selftests/bpf

test_progs

if [[ "${KERNEL}" == 'latest' ]]; then
    # test_maps
    test_verifier
fi
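The `read_lists` helper above is just a `cut`/`tr` pipeline: everything after `#` is dropped and the surviving test names are joined with commas, the form `test_progs -d`/`-a` expects. A quick illustration with a throwaway denylist file (the entries are illustrative but mirror the denylist format above):

```shell
# Two-entry throwaway denylist in the same format as the lists above.
tmp=$(mktemp)
printf 'atomics        # trampoline\nbpf_tcp_ca     # kfunc\n' > "$tmp"

# Same pipeline as read_lists: strip comments, squeeze whitespace into commas.
cut -d'#' -f1 "$tmp" | tr -s ' \t\n' ','
# prints: atomics,bpf_tcp_ca,
rm "$tmp"
```

Note the trailing comma: the final newline is translated to `,` too, and the script passes the result to `test_progs` unchanged.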