Commit Graph

324 Commits

Author SHA1 Message Date
Andrii Nakryiko
bada95a5f3 libbpf: Make btf__resolve_size logic always check size error condition
Always perform the size check in btf__resolve_size(). This makes the logic a bit
more robust against corrupted BTF and silences LGTM/Coverity complaints about an
always-true (size < 0) check.

Fixes: 69eaab04c675 ("btf: extract BTF type size calculation")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-5-andriin@fb.com
2019-11-13 16:39:58 -08:00
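
A minimal sketch of the calling pattern this hardens (illustrative code, not libbpf internals; the function and variable names are made up): btf__resolve_size() returns the type's size or a negative error, and that result must be validated before use.

  #include <string.h>
  #include <bpf/btf.h>

  /* Illustrative only: always validate the resolved size before using it. */
  static int copy_typed_value(const struct btf *btf, __u32 type_id,
                              void *dst, const void *src)
  {
          __s64 sz = btf__resolve_size(btf, type_id);

          if (sz < 0)                     /* corrupted or unsupported BTF */
                  return (int)sz;
          memcpy(dst, src, (size_t)sz);
          return 0;
  }
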
Andrii Nakryiko
fb929625dc libbpf: Fix another potential overflow issue in bpf_prog_linfo
Fix a few issues found by Coverity and LGTM.

Fixes: b053b439b72a ("bpf: libbpf: bpftool: Print bpf_line_info during prog dump")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-4-andriin@fb.com
2019-11-13 16:39:58 -08:00
Andrii Nakryiko
1a828b3d58 libbpf: Fix potential overflow issue
Fix a potential overflow issue found by LGTM analysis of the GitHub libbpf
source code.

Fixes: 3d65014146c6 ("bpf: libbpf: Add btf_line_info support to libbpf")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-3-andriin@fb.com
2019-11-13 16:39:58 -08:00
Andrii Nakryiko
330f4683e2 libbpf: Fix memory leak/double free issue
A Coverity scan against the GitHub libbpf code found an issue of memory not being
freed and of already-freed memory still being referenced from bpf_program. Fix it
by re-assigning the successfully reallocated memory sooner.

Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-2-andriin@fb.com
2019-11-13 16:39:58 -08:00
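
The general shape of the fix, as a generic sketch (not the actual bpf_program code): keep the old pointer intact until realloc() is known to have succeeded, then re-assign immediately so no stale pointer is left behind.

  #include <errno.h>
  #include <stdlib.h>

  static int grow_buffer(void **buf, size_t new_sz)
  {
          void *tmp = realloc(*buf, new_sz);

          if (!tmp)               /* *buf is still valid and still owned */
                  return -ENOMEM;
          *buf = tmp;             /* re-assign right away; no stale copy kept */
          return 0;
  }
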
Andrii Nakryiko
2ef7f5607c libbpf: Fix negative FD close() in xsk_setup_xdp_prog()
Fix an issue reported by static analysis (Coverity). If bpf_prog_get_fd_by_id()
fails, xsk_lookup_bpf_maps() will fail as well and the clean-up code will attempt
close() with fd = -1. Fix this by checking the bpf_prog_get_fd_by_id() return
value and exiting early.

Fixes: 10a13bb40e54 ("libbpf: remove qidconf and better support external bpf programs.")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107054059.313884-1-andriin@fb.com
2019-11-13 16:39:58 -08:00
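
A sketch of the resulting control flow (illustrative and simplified, not the actual xsk.c code): return early so the shared cleanup path can never see fd == -1.

  #include <errno.h>
  #include <unistd.h>
  #include <bpf/bpf.h>

  static int inspect_prog(__u32 prog_id)
  {
          int prog_fd = bpf_prog_get_fd_by_id(prog_id);

          if (prog_fd < 0)
                  return -errno;  /* exit early; never reach close(-1) */

          /* ... look up maps, etc. ... */

          close(prog_fd);
          return 0;
  }
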
Andrii Nakryiko
4da243c179 sync: latest libbpf changes from kernel
Syncing latest libbpf commits from kernel repository.
Baseline bpf-next commit:   f23c7ce341c2dfd187d4e3712ba6c110969463a0
Checkpoint bpf-next commit: ed578021210e14f15a654c825fba6a700c9a39a7
Baseline bpf commit:        7de086909365cd60a5619a45af3f4152516fd75c
Checkpoint bpf commit:      7de086909365cd60a5619a45af3f4152516fd75c

Andrii Nakryiko (1):
  libbpf: Simplify BPF_CORE_READ_BITFIELD_PROBED usage

 src/bpf_core_read.h | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

--
2.17.1
2019-11-06 14:11:45 -08:00
Andrii Nakryiko
4d8fc6d438 libbpf: Simplify BPF_CORE_READ_BITFIELD_PROBED usage
Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow
BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ in general, i.e., just
return the read result, or 0 if the underlying bpf_probe_read() failed.

In practice, real applications rarely check the bpf_probe_read() result, because
it has to always work; otherwise it's a bug. So propagating the internal
bpf_probe_read() error from this macro hurts usability without providing real
benefits in practice. This patch fixes the issue and simplifies usage, noticeably
even in the selftest itself.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191106201500.2582438-1-andriin@fb.com
2019-11-06 14:11:45 -08:00
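
Usage after the simplification, sketched in the style of the selftest (the struct pointer and field names below are made up): the macro now simply evaluates to the extracted bitfield value, or 0 if the underlying bpf_probe_read() fails, so callers no longer thread an error value through.

  /* BPF program side, with bpf_core_read.h included; 'info' points to a
   * CO-RE-relocatable struct with a 'hdr_flags' bitfield. */
  __u64 flags = BPF_CORE_READ_BITFIELD_PROBED(info, hdr_flags);
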
Andrii Nakryiko
6d4abdda08 sync: latest libbpf changes from kernel
Syncing latest libbpf commits from kernel repository.
Baseline bpf-next commit:   a566e35f1e8b4b3be1e96a804d1cca38b578167c
Checkpoint bpf-next commit: f23c7ce341c2dfd187d4e3712ba6c110969463a0
Baseline bpf commit:        fc11078dd3514c65eabce166b8431a56d8a667cb
Checkpoint bpf commit:      7de086909365cd60a5619a45af3f4152516fd75c

Alexei Starovoitov (1):
  libbpf: Add support for prog_tracing

Andrii Nakryiko (2):
  libbpf: Add support for relocatable bitfields
  libbpf: Add support for field size relocations

Daniel Borkmann (1):
  bpf: Add probe_read_{user, kernel} and probe_read_{user, kernel}_str
    helpers

Toke Høiland-Jørgensen (4):
  libbpf: Fix error handling in bpf_map__reuse_fd()
  libbpf: Store map pin path and status in struct bpf_map
  libbpf: Move directory creation into _pin() functions
  libbpf: Add auto-pinning of maps when loading BPF objects

 include/uapi/linux/bpf.h | 124 ++++---
 src/bpf.c                |   8 +-
 src/bpf.h                |   5 +-
 src/bpf_core_read.h      |  79 +++++
 src/bpf_helpers.h        |   6 +
 src/libbpf.c             | 707 ++++++++++++++++++++++++++++++---------
 src/libbpf.h             |  23 +-
 src/libbpf.map           |   5 +
 src/libbpf_internal.h    |   4 +
 src/libbpf_probes.c      |   1 +
 10 files changed, 749 insertions(+), 213 deletions(-)

--
2.17.1
2019-11-05 16:00:11 -08:00
Andrii Nakryiko
67ab4c0f82 sync: auto-generate latest BPF helpers
Latest changes to BPF helper definitions.
2019-11-05 16:00:11 -08:00
Andrii Nakryiko
df45cf7a3e libbpf: Add support for field size relocations
Add bpf_core_field_size() macro, capturing a relocation against field size.
Adjust bits of internal libbpf relocation logic to allow capturing size
relocations of various field types: arrays, structs/unions, enums, etc.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-4-andriin@fb.com
2019-11-05 16:00:11 -08:00
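
A usage sketch (the struct and field names are illustrative): the size captured by the relocation is resolved against the running kernel's BTF at load time, so a field whose size differs between kernel versions is still read correctly.

  /* BPF program side, with bpf_core_read.h included; 'tsk' is a CO-RE-
   * relocatable struct task_struct pointer, 'buf' a large-enough buffer. */
  __u32 sz = bpf_core_field_size(tsk->cpus_mask); /* size on the target kernel */
  bpf_probe_read(buf, sz, &tsk->cpus_mask);
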
Andrii Nakryiko
4438972ccc libbpf: Add support for relocatable bitfields
Add support for the new field relocation kinds, necessary to support
relocatable bitfield reads. Provide macros abstracting the code needed to do
full relocatable bitfield extraction into a u64 value. Two separate macros are
provided:
- BPF_CORE_READ_BITFIELD macro for direct memory read-enabled BPF programs
(e.g., typed raw tracepoints). It uses direct memory dereference to extract
bitfield backing integer value.
- BPF_CORE_READ_BITFIELD_PROBED macro for cases where bpf_probe_read() needs
to be used to extract same backing integer value.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-3-andriin@fb.com
2019-11-05 16:00:11 -08:00
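
A usage sketch of the direct variant (struct pointer and field names are illustrative), for program types that may dereference kernel memory, such as BTF-typed raw tracepoints:

  /* e.g. inside a SEC("tp_btf/...") program, bpf_core_read.h included */
  __u64 ptype = BPF_CORE_READ_BITFIELD(skb, pkt_type);
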
Daniel Borkmann
09cd9ff2db bpf: Add probe_read_{user, kernel} and probe_read_{user, kernel}_str helpers
The current bpf_probe_read() and bpf_probe_read_str() helpers are broken
in that they assume they can be used for probing memory access for kernel
space addresses /as well as/ user space addresses.

However, plain use of probe_kernel_read() for both cases will attempt to
always access kernel space address space given access is performed under
KERNEL_DS, and some archs in fact have overlapping address spaces where a
kernel pointer and user pointer would have the /same/ address value and
therefore accessing application memory via bpf_probe_read{,_str}() would
read garbage values.

Let's fix the BPF side by making use of the recently added 3d7081822f7f ("uaccess:
Add non-pagefault user-space read functions"). Unfortunately, the only way
to fix this status quo is to add dedicated bpf_probe_read_{user,kernel}()
and bpf_probe_read_{user,kernel}_str() helpers. The bpf_probe_read{,_str}()
helpers are kept as-is to retain their current behavior.

The two *_user() variants attempt the access always under USER_DS set, the
two *_kernel() variants will -EFAULT when accessing user memory if the
underlying architecture has non-overlapping address ranges, also avoiding
throwing the kernel warning via 00c42373d397 ("x86-64: add warning for
non-canonical user access address dereferences").

Fixes: a5e8c07059d0 ("bpf: add bpf_probe_read_str helper")
Fixes: 2541517c32be ("tracing, perf: Implement BPF programs attached to kprobes")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/796ee46e948bc808d54891a1108435f8652c6ca4.1572649915.git.daniel@iogearbox.net
2019-11-05 16:00:11 -08:00
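
A sketch of how a BPF program picks the right variant once these helpers exist (the pointer names are illustrative):

  char buf[64];

  /* pointer known to be a kernel address */
  bpf_probe_read_kernel(buf, sizeof(buf), kernel_ptr);

  /* NUL-terminated string handed in from user space */
  bpf_probe_read_user_str(buf, sizeof(buf), user_ptr);
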
Toke Høiland-Jørgensen
e7d860d2fc libbpf: Add auto-pinning of maps when loading BPF objects
This adds support to libbpf for setting map pinning information as part of
the BTF map declaration, to get automatic map pinning (and reuse) on load.
The pinning type currently only supports a single PIN_BY_NAME mode, where
each map will be pinned by its name in a path that can be overridden, but
defaults to /sys/fs/bpf.

Since auto-pinning only does something if any maps actually have a
'pinning' BTF attribute set, we default the new option to enabled, on the
assumption that seamless pinning is what most callers want.

When a map has a pin_path set at load time, libbpf will compare the map
pinned at that location (if any), and if the attributes match, will re-use
that map instead of creating a new one. If no existing map is found, the
newly created map will instead be pinned at the location.

Programs wanting to customise the pinning can override the pinning paths
using bpf_map__set_pin_path() before calling bpf_object__load() (including
setting it to NULL to disable pinning of a particular map).

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269298092.394725.3966306029218559681.stgit@toke.dk
2019-11-05 16:00:11 -08:00
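
A BTF map declaration opting into the new behaviour could look like this (a sketch; the map name, key/value types, and size are illustrative). With LIBBPF_PIN_BY_NAME the map defaults to being pinned at /sys/fs/bpf/counters and is re-used on the next load if a compatible map is already pinned there:

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 1024);
          __type(key, __u32);
          __type(value, __u64);
          __uint(pinning, LIBBPF_PIN_BY_NAME);
  } counters SEC(".maps");

User space can still override or disable this per map via bpf_map__set_pin_path() before bpf_object__load(), as described above.
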
Toke Høiland-Jørgensen
ff3d2702d8 libbpf: Move directory creation into _pin() functions
The existing pin_*() functions all try to create the parent directory
before pinning. Move this check into the per-object _pin() functions
instead. This ensures consistent behaviour when auto-pinning is
added (which doesn't go through the top-level pin_maps() function), at the
cost of a few more calls to mkdir().

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269297985.394725.5882630952992598610.stgit@toke.dk
2019-11-05 16:00:11 -08:00
Toke Høiland-Jørgensen
44f9712f79 libbpf: Store map pin path and status in struct bpf_map
Support storing and setting a pin path in struct bpf_map, which can be used
for automatic pinning. Also store the pin status so we can avoid attempts
to re-pin a map that has already been pinned (or reused from a previous
pinning).

The behaviour of bpf_object__{un,}pin_maps() is changed so that if it is
called with a NULL path argument (which was previously illegal), it will
(un)pin only those maps that have a pin_path set.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269297876.394725.14782206533681896279.stgit@toke.dk
2019-11-05 16:00:11 -08:00
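
From the caller's side the new semantics look roughly like this (a sketch; the custom pin path is illustrative):

  /* Override the default pin location for one map, then pin only those
   * maps that actually have a pin_path set. */
  int err = bpf_map__set_pin_path(map, "/sys/fs/bpf/tc/globals/counters");

  if (!err)
          err = bpf_object__pin_maps(obj, NULL);
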
Toke Høiland-Jørgensen
fe4cb796df libbpf: Fix error handling in bpf_map__reuse_fd()
bpf_map__reuse_fd() was calling close() in the error path before returning
an error value based on errno. However, close can change errno, so that can
lead to potentially misleading error messages. Instead, explicitly store
errno in the err variable before each goto.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269297769.394725.12634985106772698611.stgit@toke.dk
2019-11-05 16:00:11 -08:00
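
The general pattern of the fix, as a generic sketch (do_setup() here is a made-up stand-in for whichever call fails): capture errno before any cleanup call gets a chance to overwrite it.

  #include <errno.h>
  #include <unistd.h>

  static int setup_dup(int orig_fd)
  {
          int fd, err;

          fd = dup(orig_fd);
          if (fd < 0)
                  return -errno;

          if (do_setup(fd) < 0) {         /* do_setup() is illustrative */
                  err = -errno;           /* save the real cause first */
                  goto err_close;
          }
          return fd;

  err_close:
          close(fd);                      /* may change errno; 'err' already set */
          return err;
  }
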
Alexei Starovoitov
15de8ad80d libbpf: Add support for prog_tracing
Clean up libbpf's expected_attach_type == attach_btf_id hack
and introduce BPF_PROG_TYPE_TRACING.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191030223212.953010-3-ast@kernel.org
2019-11-05 16:00:11 -08:00
Frantisek Sumsal
d7a137510a coverity: explicitly use bash instead of sh
On Ubuntu `/bin/sh` is a symlink to `/bin/dash`, which doesn't support
certain builtins used by the Coverity script (namely pushd/popd)
2019-11-05 13:28:13 -08:00
Frantisek Sumsal
91e4f27dd7 travis: use sudo during the 'install' phase 2019-11-04 15:08:38 -08:00
Frantisek Sumsal
1339ef70a3 README: add Coverity badge 2019-11-01 23:22:57 -07:00
Frantisek Sumsal
c204e3d610 travis: automate Coverity builds 2019-11-01 23:22:57 -07:00
Frantisek Sumsal
32d0a03332 README: add a LGTM badge 2019-10-29 15:45:36 -07:00
Andrii Nakryiko
05346cfd90 sync: latest libbpf changes from kernel
Syncing latest libbpf commits from kernel repository.
Baseline bpf-next commit:   3820729160440158a014add69cc0d371061a96b2
Checkpoint bpf-next commit: a566e35f1e8b4b3be1e96a804d1cca38b578167c
Baseline bpf commit:        2afd23f78f39da84937006ecd24aa664a4ab052b
Checkpoint bpf commit:      fc11078dd3514c65eabce166b8431a56d8a667cb

Andrii Nakryiko (2):
  libbpf: Fix off-by-one error in ELF sanity check
  libbpf: Don't use kernel-side u32 type in xsk.c

Magnus Karlsson (1):
  libbpf: Fix compatibility for kernels without need_wakeup

 src/libbpf.c |  2 +-
 src/xsk.c    | 83 ++++++++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 72 insertions(+), 13 deletions(-)

--
2.17.1
2019-10-29 09:25:36 -07:00
Andrii Nakryiko
a7a32b899c libbpf: Don't use kernel-side u32 type in xsk.c
u32 is a kernel-side typedef; the user-space library is supposed to use __u32.
Using u32 breaks GitHub's projection of libbpf, so switch u32 to __u32.

Fixes: 94ff9ebb49a5 ("libbpf: Fix compatibility for kernels without need_wakeup")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Cc: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20191029055953.2461336-1-andriin@fb.com
2019-10-29 09:25:36 -07:00
Andrii Nakryiko
68a051f2d2 libbpf: Fix off-by-one error in ELF sanity check
libbpf's bpf_object__elf_collect() does a simple sanity check after iterating
over all ELF sections: it checks that the .strtab index is correct. Unfortunately,
due to section indices being 1-based, the check breaks for cases when .strtab
ends up being the very last section in the ELF file.

Fixes: 77ba9a5b48a7 ("tools lib bpf: Fetch map names from correct strtab")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191028233727.1286699-1-andriin@fb.com
2019-10-29 09:25:36 -07:00
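
In other words, with 1-based section indices the valid range is 1..n, so the upper bound has to be checked with '>' rather than '>=' (a generic illustration of the reasoning, not the exact libbpf check):

  /* n_sections counts real sections, which are indexed 1..n_sections */
  if (strtab_idx == 0 || strtab_idx > n_sections) /* '>=' would reject the last one */
          return -EINVAL;
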
Magnus Karlsson
8e80367637 libbpf: Fix compatibility for kernels without need_wakeup
When the need_wakeup flag was added to AF_XDP, the format of the
XDP_MMAP_OFFSETS getsockopt was extended. Code was added to the
kernel to take care of compatibility issues arising from running
applications using either of the two formats. However, libbpf was
not extended to take care of the case when the application/libbpf
uses the new format but the kernel only supports the old
format. This patch adds support in libbpf for parsing the old
format, before the need_wakeup flag was added, and for emulating a
set of static need_wakeup flags that will always work for the
application.

v2 -> v3:
* Incorporated code improvements suggested by Jonathan Lemon

v1 -> v2:
* Rebased to bpf-next
* Rewrote the code as the previous version made you blind

Fixes: a4500432c2587cb2a ("libbpf: add support for need_wakeup flag in AF_XDP part")
Reported-by: Eloy Degen <degeneloy@gmail.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1571995035-21889-1-git-send-email-magnus.karlsson@intel.com
2019-10-29 09:25:36 -07:00
Andrii Nakryiko
9a5adecc62 sync: ignore test_libbpf.c
Adjust sync script to ignore test_libbpf.c, not test_libbpf.cpp.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
2019-10-29 09:25:36 -07:00
Frantisek Sumsal
b923d0e3c6 lgtm: fix the extraction process
As this project uses only Makefile, without any configuration step, and due to
a "non-standard" location of the source files, LGTM kept failing to find the
respective Makefile and build the sources. By tricking LGTM's build system
auto detection, that we use automake/configure, it correctly sets the source
dir, thus the compilation, extraction & analysis steps now work in the src/
subdirectory, as expected.
2019-10-28 15:15:47 -07:00
Andrii Nakryiko
f02e248ae1 sync: latest libbpf changes from kernel
Syncing latest libbpf commits from kernel repository.
Baseline bpf-next commit:   5e5b03d163e15a40b0fa57c70b4e8edd549b0b98
Checkpoint bpf-next commit: 3820729160440158a014add69cc0d371061a96b2
Baseline bpf commit:        cd7455f1013ef96d5cbf5c05d2b7c06f273810a6
Checkpoint bpf commit:      2afd23f78f39da84937006ecd24aa664a4ab052b

Björn Töpel (1):
  libbpf: Use implicit XSKMAP lookup from AF_XDP XDP program

KP Singh (1):
  libbpf: Fix strncat bounds error in libbpf_prog_type_by_name

 src/libbpf.c |  2 +-
 src/xsk.c    | 42 ++++++++++++++++++++++++++++++++----------
 2 files changed, 33 insertions(+), 11 deletions(-)

--
2.17.1
2019-10-24 22:59:06 -07:00
KP Singh
e152510d72 libbpf: Fix strncat bounds error in libbpf_prog_type_by_name
On compiling samples with this change, one gets an error:

 error: ‘strncat’ specified bound 118 equals destination size
  [-Werror=stringop-truncation]

    strncat(dst, name + section_names[i].len,
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     sizeof(raw_tp_btf_name) - (dst - raw_tp_btf_name));
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

strncat requires the destination to have enough space for the
terminating null byte.

Fixes: f75a697e09137 ("libbpf: Auto-detect btf_id of BTF-based raw_tracepoint")
Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191023154038.24075-1-kpsingh@chromium.org
2019-10-24 22:59:06 -07:00
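
The rule the warning enforces, in its generic form (a sketch, not the actual libbpf fix): the bound passed to strncat() must leave room for the terminating NUL byte in the destination.

  #include <string.h>

  static void append_suffix(char *dst, size_t dst_sz, const char *suffix)
  {
          /* at most dst_sz - strlen(dst) - 1 bytes may be appended */
          strncat(dst, suffix, dst_sz - strlen(dst) - 1);
  }
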
Björn Töpel
59ac1946b0 libbpf: Use implicit XSKMAP lookup from AF_XDP XDP program
In commit 43e74c0267a3 ("bpf_xdp_redirect_map: Perform map lookup in
eBPF helper") the bpf_redirect_map() helper learned to do map lookup,
which means that the explicit lookup in the XDP program for AF_XDP is
not needed for post-5.3 kernels.

This commit adds the implicit map lookup with default action, which
improves the performance of the "rx_drop" [1] scenario by ~4%.

For pre-5.3 kernels, the bpf_redirect_map() returns XDP_ABORTED, and a
fallback path for backward compatibility is entered, where explicit
lookup is still performed. This means a slight regression for older
kernels (an additional bpf_redirect_map() call), but I consider that a
fair punishment for users not upgrading their kernels. ;-)

v1->v2: Backward compatibility (Toke) [2]
v2->v3: Avoid masking/zero-extension by using JMP32 [3]

[1] # xdpsock -i eth0 -z -r
[2] https://lore.kernel.org/bpf/87pnirb3dc.fsf@toke.dk/
[3] https://lore.kernel.org/bpf/87v9sip0i8.fsf@toke.dk/

Suggested-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20191022072206.6318-1-bjorn.topel@gmail.com
2019-10-24 22:59:06 -07:00
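
A C-level sketch of the logic the generated program implements (libbpf actually emits raw BPF instructions in xsk.c; the map name follows libbpf's convention, everything else is illustrative):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_XSKMAP);
          __uint(max_entries, 64);
          __uint(key_size, sizeof(int));
          __uint(value_size, sizeof(int));
  } xsks_map SEC(".maps");

  SEC("xdp")
  int xsk_redirect(struct xdp_md *ctx)
  {
          int index = ctx->rx_queue_index;
          int ret;

          /* Post-5.3: the helper does the XSKMAP lookup itself and returns
           * the XDP_PASS "default action" on a miss. */
          ret = bpf_redirect_map(&xsks_map, index, XDP_PASS);
          if (ret > 0)
                  return ret;

          /* Pre-5.3 kernels reject the non-zero flags argument (XDP_ABORTED),
           * so fall back to an explicit lookup before redirecting. */
          if (bpf_map_lookup_elem(&xsks_map, &index))
                  return bpf_redirect_map(&xsks_map, index, 0);

          return XDP_PASS;
  }

  char LICENSE[] SEC("license") = "GPL";
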
Andrii Nakryiko
5150a4a0fb includes: add BPF_JMP32_IMM macro to fix build
Recent xsk change started using new BPF_JMP32_IMM macro. Add it to our
local copy of include/linux/filter.h to fix the build.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
2019-10-24 22:59:06 -07:00
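
The macro mirrors the kernel's include/linux/filter.h definition, along these lines:

  /* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
  #define BPF_JMP32_IMM(OP, DST, IMM, OFF)                        \
          ((struct bpf_insn) {                                    \
                  .code  = BPF_JMP32 | BPF_OP(OP) | BPF_K,        \
                  .dst_reg = DST,                                 \
                  .src_reg = 0,                                   \
                  .off   = OFF,                                   \
                  .imm   = IMM })
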
Frantisek Sumsal
2a25957df6 travis: add an aarch64 Xenial job 2019-10-23 10:13:54 -07:00
Andrii Nakryiko
e441f55089 sync: latest libbpf changes from kernel
Syncing latest libbpf commits from kernel repository.
Baseline bpf-next commit:   da927466a152a9497c05926a95c6aebba6d3ad5b
Checkpoint bpf-next commit: 5e5b03d163e15a40b0fa57c70b4e8edd549b0b98
Baseline bpf commit:        9e8acd9c44a0dd52b2922eeb82398c04e356c058
Checkpoint bpf commit:      cd7455f1013ef96d5cbf5c05d2b7c06f273810a6

Alexei Starovoitov (3):
  bpf: Add attach_btf_id attribute to program load
  libbpf: Auto-detect btf_id of BTF-based raw_tracepoints
  bpf: Check types of arguments passed into helpers

Andrii Nakryiko (5):
  tools: Sync if_link.h
  libbpf: Add bpf_program__get_{type, expected_attach_type) APIs
  libbpf: Add uprobe/uretprobe and tp/raw_tp section suffixes
  libbpf: Teach bpf_object__open to guess program types
  libbpf: Make DECLARE_LIBBPF_OPTS macro strictly a variable declaration

John Fastabend (1):
  bpf, libbpf: Add kernel version section parsing back

Kefeng Wang (1):
  tools, bpf: Rename pr_warning to pr_warn to align with kernel logging

 include/uapi/linux/bpf.h     |  28 +-
 include/uapi/linux/if_link.h |   2 +
 src/bpf.c                    |   3 +
 src/btf.c                    |  56 +--
 src/btf_dump.c               |  18 +-
 src/libbpf.c                 | 830 +++++++++++++++++++----------------
 src/libbpf.h                 |  24 +-
 src/libbpf.map               |   2 +
 src/libbpf_internal.h        |   8 +-
 src/xsk.c                    |   4 +-
 10 files changed, 539 insertions(+), 436 deletions(-)

--
2.17.1
2019-10-22 16:15:55 -07:00
Andrii Nakryiko
beb9f88080 sync: auto-generate latest BPF helpers
Latest changes to BPF helper definitions.
2019-10-22 16:15:55 -07:00
Andrii Nakryiko
c7b5116f71 libbpf: Make DECLARE_LIBBPF_OPTS macro strictly a variable declaration
LIBBPF_OPTS is implemented as a mix of a field declaration and a memset
+ assignment. This makes it neither a variable declaration nor purely
a statement, which is a problem: you can't mix it with other variable
declarations or with other function statements, because C90
compiler mode emits a warning on mixing all of that together.

This patch changes LIBBPF_OPTS into strictly a declaration of a variable
and solves this problem, as can be seen in the case of bpftool, which
previously would emit a compiler warning if done this way (LIBBPF_OPTS as
part of a function's variable declaration block).

This patch also renames LIBBPF_OPTS to DECLARE_LIBBPF_OPTS to follow
the kernel convention for similar macros more closely.

v1->v2:
- rename LIBBPF_OPTS into DECLARE_LIBBPF_OPTS (Jakub Sitnicki).

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20191022172100.3281465-1-andriin@fb.com
2019-10-22 16:15:55 -07:00
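
A usage sketch (the file name and option value are illustrative): because the macro now expands to a plain variable declaration, it can sit in a C90-style declaration block next to other declarations.

  #include <bpf/libbpf.h>

  void open_example(void)
  {
          /* both lines below are declarations, so strict C90 mode stays quiet */
          DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts, .relaxed_maps = true);
          struct bpf_object *obj = bpf_object__open_file("prog.o", &opts);

          if (libbpf_get_error(obj))
                  return;
          bpf_object__close(obj);
  }
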
Andrii Nakryiko
2b0cd55bf5 libbpf: Teach bpf_object__open to guess program types
Teach bpf_object__open how to guess program type and expected attach
type from section names, similar to what bpf_prog_load() does. This
seems like a really useful feature, and it was an oversight not to have
done it during bpf_object__open(). To preserve the backwards-compatible
behavior of bpf_prog_load(), its attr->prog_type is treated as an
override of bpf_object__open() decisions, if attr->prog_type is not
UNSPECIFIED.

There is a slight difference in behavior for bpf_prog_load().
Previously, if bpf_prog_load() was loading a BPF object with more than one
program, the first program's guessed program type and expected attach type
would determine the corresponding attributes of all the subsequent
programs, even if their section names suggested otherwise. That seems like
a rather dubious behavior, and with this change it behaves more
sanely: each program's type is determined individually, unless they are
forced to uniformity through attr->prog_type.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-5-andriin@fb.com
2019-10-22 16:15:55 -07:00
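
For example (a sketch; the section and function names are illustrative), after this change bpf_object__open() marks the first program below as BPF_PROG_TYPE_XDP and the second as BPF_PROG_TYPE_KPROBE purely from their section names, with no explicit bpf_program__set_type() calls:

  #include <linux/bpf.h>
  #include <linux/ptrace.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int handle_xdp(struct xdp_md *ctx)
  {
          return XDP_PASS;
  }

  SEC("kprobe/do_sys_open")
  int handle_open(struct pt_regs *ctx)
  {
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";
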
Andrii Nakryiko
188276ca5f libbpf: Add uprobe/uretprobe and tp/raw_tp section suffixes
Map uprobe/uretprobe into KPROBE program type. tp/raw_tp are just an
alias for more verbose tracepoint/raw_tracepoint, respectively.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-4-andriin@fb.com
2019-10-22 16:15:55 -07:00
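
So, continuing the sketch above, section names like the following now resolve without extra setup (the attach targets shown are illustrative):

  SEC("uprobe/my_user_func")      /* mapped to the KPROBE program type */
  int handle_uprobe(struct pt_regs *ctx) { return 0; }

  SEC("tp/sched/sched_switch")    /* alias for "tracepoint/sched/sched_switch" */
  int handle_switch(void *ctx) { return 0; }
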
Andrii Nakryiko
87c4984da8 libbpf: Add bpf_program__get_{type, expected_attach_type) APIs
There are bpf_program__set_type() and
bpf_program__set_expected_attach_type(), but no corresponding getters,
which seems rather incomplete. Fix this.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-3-andriin@fb.com
2019-10-22 16:15:55 -07:00
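
A minimal sketch of the now-complete getter/setter pairing ('prog' is an already-opened bpf_program pointer):

  enum bpf_prog_type type = bpf_program__get_type(prog);
  enum bpf_attach_type attach = bpf_program__get_expected_attach_type(prog);
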
Andrii Nakryiko
a5611ba6e8 tools: Sync if_link.h
Sync if_link.h into tools/ and get rid of annoying libbpf Makefile warning.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-2-andriin@fb.com
2019-10-22 16:15:55 -07:00
Kefeng Wang
c6e01425b6 tools, bpf: Rename pr_warning to pr_warn to align with kernel logging
For kernel logging macros, pr_warning() is completely removed and
replaced by pr_warn(). By using pr_warn() in tools/lib/bpf/ for
symmetry to kernel logging macros, we could eventually drop the
use of pr_warning() in the whole kernel tree.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191021055532.185245-1-wangkefeng.wang@huawei.com
2019-10-22 16:15:55 -07:00
John Fastabend
58e3a8fac1 bpf, libbpf: Add kernel version section parsing back
With commit "libbpf: stop enforcing kern_version,..." we removed the
kernel version section parsing in favor of querying for the kernel
using uname() and populating the version using the result of the
query. After this any version sections were simply ignored.

Unfortunately, the world of kernels is not so friendly. I've found some
customized kernels where uname() does not match the in kernel version.
To fix this so programs can load in this environment this patch adds
back parsing the section and if it exists uses the user specified
kernel version to override the uname() result. However, keep most the
kernel uname() discovery bits so users are not required to insert the
version except in these odd cases.

Fixes: 5e61f27070292 ("libbpf: stop enforcing kern_version, populate it for users")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157140968634.9073.6407090804163937103.stgit@john-XPS-13-9370
2019-10-22 16:15:55 -07:00
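
On such kernels a BPF object can pin the version explicitly again by carrying the section libbpf now parses (a sketch; the version value is illustrative and overrides the uname()-derived one only when the section is present):

  #include <linux/types.h>
  #include <linux/version.h>
  #include <bpf/bpf_helpers.h>

  __u32 _version SEC("version") = KERNEL_VERSION(4, 19, 127);
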
Alexei Starovoitov
1b27702c14 bpf: Check types of arguments passed into helpers
Introduce new helper that reuses existing skb perf_event output
implementation, but can be called from raw_tracepoint programs
that receive 'struct sk_buff *' as tracepoint argument or
can walk other kernel data structures to skb pointer.

In order to do that teach verifier to resolve true C types
of bpf helpers into in-kernel BTF ids.
The type of kernel pointer passed by raw tracepoint into bpf
program will be tracked by the verifier all the way until
it's passed into helper function.
For example:
kfree_skb() kernel function calls trace_kfree_skb(skb, loc);
bpf programs receive that skb pointer and may eventually
pass it into bpf_skb_output() bpf helper which in-kernel is
implemented via bpf_skb_event_output() kernel function.
Its first argument in the kernel is 'struct sk_buff *'.
The verifier makes sure that types match all the way.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-11-ast@kernel.org
2019-10-22 16:15:55 -07:00
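
A simplified sketch of what this enables on the BPF side (not the actual selftest; the perf event map and metadata value are illustrative): the verifier knows that 'skb' below really is a 'struct sk_buff *' coming from the tracepoint, so passing it to bpf_skb_output() type-checks.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct sk_buff;                         /* real type comes from kernel BTF */

  struct {
          __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
          __uint(key_size, sizeof(int));
          __uint(value_size, sizeof(int));
  } perf_buf_map SEC(".maps");

  /* mirrors the arguments of trace_kfree_skb(skb, location) */
  struct kfree_skb_args {
          struct sk_buff *skb;
          void *location;
  };

  SEC("tp_btf/kfree_skb")
  int trace_kfree_skb(struct kfree_skb_args *args)
  {
          struct sk_buff *skb = args->skb;
          __u64 meta = 0;

          bpf_skb_output(skb, &perf_buf_map, BPF_F_CURRENT_CPU,
                         &meta, sizeof(meta));
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";
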
Alexei Starovoitov
39cf9fc90f libbpf: Auto-detect btf_id of BTF-based raw_tracepoints
It's the responsibility of the bpf program author to annotate the program
with SEC("tp_btf/name") where "name" is a valid raw tracepoint.
libbpf will try to find "name" in vmlinux BTF and error out
in case vmlinux BTF is not available or "name" is not found.
If "name" is indeed a valid raw tracepoint, then the in-kernel BTF
will have a "btf_trace_##name" typedef that points to the function
prototype of that raw tracepoint. The BTF description captures the
exact arguments the kernel C code is passing into the raw tracepoint.
The kernel verifier will check the types while loading the bpf program.

libbpf keeps BTF type id in expected_attach_type, but since
kernel ignores this attribute for tracing programs copy it
into attach_btf_id attribute before loading.

Later the kernel will use prog->attach_btf_id to select raw tracepoint
during bpf_raw_tracepoint_open syscall command.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-6-ast@kernel.org
2019-10-22 16:15:55 -07:00
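
For reference, the vmlinux BTF entry libbpf searches for when it sees SEC("tp_btf/kfree_skb") is the typedef the kernel emits for that tracepoint, roughly as follows (a C rendering of the BTF information, not code meant to be compiled into a BPF object):

  /* btf_trace_##name: points at the tracepoint's function prototype */
  typedef void (*btf_trace_kfree_skb)(void *__data, struct sk_buff *skb,
                                      void *location);
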
Alexei Starovoitov
bc4a6e9709 bpf: Add attach_btf_id attribute to program load
Add attach_btf_id attribute to prog_load command.
It's similar to existing expected_attach_type attribute which is
used in several cgroup based program types.
Unfortunately expected_attach_type is ignored for
tracing programs and cannot be reused for the new purpose.
Hence introduce attach_btf_id to verify bpf programs against
a given in-kernel BTF type id at load time.
It is strictly checked to be valid for raw_tp programs only.
In later patches it will become:
btf_id == 0: semantics of existing raw_tp progs.
btf_id > 0: raw_tp with BTF and additional type safety.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-5-ast@kernel.org
2019-10-22 16:15:55 -07:00
Andrii Nakryiko
4a50ceb043 Makefile: back-port _FILE_OFFSET_BITS=64 and _LARGEFILE64_SOURCE to Makefile
Upstream commit 71dd77fd4bf7 ("libbpf: use LFS (_FILE_OFFSET_BITS) instead
of direct mmap2 syscall") added _FILE_OFFSET_BITS=64 and
_LARGEFILE64_SOURCE CFLAGS. Back-port them to Github's mirror to avoid
compilation problems on ARM.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
2019-10-22 14:50:23 -07:00
Andrii Nakryiko
4d86cae4f0 ci: disable GCC's -Wstringop-truncation noisy error
This error is usually a false positive for us. Disable it.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
2019-10-15 19:43:48 -07:00
Andrii Nakryiko
33b374395f sync: adjust sync script for test_libbpf.c rename and bpf_helper_defs.h
Accommodate the following changes:
- test_libbpf.cpp was renamed to test_libbpf.c;
- bpf_helper_defs.h should be ignored for the consistency check at the end,
  as it's not checked in on the Linux side.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
2019-10-15 19:43:48 -07:00
Andrii Nakryiko
ade4409352 sync: latest libbpf changes from kernel
Syncing latest libbpf commits from kernel repository.
Baseline bpf-next commit:   f05c2001ecc98629cecd47728e4db11e5a17e58d
Checkpoint bpf-next commit: da927466a152a9497c05926a95c6aebba6d3ad5b
Baseline bpf commit:        106c35dda32f8b63f88cad7433f1b8bb0056958a
Checkpoint bpf commit:      9e8acd9c44a0dd52b2922eeb82398c04e356c058

Andrii Nakryiko (7):
  libbpf: Fix struct end padding in btf_dump
  libbpf: Generate more efficient BPF_CORE_READ code
  libbpf: Handle invalid typedef emitted by old GCC
  libbpf: Update BTF reloc support to latest Clang format
  libbpf: Refactor bpf_object__open APIs to use common opts
  libbpf: Add support for field existance CO-RE relocation
  libbpf: Add BPF-side definitions of supported field relocation kinds

Ilya Maximets (1):
  libbpf: Fix passing uninitialized bytes to setsockopt

 src/bpf_core_read.h   |  28 ++++++-
 src/btf.c             |  16 ++--
 src/btf.h             |   4 +-
 src/btf_dump.c        |  19 ++++-
 src/libbpf.c          | 169 ++++++++++++++++++++++++++----------------
 src/libbpf.h          |   4 +-
 src/libbpf_internal.h |  25 +++++--
 src/xsk.c             |   1 +
 8 files changed, 180 insertions(+), 86 deletions(-)

--
2.17.1
2019-10-15 19:43:48 -07:00
Andrii Nakryiko
2f9abb2a26 sync: auto-generate latest BPF helpers
Latest changes to BPF helper definitions.
2019-10-15 19:43:48 -07:00