Currently, some types that are not used in helper functions are missing from the generated bindings, e.g. `bpf_sk_lookup`, `bpf_sockopt`, etc.
This patch replaces `bpf_map_.*` with `bpf_.*`.
Note: this PR does not include the regenerated bindings files, as they are better created by the codegen script.
The missing bindings can be generated with `cargo xtask codegen --libbpf-dir /<PATH_TO>/libbpf`.
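Roughly speaking, the change looks like the following hedged sketch, assuming the codegen step passes type patterns to bindgen (the header path and builder calls here are illustrative, not the actual xtask code):

// Widening the allowlist from map-related types to all bpf_* types means
// structs such as bpf_sk_lookup and bpf_sockopt also get bindings generated.
fn generate_bindings() -> bindgen::Bindings {
    bindgen::Builder::default()
        .header("wrapper.h")       // hypothetical header
        .allowlist_type("bpf_.*")  // previously something like "bpf_map_.*"
        .generate()
        .expect("failed to generate bindings")
}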
Since we support multiple maps in the same section, the section_index is
no longer a unique way to identify maps. This commit uses the symbol
index as the identifier, but falls back to section_index for rodata
and bss maps since we don't retrieve the symbol_index during parsing.
Signed-off-by: Dave Tucker <dave@dtucker.co.uk>
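A minimal sketch of the identifier scheme described above (hypothetical types and names, not the actual aya parsing code):

// Prefer the symbol index; fall back to the section index for rodata/bss
// maps, where no symbol index is recorded during parsing.
enum MapId {
    Symbol(usize),
    Section(usize),
}

fn map_id(symbol_index: Option<usize>, section_index: usize) -> MapId {
    match symbol_index {
        Some(i) => MapId::Symbol(i),
        None => MapId::Section(section_index),
    }
}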
Files changed:
M aya/src/generated/linux_bindings_aarch64.rs
M aya/src/generated/linux_bindings_armv7.rs
M aya/src/generated/linux_bindings_x86_64.rs
M bpf/aya-bpf-bindings/src/aarch64/bindings.rs
M bpf/aya-bpf-bindings/src/aarch64/getters.rs
M bpf/aya-bpf-bindings/src/aarch64/helpers.rs
M bpf/aya-bpf-bindings/src/armv7/bindings.rs
M bpf/aya-bpf-bindings/src/armv7/getters.rs
M bpf/aya-bpf-bindings/src/armv7/helpers.rs
M bpf/aya-bpf-bindings/src/x86_64/bindings.rs
M bpf/aya-bpf-bindings/src/x86_64/getters.rs
M bpf/aya-bpf-bindings/src/x86_64/helpers.rs
It's a workaround for the upstream bindgen issue:
https://github.com/rust-lang/rust-bindgen/issues/2083
tl;dr: Rust nightly complains about #[repr(packed)] structs deriving
Debug without Copy.
It needs to be fixed properly upstream, but for now we have to disable
Debug derive here.
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
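For reference, a minimal standalone illustration of the lint (not taken from the generated bindings):

// Deriving Debug on a packed struct makes the derive take references to
// fields that may be unaligned; nightly rejects this unless the struct is
// also Copy (so the derive can copy the fields out instead).
#[repr(packed)]
#[derive(Debug)] // triggers the unaligned_references lint on nightly
struct Packed {
    a: u8,
    b: u32,
}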
This forces all maps to the maps section so we remain compatible with
libbpf. This requires #181 to avoid breaking userspace.
Signed-off-by: Dave Tucker <dave@dtucker.co.uk>
Remove LinkRef and remove the Rc<RefCell<_>> that was used to store
type-erased link values in ProgramData. Among other things, this allows
`Bpf` to be `Send`, which makes it easier to use it with async runtimes.
Change the link API to:
let link_id = prog.attach(...)?;
...
prog.detach(link_id)?;
Link ids are strongly typed, so it's impossible to, e.g.:
let link_id = uprobe.attach(...)?;
xdp.detach(link_id);
as that would result in a compile-time error.
Links are still stored inside ProgramData, and unless detached
explicitly, they are automatically detached when the parent program gets
dropped.
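For illustration, a minimal sketch (not the actual aya types) of how strongly typed link ids make the mismatch above a type error:

// Hypothetical newtype ids: each program type has its own id type, so a
// UProbeLinkId can't be passed to Xdp::detach(), which expects an XdpLinkId.
pub struct XdpLinkId(u32);
pub struct UProbeLinkId(u32);

pub struct Xdp { /* ... */ }
pub struct UProbe { /* ... */ }

impl UProbe {
    pub fn attach(&mut self) -> Result<UProbeLinkId, ()> {
        Ok(UProbeLinkId(0))
    }
}

impl Xdp {
    pub fn detach(&mut self, _link_id: XdpLinkId) -> Result<(), ()> {
        Ok(())
    }
}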
This commit uses the symbol table to discover all maps inside an ELF
section. Instead of doing what libbpf does - dividing the section data
into equal-sized chunks - we read into the section data using the
symbol address and offset, thus allowing us to support definitions
of varying lengths.
Signed-off-by: Dave Tucker <dave@dtucker.co.uk>
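A rough sketch of the idea (hypothetical names, not the actual aya parsing code):

// For each map symbol in the section, read the definition starting at the
// symbol's offset into the section data; the definition length comes from
// the symbol itself, so definitions of different sizes can coexist.
struct MapSymbol {
    offset: usize, // offset of the symbol within the section
    size: usize,   // size of the map definition
}

fn parse_maps(section_data: &[u8], symbols: &[MapSymbol]) -> Vec<Vec<u8>> {
    symbols
        .iter()
        .map(|sym| section_data[sym.offset..sym.offset + sym.size].to_vec())
        .collect()
}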
This changes PerfBuffer::read_events() to call BytesMut::reserve()
internally, and deprecates PerfBufferError::MoreSpaceNeeded.
This makes for a more ergonomic API and allows for more idiomatic
usage of BytesMut. For example, consider:
let mut buffers = vec![BytesMut::with_capacity(N), ...];
loop {
    let events = oob_cpu_buf.read_events(&mut buffers).unwrap();
    for buf in &mut buffers[..events.read] {
        let sub: Bytes = buf.split_off(n).into();
        process_sub_buf(sub);
    }
    ...
}
This is a common way to process perf bufs, where a sub buffer is split
off from the original buffer and then processed. In the next iteration
of the loop when it's time to read again, two things can happen:
- if processing of the sub buffer is complete and `sub` has been
dropped, read_events() will call buf.reserve(sample_size) and hit a fast
path in BytesMut that will just restore the original capacity of the
buffer (assuming sample_size <= N).
- if processing of the sub buffer hasn't ended (e.g. the buffer has been
stored or is being processed in another thread),
buf.reserve(sample_size) will actually allocate the new memory required
to read the sample.
In other words, calling buf.reserve(sample_size) inside read_events()
simplifies doing zero-copy processing of buffers in many cases.
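As a standalone illustration of the BytesMut behaviour relied on above (example values only, not aya code):

use bytes::BytesMut;

fn main() {
    // N = 4096, sample_size = 1024 in terms of the example above.
    let mut buf = BytesMut::with_capacity(4096);
    buf.extend_from_slice(&[0u8; 1024]);

    let sub = buf.split_off(512); // `buf` keeps the first 512 bytes
    drop(sub);                    // processing of the sub buffer is done

    // With the split-off half gone, reserve() can reuse the original
    // allocation instead of allocating new memory.
    buf.reserve(1024);
    assert!(buf.capacity() >= 512 + 1024);
}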