Commit d2dcc67

davemarchevsky authored and Alexei Starovoitov committed
bpf: Migrate bpf_rbtree_add and bpf_list_push_{front,back} to possibly fail

Consider this code snippet:

  struct node {
    long key;
    bpf_list_node l;
    bpf_rb_node r;
    bpf_refcount ref;
  }

  int some_bpf_prog(void *ctx)
  {
    struct node *n = bpf_obj_new(/*...*/), *m;

    bpf_spin_lock(&glock);

    bpf_rbtree_add(&some_tree, &n->r, /* ... */);
    m = bpf_refcount_acquire(n);
    bpf_rbtree_add(&other_tree, &m->r, /* ... */);

    bpf_spin_unlock(&glock);
    /* ... */
  }

After bpf_refcount_acquire, n and m point to the same underlying memory, and that node's bpf_rb_node field is being used by the some_tree insert, so overwriting it as a result of the second insert is an error. In order to properly support refcounted nodes, the rbtree and list insert functions must be allowed to fail. This patch adds such support.

The kfuncs bpf_rbtree_add, bpf_list_push_{front,back} are modified to return an int indicating success/failure, with 0 -> success, nonzero -> failure.

bpf_obj_drop on failure
=======================

Currently the only reason an insert can fail is the example above: the bpf_{list,rb}_node is already in use. When such a failure occurs, the insert kfuncs will bpf_obj_drop the input node. This allows the insert operations to logically fail without changing their verifier owning ref behavior, namely the unconditional release_reference of the input owning ref.

With insert that always succeeds, ownership of the node is always passed to the collection, since the node always ends up in the collection.

With a possibly-failed insert w/ bpf_obj_drop, ownership of the node is always passed either to the collection (success), or to bpf_obj_drop (failure). Regardless, it's correct to continue unconditionally releasing the input owning ref, as something is always taking ownership from the calling program on insert.

Keeping owning ref behavior unchanged results in a nice default UX for insert functions that can fail. If the program's reaction to a failed insert is "fine, just get rid of this owning ref for me and let me go on with my business", then there's no reason to check for failure since that's default behavior. e.g.:

  long important_failures = 0;

  int some_bpf_prog(void *ctx)
  {
    struct node *n, *m, *o; /* all bpf_obj_new'd */

    bpf_spin_lock(&glock);
    bpf_rbtree_add(&some_tree, &n->node, /* ... */);
    bpf_rbtree_add(&some_tree, &m->node, /* ... */);
    if (bpf_rbtree_add(&some_tree, &o->node, /* ... */)) {
      important_failures++;
    }
    bpf_spin_unlock(&glock);
  }

If we instead chose to pass ownership back to the program on failed insert - by returning NULL on success or an owning ref on failure - programs would always have to do something with the returned ref on failure. The most likely action is probably "I'll just get rid of this owning ref and go about my business", which ideally would look like:

  if (n = bpf_rbtree_add(&some_tree, &n->node, /* ... */))
    bpf_obj_drop(n);

But bpf_obj_drop isn't allowed in a critical section and inserts must occur within one, so in reality error handling would become a hard-to-parse mess.

For refcounted nodes, we can replicate the "pass ownership back to program on failure" logic with this patch's semantics, albeit in an ugly way:

  struct node *n = bpf_obj_new(/* ... */), *m;

  bpf_spin_lock(&glock);

  m = bpf_refcount_acquire(n);
  if (bpf_rbtree_add(&some_tree, &n->node, /* ... */)) {
    /* Do something with m */
  }

  bpf_spin_unlock(&glock);
  bpf_obj_drop(m);

bpf_refcount_acquire is used to simulate "return owning ref on failure". This should be an uncommon occurrence, though.
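
For illustration, a minimal sketch of how the problematic snippet at the top could be written under the new semantics so the failed second insert becomes visible to the program. It reuses struct node, glock, some_tree and other_tree from that snippet; the less callback and the failed_inserts counter are hypothetical names, not taken from the patch:

  long failed_inserts = 0;

  int some_bpf_prog(void *ctx)
  {
    struct node *n = bpf_obj_new(typeof(*n)), *m;

    if (!n)
      return 0;

    bpf_spin_lock(&glock);

    bpf_rbtree_add(&some_tree, &n->r, less);
    m = bpf_refcount_acquire(n);
    if (bpf_rbtree_add(&other_tree, &m->r, less)) {
      /* n->r is already in use by some_tree, so this insert fails; the
       * kernel bpf_obj_drop's the owning ref it was handed (refcount
       * drops from 2 to 1), the node stays in some_tree and is not freed.
       */
      failed_inserts++;
    }

    bpf_spin_unlock(&glock);
    return 0;
  }
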
Addition of two verifier-fixup'd args to collection inserts
===========================================================

The actual bpf_obj_drop kfunc is bpf_obj_drop_impl(void *, struct btf_struct_meta *), with the bpf_obj_drop macro populating the second arg with 0 and the verifier later filling in the arg during insn fixup.

Because bpf_rbtree_add and bpf_list_push_{front,back} now might do bpf_obj_drop, these kfuncs need a btf_struct_meta parameter that can be passed to bpf_obj_drop_impl.

Similarly, because the 'node' param to those insert functions is the bpf_{list,rb}_node within the node type, and bpf_obj_drop expects a pointer to the beginning of the node, the insert functions need to be able to find the beginning of the node struct. A second verifier-populated param is necessary: the offset of {list,rb}_node within the node type.

These two new params allow the insert kfuncs to correctly call __bpf_obj_drop_impl:

  beginning_of_node = bpf_rb_node_ptr - offset
  if (already_inserted)
    __bpf_obj_drop_impl(beginning_of_node, btf_struct_meta->record);

Similarly to other kfuncs with "hidden" verifier-populated params, the insert functions are renamed with an _impl suffix and a macro is provided for common usage. For example, the bpf_rbtree_add kfunc is now bpf_rbtree_add_impl, and bpf_rbtree_add is now a macro which sets the "hidden" args to 0.

Due to the two new args, BPF progs will need to be recompiled to work with the new _impl kfuncs.

This patch also rewrites the "hidden argument" explanation to more directly say why the BPF program writer doesn't need to populate the arguments with anything meaningful.
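
For illustration, the user-facing wrappers (in the selftests' bpf_experimental.h, likely the fourth changed file, whose diff is not shown below) have roughly the following shape. The NULL/0 arguments are the "hidden" meta and offset parameters that the verifier overwrites during insn fixup, which is why program writers never populate them; treat this as an approximate sketch rather than the exact declarations:

  extern int bpf_rbtree_add_impl(struct bpf_rb_root *root, struct bpf_rb_node *node,
                                 bool (less)(struct bpf_rb_node *a, const struct bpf_rb_node *b),
                                 void *meta, __u64 off) __ksym;

  /* Convenience wrappers; hidden args are left as NULL/0 for the verifier to fill in. */
  #define bpf_rbtree_add(head, node, less) bpf_rbtree_add_impl(head, node, less, NULL, 0)
  #define bpf_list_push_front(head, node) bpf_list_push_front_impl(head, node, NULL, 0)
  #define bpf_list_push_back(head, node) bpf_list_push_back_impl(head, node, NULL, 0)
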
How does this new logic affect non-owning references?
=====================================================

Currently, non-owning refs are valid until the end of the critical section in which they're created. We can make this guarantee because, if a non-owning ref exists, the referent was added to some collection. The collection will drop() its nodes when it goes away, but it can't go away while our program is accessing it, so that's not a problem. If the referent is removed from the collection in the same CS that it was added in, it can't be bpf_obj_drop'd until after CS end. Those are the only two ways to free the referent's memory and neither can happen until after the non-owning ref's lifetime ends.

On first glance, having these collection insert functions potentially bpf_obj_drop their input seems like it breaks the "can't be bpf_obj_drop'd until after CS end" line of reasoning. But we care about the memory not being _freed_ until CS end, and a previous patch in the series modified bpf_obj_drop such that it doesn't free refcounted nodes until refcount == 0. So the statement can be more accurately rewritten as "can't be free'd until after CS end".

We can prove that this rewritten statement holds for any non-owning reference produced by the collection insert functions:

* If the input to the insert function is _not_ refcounted
  * We have an owning reference to the input, and can conclude it isn't in any collection
    * Inserting a node in a collection turns owning refs into non-owning, and since our input type isn't refcounted, there's no way to obtain additional owning refs to the same underlying memory
  * Because our node isn't in any collection, the insert operation cannot fail, so bpf_obj_drop will not execute
  * If bpf_obj_drop is guaranteed not to execute, there's no risk of memory being free'd

* Otherwise, the input to the insert function is refcounted
  * If the insert operation fails due to the node's list_head or rb_root already being in some collection, there was some previous successful insert which passed refcount to the collection
  * We have an owning reference to the input, it must have been acquired via bpf_refcount_acquire, which bumped the refcount
  * refcount must be >= 2 since there's a valid owning reference and the node is already in a collection
  * Insert triggering bpf_obj_drop will decr refcount to >= 1, never resulting in a free

So although we may do bpf_obj_drop during the critical section, this will never result in memory being free'd, and no changes to non-owning ref logic are needed in this patch.

Signed-off-by: Dave Marchevsky <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
1 parent 7c50b1c commit d2dcc67

4 files changed, 148 insertions(+), 51 deletions(-)

include/linux/bpf_verifier.h

Lines changed: 6 additions & 1 deletion

@@ -464,7 +464,12 @@ struct bpf_insn_aux_data {
 			 */
 			struct bpf_loop_inline_state loop_inline_state;
 		};
-		u64 obj_new_size; /* remember the size of type passed to bpf_obj_new to rewrite R1 */
+		union {
+			/* remember the size of type passed to bpf_obj_new to rewrite R1 */
+			u64 obj_new_size;
+			/* remember the offset of node field within type to rewrite */
+			u64 insert_off;
+		};
 		struct btf_struct_meta *kptr_struct_meta;
 		u64 map_key_state; /* constant (32 bit) key tracking for maps */
 		int ctx_field_size; /* the ctx field size for load insn, maybe 0 */

kernel/bpf/helpers.c

Lines changed: 48 additions & 17 deletions

@@ -1931,25 +1931,44 @@ __bpf_kfunc void *bpf_refcount_acquire_impl(void *p__refcounted_kptr, void *meta
 	return (void *)p__refcounted_kptr;
 }
 
-static void __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *head, bool tail)
+static int __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *head,
+			  bool tail, struct btf_record *rec, u64 off)
 {
 	struct list_head *n = (void *)node, *h = (void *)head;
 
 	if (unlikely(!h->next))
 		INIT_LIST_HEAD(h);
 	if (unlikely(!n->next))
 		INIT_LIST_HEAD(n);
+	if (!list_empty(n)) {
+		/* Only called from BPF prog, no need to migrate_disable */
+		__bpf_obj_drop_impl(n - off, rec);
+		return -EINVAL;
+	}
+
 	tail ? list_add_tail(n, h) : list_add(n, h);
+
+	return 0;
 }
 
-__bpf_kfunc void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node)
+__bpf_kfunc int bpf_list_push_front_impl(struct bpf_list_head *head,
+					 struct bpf_list_node *node,
+					 void *meta__ign, u64 off)
 {
-	return __bpf_list_add(node, head, false);
+	struct btf_struct_meta *meta = meta__ign;
+
+	return __bpf_list_add(node, head, false,
+			      meta ? meta->record : NULL, off);
 }
 
-__bpf_kfunc void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node)
+__bpf_kfunc int bpf_list_push_back_impl(struct bpf_list_head *head,
+					struct bpf_list_node *node,
+					void *meta__ign, u64 off)
 {
-	return __bpf_list_add(node, head, true);
+	struct btf_struct_meta *meta = meta__ign;
+
+	return __bpf_list_add(node, head, true,
+			      meta ? meta->record : NULL, off);
 }
 
 static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tail)
@@ -1989,14 +2008,23 @@ __bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
 /* Need to copy rbtree_add_cached's logic here because our 'less' is a BPF
  * program
  */
-static void __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
-			     void *less)
+static int __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
+			    void *less, struct btf_record *rec, u64 off)
 {
 	struct rb_node **link = &((struct rb_root_cached *)root)->rb_root.rb_node;
+	struct rb_node *parent = NULL, *n = (struct rb_node *)node;
 	bpf_callback_t cb = (bpf_callback_t)less;
-	struct rb_node *parent = NULL;
 	bool leftmost = true;
 
+	if (!n->__rb_parent_color)
+		RB_CLEAR_NODE(n);
+
+	if (!RB_EMPTY_NODE(n)) {
+		/* Only called from BPF prog, no need to migrate_disable */
+		__bpf_obj_drop_impl(n - off, rec);
+		return -EINVAL;
+	}
+
 	while (*link) {
 		parent = *link;
 		if (cb((uintptr_t)node, (uintptr_t)parent, 0, 0, 0)) {
@@ -2007,15 +2035,18 @@ static void __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
 		}
 	}
 
-	rb_link_node((struct rb_node *)node, parent, link);
-	rb_insert_color_cached((struct rb_node *)node,
-			       (struct rb_root_cached *)root, leftmost);
+	rb_link_node(n, parent, link);
+	rb_insert_color_cached(n, (struct rb_root_cached *)root, leftmost);
+	return 0;
 }
 
-__bpf_kfunc void bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
-				bool (less)(struct bpf_rb_node *a, const struct bpf_rb_node *b))
+__bpf_kfunc int bpf_rbtree_add_impl(struct bpf_rb_root *root, struct bpf_rb_node *node,
+				    bool (less)(struct bpf_rb_node *a, const struct bpf_rb_node *b),
+				    void *meta__ign, u64 off)
 {
-	__bpf_rbtree_add(root, node, (void *)less);
+	struct btf_struct_meta *meta = meta__ign;
+
+	return __bpf_rbtree_add(root, node, (void *)less, meta ? meta->record : NULL, off);
 }
 
 __bpf_kfunc struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root)
@@ -2291,14 +2322,14 @@ BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
 BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE)
-BTF_ID_FLAGS(func, bpf_list_push_front)
-BTF_ID_FLAGS(func, bpf_list_push_back)
+BTF_ID_FLAGS(func, bpf_list_push_front_impl)
+BTF_ID_FLAGS(func, bpf_list_push_back_impl)
 BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE)
-BTF_ID_FLAGS(func, bpf_rbtree_add)
+BTF_ID_FLAGS(func, bpf_rbtree_add_impl)
 BTF_ID_FLAGS(func, bpf_rbtree_first, KF_RET_NULL)
 
 #ifdef CONFIG_CGROUPS
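
To make the new off parameter concrete: it is the byte offset of the bpf_{list,rb}_node field within the program's node type, recorded by the verifier at the call site (the insert_off field added above), which lets the insert kfuncs recover the start of the object before dropping it. A rough illustration with a hypothetical node type, not code from the patch:

  struct node_data {
    long key;
    struct bpf_rb_node r;    /* off == offsetof(struct node_data, r) */
    struct bpf_refcount ref;
  };

  /* 'node' is the &n->r pointer the program passed to the insert kfunc;
   * subtracting 'off' in byte units yields the object start that gets
   * handed to __bpf_obj_drop_impl() when the insert fails.
   */
  static void *node_object_start(struct bpf_rb_node *node, u64 off)
  {
    return (void *)node - off;
  }
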

kernel/bpf/verifier.c

Lines changed: 55 additions & 23 deletions

@@ -8500,10 +8500,10 @@ static int set_rbtree_add_callback_state(struct bpf_verifier_env *env,
 					 struct bpf_func_state *callee,
 					 int insn_idx)
 {
-	/* void bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
+	/* void bpf_rbtree_add_impl(struct bpf_rb_root *root, struct bpf_rb_node *node,
 	 *                     bool (less)(struct bpf_rb_node *a, const struct bpf_rb_node *b));
 	 *
-	 * 'struct bpf_rb_node *node' arg to bpf_rbtree_add is the same PTR_TO_BTF_ID w/ offset
+	 * 'struct bpf_rb_node *node' arg to bpf_rbtree_add_impl is the same PTR_TO_BTF_ID w/ offset
 	 * that 'less' callback args will be receiving. However, 'node' arg was release_reference'd
 	 * by this point, so look at 'root'
 	 */
@@ -9571,16 +9571,16 @@ enum special_kfunc_type {
 	KF_bpf_obj_new_impl,
 	KF_bpf_obj_drop_impl,
 	KF_bpf_refcount_acquire_impl,
-	KF_bpf_list_push_front,
-	KF_bpf_list_push_back,
+	KF_bpf_list_push_front_impl,
+	KF_bpf_list_push_back_impl,
 	KF_bpf_list_pop_front,
 	KF_bpf_list_pop_back,
 	KF_bpf_cast_to_kern_ctx,
 	KF_bpf_rdonly_cast,
 	KF_bpf_rcu_read_lock,
 	KF_bpf_rcu_read_unlock,
 	KF_bpf_rbtree_remove,
-	KF_bpf_rbtree_add,
+	KF_bpf_rbtree_add_impl,
 	KF_bpf_rbtree_first,
 	KF_bpf_dynptr_from_skb,
 	KF_bpf_dynptr_from_xdp,
@@ -9592,14 +9592,14 @@ BTF_SET_START(special_kfunc_set)
 BTF_ID(func, bpf_obj_new_impl)
 BTF_ID(func, bpf_obj_drop_impl)
 BTF_ID(func, bpf_refcount_acquire_impl)
-BTF_ID(func, bpf_list_push_front)
-BTF_ID(func, bpf_list_push_back)
+BTF_ID(func, bpf_list_push_front_impl)
+BTF_ID(func, bpf_list_push_back_impl)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
 BTF_ID(func, bpf_cast_to_kern_ctx)
 BTF_ID(func, bpf_rdonly_cast)
 BTF_ID(func, bpf_rbtree_remove)
-BTF_ID(func, bpf_rbtree_add)
+BTF_ID(func, bpf_rbtree_add_impl)
 BTF_ID(func, bpf_rbtree_first)
 BTF_ID(func, bpf_dynptr_from_skb)
 BTF_ID(func, bpf_dynptr_from_xdp)
@@ -9611,16 +9611,16 @@ BTF_ID_LIST(special_kfunc_list)
 BTF_ID(func, bpf_obj_new_impl)
 BTF_ID(func, bpf_obj_drop_impl)
 BTF_ID(func, bpf_refcount_acquire_impl)
-BTF_ID(func, bpf_list_push_front)
-BTF_ID(func, bpf_list_push_back)
+BTF_ID(func, bpf_list_push_front_impl)
+BTF_ID(func, bpf_list_push_back_impl)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
 BTF_ID(func, bpf_cast_to_kern_ctx)
 BTF_ID(func, bpf_rdonly_cast)
 BTF_ID(func, bpf_rcu_read_lock)
 BTF_ID(func, bpf_rcu_read_unlock)
 BTF_ID(func, bpf_rbtree_remove)
-BTF_ID(func, bpf_rbtree_add)
+BTF_ID(func, bpf_rbtree_add_impl)
 BTF_ID(func, bpf_rbtree_first)
 BTF_ID(func, bpf_dynptr_from_skb)
 BTF_ID(func, bpf_dynptr_from_xdp)
@@ -9954,15 +9954,15 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_
 
 static bool is_bpf_list_api_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_list_push_front] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_push_back] ||
+	return btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_back];
 }
 
 static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_rbtree_add] ||
+	return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] ||
 	       btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
 	       btf_id == special_kfunc_list[KF_bpf_rbtree_first];
 }
@@ -9975,7 +9975,7 @@ static bool is_bpf_graph_api_kfunc(u32 btf_id)
 
 static bool is_callback_calling_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_rbtree_add];
+	return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl];
 }
 
 static bool is_rbtree_lock_required_kfunc(u32 btf_id)
@@ -10016,12 +10016,12 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 
 	switch (node_field_type) {
 	case BPF_LIST_NODE:
-		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back]);
+		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
 		break;
 	case BPF_RB_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_add]);
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl]);
 		break;
 	default:
 		verbose(env, "verifier internal error: unexpected graph node argument type %s\n",
@@ -10702,10 +10702,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
-	if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front] ||
-	    meta.func_id == special_kfunc_list[KF_bpf_list_push_back] ||
-	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add]) {
+	if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
+	    meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
 		release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
+		insn_aux->insert_off = regs[BPF_REG_2].off;
 		err = ref_convert_owning_non_owning(env, release_ref_obj_id);
 		if (err) {
 			verbose(env, "kfunc %s#%d conversion of owning ref to non-owning failed\n",
@@ -10721,7 +10722,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
-	if (meta.func_id == special_kfunc_list[KF_bpf_rbtree_add]) {
+	if (meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
 		err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
 					set_rbtree_add_callback_state);
 		if (err) {
@@ -14764,7 +14765,7 @@ static bool regs_exact(const struct bpf_reg_state *rold,
 		       const struct bpf_reg_state *rcur,
 		       struct bpf_id_pair *idmap)
 {
-	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 && 
+	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
 	       check_ids(rold->id, rcur->id, idmap) &&
 	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
 }
@@ -17407,6 +17408,23 @@ static void specialize_kfunc(struct bpf_verifier_env *env,
 	}
 }
 
+static void __fixup_collection_insert_kfunc(struct bpf_insn_aux_data *insn_aux,
+					    u16 struct_meta_reg,
+					    u16 node_offset_reg,
+					    struct bpf_insn *insn,
+					    struct bpf_insn *insn_buf,
+					    int *cnt)
+{
+	struct btf_struct_meta *kptr_struct_meta = insn_aux->kptr_struct_meta;
+	struct bpf_insn addr[2] = { BPF_LD_IMM64(struct_meta_reg, (long)kptr_struct_meta) };
+
+	insn_buf[0] = addr[0];
+	insn_buf[1] = addr[1];
+	insn_buf[2] = BPF_MOV64_IMM(node_offset_reg, insn_aux->insert_off);
+	insn_buf[3] = *insn;
+	*cnt = 4;
+}
+
 static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			    struct bpf_insn *insn_buf, int insn_idx, int *cnt)
 {
@@ -17453,6 +17471,20 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		insn_buf[1] = addr[1];
 		insn_buf[2] = *insn;
 		*cnt = 3;
+	} else if (desc->func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+		   desc->func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
+		   desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
+		int struct_meta_reg = BPF_REG_3;
+		int node_offset_reg = BPF_REG_4;
+
+		/* rbtree_add has extra 'less' arg, so args-to-fixup are in diff regs */
+		if (desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
+			struct_meta_reg = BPF_REG_4;
+			node_offset_reg = BPF_REG_5;
+		}
+
+		__fixup_collection_insert_kfunc(&env->insn_aux_data[insn_idx], struct_meta_reg,
+						node_offset_reg, insn, insn_buf, cnt);
 	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
 		   desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
 		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
