| author | Puranjay Mohan <puranjay@kernel.org> | 2025-08-27 11:32:43 +0000 |
|---|---|---|
| committer | Alexei Starovoitov <ast@kernel.org> | 2025-08-27 17:16:22 -0700 |
| commit | 16175375da369fcfdcc0127c9ca39b767ae4f885 (patch) | |
| tree | b34141ab6b6b750251ea27f3d1e421586fd2235b /arch/arm64/net/bpf_jit_comp.c | |
| parent | 4c229f337e9c38ecee8d4ca5bb9006f63217ebdd (diff) | |
bpf, arm64: Add JIT support for timed may_goto
When the verifier sees a timed may_goto instruction, it emits a call to
arch_bpf_timed_may_goto() with a stack offset in BPF_REG_AX (arm64 x9)
and expects a count value to be returned in the same register. The
verifier doesn't save or restore any registers before emitting this
call.
arch_bpf_timed_may_goto() should act as a trampoline that calls
bpf_check_timed_may_goto() with the AAPCS64 calling convention.

To support this custom calling convention, implement
arch_bpf_timed_may_goto() in assembly: save and restore the BPF
caller-saved registers, call bpf_check_timed_may_goto() with the arm64
calling convention (first argument and return value both in x0), then
put the result back into BPF_REG_AX before returning.
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Xu Kuohai <xukuohai@huawei.com>
Link: https://lore.kernel.org/r/20250827113245.52629-2-puranjay@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
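The trampoline described above is not part of this file's diff (the diffstat below is limited to bpf_jit_comp.c). As a rough illustration of the custom convention, a minimal sketch of such an AAPCS64 bridge might look like the following, assuming the arm64 JIT's usual register map (BPF R0 in x7, R1-R5 in x0-x4, the BPF frame pointer in x25, BPF_REG_AX in x9); the stack layout and exact code here are illustrative, not the upstream implementation.

```asm
/* Illustrative sketch only; not the exact upstream trampoline. */
#include <linux/linkage.h>

SYM_FUNC_START(arch_bpf_timed_may_goto)
	/* Frame record plus room for the BPF caller-saved registers. */
	stp	x29, x30, [sp, #-64]!
	mov	x29, sp

	/* Preserve BPF R0-R5 (x7, x0-x4); the C callee may clobber them. */
	stp	x7, x0, [sp, #16]
	stp	x1, x2, [sp, #32]
	stp	x3, x4, [sp, #48]

	/*
	 * BPF_REG_AX (x9) carries the stack offset of the count/timestamp
	 * slot; turn it into a pointer relative to the BPF frame pointer
	 * (x25) and pass it as the first AAPCS64 argument in x0.
	 */
	add	x0, x9, x25
	bl	bpf_check_timed_may_goto

	/* The JITed code expects the refreshed count back in BPF_REG_AX. */
	mov	x9, x0

	/* Restore BPF R0-R5 and the frame record. */
	ldp	x7, x0, [sp, #16]
	ldp	x1, x2, [sp, #32]
	ldp	x3, x4, [sp, #48]
	ldp	x29, x30, [sp], #64
	ret
SYM_FUNC_END(arch_bpf_timed_may_goto)
```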
Diffstat (limited to 'arch/arm64/net/bpf_jit_comp.c')
| -rw-r--r-- | arch/arm64/net/bpf_jit_comp.c | 13 |
1 file changed, 12 insertions, 1 deletion
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 52ffe115a8c4..a98b8132479a 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1558,7 +1558,13 @@ emit_cond_jmp:
 		if (ret < 0)
 			return ret;
 		emit_call(func_addr, ctx);
-		emit(A64_MOV(1, r0, A64_R(0)), ctx);
+		/*
+		 * Call to arch_bpf_timed_may_goto() is emitted by the
+		 * verifier and called with custom calling convention with
+		 * first argument and return value in BPF_REG_AX (x9).
+		 */
+		if (func_addr != (u64)arch_bpf_timed_may_goto)
+			emit(A64_MOV(1, r0, A64_R(0)), ctx);
 		break;
 	}
 	/* tail call */
@@ -3038,6 +3044,11 @@ bool bpf_jit_bypass_spec_v4(void)
 	return true;
 }
 
+bool bpf_jit_supports_timed_may_goto(void)
+{
+	return true;
+}
+
 bool bpf_jit_inlines_helper_call(s32 imm)
 {
 	switch (imm) {