| author | Jinjie Ruan <ruanjinjie@huawei.com> | 2025-08-15 11:06:33 +0800 |
|---|---|---|
| committer | Will Deacon <will@kernel.org> | 2025-09-11 15:55:35 +0100 |
| commit | b3cf07851b6c4aa8683557905cd898da9ae8c634 | |
| tree | 97d9d9db69d4a5744dabd286afd93c2932c3fbce /arch/arm64/include/asm/preempt.h | |
| parent | 99eb057ccd675b2f0fc71a362553164c65c349a2 | |
arm64: entry: Switch to generic IRQ entry
Currently, x86, RISC-V and LoongArch use the generic entry code, which
makes maintainers' work easier and the code more elegant. Start converting
arm64 to the generic entry infrastructure from kernel/entry/* by
switching it to generic IRQ entry, which removes 100+ lines of duplicate
code. arm64 will switch completely to generic entry in a later series.
The changes are as follows:
- Remove *enter_from/exit_to_kernel_mode(), and wrap the handlers with the
  generic irqentry_enter/exit(), as their code and functionality are almost
  identical (see the first sketch after this list).
- Define ARCH_EXIT_TO_USER_MODE_WORK and implement
  arch_exit_to_user_mode_work() to check the arm64-specific thread flags
  _TIF_MTE_ASYNC_FAULT and _TIF_FOREIGN_FPSTATE (see the second sketch
  below). With that in place, also remove *enter_from/exit_to_user_mode()
  and wrap with the generic enter_from/exit_to_user_mode(), because they
  are exactly the same.
- Remove arm64_enter/exit_nmi() and use the generic
  irqentry_nmi_enter/exit(), because they are exactly the same; the
  temporary arm64 version of irqentry_state can then also be removed.
- Remove PREEMPT_DYNAMIC code, as generic irqentry_exit_cond_resched()
has the same functionality.
- Implement arch_irqentry_exit_need_resched() using the
  arm64_preempt_schedule_irq() logic, which allows arm64 to perform its
  architecture-specific preemption checks (see the third sketch below).
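
First sketch: a minimal, simplified illustration of the first change, wrapping
an EL1 interrupt handler with the generic irqentry_enter/exit() in place of the
old enter_from/exit_to_kernel_mode(). The shape below is an assumption for
illustration only; the real arm64 handler also deals with NMIs and pseudo-NMI
masking.

    /* Sketch only: simplified EL1 IRQ path using the generic helpers. */
    static void noinstr el1_interrupt(struct pt_regs *regs,
                                      void (*handler)(struct pt_regs *))
    {
            /* Was enter_from_kernel_mode(regs). */
            irqentry_state_t state = irqentry_enter(regs);

            do_interrupt_handler(regs, handler);

            /* Was exit_to_kernel_mode(regs). */
            irqentry_exit(regs, state);
    }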
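Second sketch: the exit-to-user-mode hook, assuming the hook signature used by
the generic entry code and reusing arm64's existing handling of the two flags
from do_notify_resume(); the applied patch may route this through its existing
helpers instead.

    /*
     * Sketch only: let the generic exit-to-user-mode loop also poll the
     * arm64-specific work flags.
     */
    #define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_MTE_ASYNC_FAULT | _TIF_FOREIGN_FPSTATE)

    static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
                                                             unsigned long ti_work)
    {
            if (ti_work & _TIF_MTE_ASYNC_FAULT) {
                    /* Report the asynchronous MTE tag check fault to userspace. */
                    clear_thread_flag(TIF_MTE_ASYNC_FAULT);
                    send_sig_fault(SIGSEGV, SEGV_MTEAERR, (void __user *)NULL,
                                   current);
            }

            if (ti_work & _TIF_FOREIGN_FPSTATE)
                    /* Reload the task's FP/SIMD state before returning. */
                    fpsimd_restore_current_state();
    }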
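Third sketch: the architecture-specific reschedule check, based on the
pre-existing arm64_preempt_schedule_irq() logic; exact details in the applied
patch may differ.

    /*
     * Sketch only: arm64-specific conditions the generic IRQ-exit preemption
     * path should honour.
     */
    static inline bool arch_irqentry_exit_need_resched(void)
    {
            /*
             * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when
             * GIC priority masking is used the GIC irqchip driver will clear
             * DAIF.IF using gic_arch_enable_irqs() for normal IRQs. If
             * anything is set in DAIF we must have handled an NMI, so skip
             * preemption.
             */
            if (system_uses_irq_prio_masking() && read_sysreg(daif))
                    return false;

            /*
             * Preempting a task from an IRQ means we leave copies of PSTATE
             * on the stack. cpufeature's enable calls may modify PSTATE, but
             * resuming from a preemption means those changes could be lost.
             */
            if (!system_capabilities_finalized())
                    return false;

            return true;
    }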
Tested-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>
Suggested-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Diffstat (limited to 'arch/arm64/include/asm/preempt.h')
| -rw-r--r-- | arch/arm64/include/asm/preempt.h | 8 |
1 file changed, 0 insertions, 8 deletions
diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index c2437ea0790f..932ea4b62042 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -2,7 +2,6 @@
 #ifndef __ASM_PREEMPT_H
 #define __ASM_PREEMPT_H
 
-#include <linux/jump_label.h>
 #include <linux/thread_info.h>
 
 #define PREEMPT_NEED_RESCHED	BIT(32)
@@ -85,26 +84,19 @@ static inline bool should_resched(int preempt_offset)
 void preempt_schedule(void);
 void preempt_schedule_notrace(void);
 
-void raw_irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
 
-DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
 void dynamic_preempt_schedule(void);
 #define __preempt_schedule()		dynamic_preempt_schedule()
 void dynamic_preempt_schedule_notrace(void);
 #define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
-void dynamic_irqentry_exit_cond_resched(void);
-#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
 
 #else /* CONFIG_PREEMPT_DYNAMIC */
 
 #define __preempt_schedule()		preempt_schedule()
 #define __preempt_schedule_notrace()	preempt_schedule_notrace()
-#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
 
 #endif /* CONFIG_PREEMPT_DYNAMIC */
-#else /* CONFIG_PREEMPTION */
-#define irqentry_exit_cond_resched()	{}
 #endif /* CONFIG_PREEMPTION */
 
 #endif /* __ASM_PREEMPT_H */