author    Marc Zyngier <maz@kernel.org>  2025-05-14 11:34:56 +0100
committer Marc Zyngier <maz@kernel.org>  2025-05-19 08:01:19 +0100
commit    4ffa72ad8f37e73bbb6c0baa88557bcb4fd39929 (patch)
tree      7c0608041e2f3615359de433e81c347356e53ca3 /arch/arm64/include/asm/kvm_nested.h
parent    73e1b621b25d7d45cebf9940cad3b377d3b94791 (diff)
KVM: arm64: nv: Add S1 TLB invalidation primitive for VNCR_EL2
A TLBI by VA for S1 must take effect on our pseudo-TLB for VNCR and potentially knock the fixmap mapping. Even worse, that TLBI must be able to work cross-vcpu.

For that, we track on a per-VM basis whether any VNCR is mapped, using an atomic counter. Whenever a TLBI S1E2 occurs and this counter is non-zero, we take the long road all the way back to the core code. There, we iterate over all vcpus and check whether this particular invalidation has any damaging effect. If it does, we nuke the pseudo-TLB and the corresponding fixmap.

Yes, this is costly.

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20250514103501.2225951-14-maz@kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
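For context, the cross-vcpu path the message describes could look roughly like the sketch below. This is a minimal illustration only: the per-VM counter (vncr_map_count), the per-vcpu pseudo-TLB entry (vncr_tlb), and the helpers matches_tlbi_range() and clear_fixmap_slot() are hypothetical stand-ins, not the actual kernel implementation.

#include <linux/kvm_host.h>

/*
 * Sketch of kvm_handle_s1e2_tlbi(): fast path when no VNCR page is
 * mapped anywhere in the VM, slow path iterating over every vcpu.
 * Field and helper names below are assumptions for illustration.
 */
void kvm_handle_s1e2_tlbi(struct kvm_vcpu *vcpu, u32 inst, u64 val)
{
	struct kvm *kvm = vcpu->kvm;
	struct kvm_vcpu *tmp;
	unsigned long i;

	/* Fast path: no vcpu in this VM has a VNCR page mapped. */
	if (!atomic_read(&kvm->arch.vncr_map_count))
		return;

	/* Slow path: check every vcpu's pseudo-TLB against this TLBI. */
	kvm_for_each_vcpu(i, tmp, kvm) {
		struct vncr_tlb *vt = tmp->arch.vncr_tlb;

		if (!vt || !vt->valid)
			continue;

		/* Skip if the invalidation misses the cached translation. */
		if (!matches_tlbi_range(vt, inst, val))
			continue;

		/* Nuke the pseudo-TLB entry and its fixmap mapping. */
		vt->valid = false;
		clear_fixmap_slot(tmp);
		atomic_dec(&kvm->arch.vncr_map_count);
	}
}

The point of the atomic counter is that the common case (no VNCR mapped anywhere) costs a single atomic read on the TLBI trap; only when the counter is non-zero does the guest pay for the full vcpu iteration.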
Diffstat (limited to 'arch/arm64/include/asm/kvm_nested.h')
-rw-r--r--  arch/arm64/include/asm/kvm_nested.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index ea50cad1a6a2..0bd07ea068a1 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -336,6 +336,7 @@ int __kvm_translate_va(struct kvm_vcpu *vcpu, struct s1_walk_info *wi,
/* VNCR management */
int kvm_vcpu_allocate_vncr_tlb(struct kvm_vcpu *vcpu);
int kvm_handle_vncr_abort(struct kvm_vcpu *vcpu);
+void kvm_handle_s1e2_tlbi(struct kvm_vcpu *vcpu, u32 inst, u64 val);
#define vncr_fixmap(c) \
({ \