path: root/arch/x86/kvm
Age  Commit message  Author
5 days  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull KVM updates from Paolo Bonzini:
 "ARM:
   - Support for userspace handling of synchronous external aborts (SEAs), allowing the VMM to potentially handle the abort in a non-fatal manner
   - Large rework of the VGIC's list register handling with the goal of supporting more active/pending IRQs than available list registers in hardware. In addition, the VGIC now supports EOImode==1 style deactivations for IRQs which may occur on a separate vCPU than the one that acked the IRQ
   - Support for FEAT_XNX (user / privileged execute permissions) and FEAT_HAF (hardware update to the Access Flag) in the software page table walkers and shadow MMU
   - Allow page table destruction to reschedule, fixing long need_resched latencies observed when destroying a large VM
   - Minor fixes to KVM and selftests

 LoongArch:
   - Get VM PMU capability from HW GCFG register
   - Add AVEC basic support
   - Use 64-bit register definition for EIOINTC
   - Add KVM timer test cases for tools/selftests

 RISC-V:
   - SBI message passing (MPXY) support for KVM guest
   - Give a new, more specific error subcode for the case when in-kernel AIA virtualization fails to allocate IMSIC VS-file
   - Support KVM_DIRTY_LOG_INITIALLY_SET, enabling dirty log gradually in small chunks
   - Fix guest page fault within HLV* instructions
   - Flush VS-stage TLB after VCPU migration for Andes cores

 s390:
   - Always allocate ESCA (Extended System Control Area), instead of starting with the basic SCA and converting to ESCA with the addition of the 65th vCPU. The price is an increased number of exits (and worse performance) on z10 and earlier processors; ESCA was introduced by z114/z196 in 2010
   - VIRT_XFER_TO_GUEST_WORK support
   - Operation exception forwarding support
   - Cleanups

 x86:
   - Skip the costly "zap all SPTEs" on an MMIO generation wrap if MMIO SPTE caching is disabled, as there can't be any relevant SPTEs to zap
   - Relocate a misplaced export
   - Fix an async #PF bug where KVM would clear the completion queue when the guest transitioned in and out of paging mode, e.g. when handling an SMI and then returning to paged mode via RSM
   - Leave KVM's user-return notifier registered even when disabling virtualization, as long as kvm.ko is loaded. On reboot/shutdown, keeping the notifier registered is OK; the kernel does not use the MSRs and the callback will run cleanly and restore host MSRs if the CPU manages to return to userspace before the system goes down
   - Use the checked version of {get,put}_user()
   - Fix a long-lurking bug where KVM's lack of catch-up logic for periodic APIC timers can result in a hard lockup in the host
   - Revert the periodic kvmclock sync logic now that KVM doesn't use a clocksource that's subject to NTP corrections
   - Clean up KVM's handling of MMIO Stale Data and L1TF, and bury the latter behind CONFIG_CPU_MITIGATIONS
   - Context switch XCR0, XSS, and PKRU outside of the entry/exit fast path; the only reason they were handled in the fast path was to paper over a bug in the core #MC code, and that has long since been fixed
   - Add emulator support for AVX MOV instructions, to play nice with emulated devices whose guest drivers like to access PCI BARs with large multi-byte instructions

 x86 (AMD):
   - Fix a few missing "VMCB dirty" bugs
   - Fix the worst of KVM's lack of EFER.LMSLE emulation
   - Add AVIC support for addressing 4k vCPUs in x2AVIC mode
   - Fix incorrect handling of selective CR0 writes when checking intercepts during emulation of L2 instructions
   - Fix a currently-benign bug where KVM would clobber SPEC_CTRL[63:32] on VMRUN and #VMEXIT
   - Fix a bug where KVM would corrupt the guest code stream when re-injecting a soft interrupt if the guest patched the underlying code after the VM-Exit, e.g. when Linux patches code with a temporary INT3
   - Add KVM_X86_SNP_POLICY_BITS to advertise supported SNP policy bits to userspace, and extend KVM "support" to all policy bits that don't require any actual support from KVM

 x86 (Intel):
   - Use the root role from kvm_mmu_page to construct EPTPs instead of the current vCPU state, partly as worthwhile cleanup, but mostly to pave the way for tracking per-root TLB flushes, and elide EPT flushes on pCPU migration if the root is clean from a previous flush
   - Add a few missing nested consistency checks
   - Rip out support for doing "early" consistency checks via hardware as the functionality hasn't been used in years and is no longer useful in general; replace it with an off-by-default module param to WARN if hardware fails a check that KVM does not perform
   - Fix a currently-benign bug where KVM would drop the guest's SPEC_CTRL[63:32] on VM-Enter
   - Misc cleanups
   - Overhaul the TDX code to address systemic races where KVM (acting on behalf of userspace) could inadvertently trigger lock contention in the TDX-Module; KVM was either working around these in weird, ugly ways, or was simply oblivious to them (though even Yan's devilish selftests could only break individual VMs, not the host kernel)
   - Fix a bug where KVM could corrupt a vCPU's cpu_list when freeing a TDX vCPU, if creating said vCPU failed partway through
   - Fix a few sparse warnings (bad annotation, 0 != NULL)
   - Use struct_size() to simplify copying TDX capabilities to userspace
   - Fix a bug where TDX would effectively corrupt user-return MSR values if the TDX Module rejects VP.ENTER and thus doesn't clobber host MSRs as expected

 Selftests:
   - Fix a math goof in mmu_stress_test when running on a single-CPU system/VM
   - Forcefully override ARCH from x86_64 to x86 to play nice with specifying ARCH=x86_64 on the command line
   - Extend a bunch of nested VMX tests to validate nested SVM as well
   - Add support for LA57 in the core VM_MODE_xxx macro, and add a test to verify KVM can save/restore nested VMX state when L1 is using 5-level paging, but L2 is not
   - Clean up the guest paging code in anticipation of sharing the core logic for nested EPT and nested NPT

 guest_memfd:
   - Add NUMA mempolicy support for guest_memfd, and clean up a variety of rough edges in guest_memfd along the way
   - Define a CLASS to automatically handle get+put when grabbing a guest_memfd from a memslot to make it harder to leak references
   - Enhance KVM selftests to make it easier to develop and debug selftests like those added for guest_memfd NUMA support, e.g. where test and/or KVM bugs often result in hard-to-debug SIGBUS errors
   - Misc cleanups

 Generic:
   - Use the recently-added WQ_PERCPU when creating the per-CPU workqueue for irqfd cleanup
   - Fix a goof in the dirty ring documentation
   - Fix choice of target for directed yield across different calls to kvm_vcpu_on_spin(); the function was always starting from the first vCPU instead of continuing the round-robin search"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (260 commits)
  KVM: arm64: at: Update AF on software walk only if VM has FEAT_HAFDBS
  KVM: arm64: at: Use correct HA bit in TCR_EL2 when regime is EL2
  KVM: arm64: Document KVM_PGTABLE_PROT_{UX,PX}
  KVM: arm64: Fix spelling mistake "Unexpeced" -> "Unexpected"
  KVM: arm64: Add break to default case in kvm_pgtable_stage2_pte_prot()
  KVM: arm64: Add endian casting to kvm_swap_s[12]_desc()
  KVM: arm64: Fix compilation when CONFIG_ARM64_USE_LSE_ATOMICS=n
  KVM: arm64: selftests: Add test for AT emulation
  KVM: arm64: nv: Expose hardware access flag management to NV guests
  KVM: arm64: nv: Implement HW access flag management in stage-2 SW PTW
  KVM: arm64: Implement HW access flag management in stage-1 SW PTW
  KVM: arm64: Propagate PTW errors up to AT emulation
  KVM: arm64: Add helper for swapping guest descriptor
  KVM: arm64: nv: Use pgtable definitions in stage-2 walk
  KVM: arm64: Handle endianness in read helper for emulated PTW
  KVM: arm64: nv: Stop passing vCPU through void ptr in S2 PTW
  KVM: arm64: Call helper for reading descriptors directly
  KVM: arm64: nv: Advertise support for FEAT_XNX
  KVM: arm64: Teach ptdump about FEAT_XNX permissions
  KVM: s390: Use generic VIRT_XFER_TO_GUEST_WORK functions
  ...
8 days  Merge tag 'x86_cpu_for_6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 CPU feature updates from Dave Hansen:
 "The biggest thing of note here is Linear Address Space Separation (LASS). It represents the first time I can think of that the upper=>kernel/lower=>user address space convention is actually recognized by the hardware on x86. It ensures that userspace cannot even get the hardware to _start_ page walks for the kernel address space. This, of course, is a really nice generic side-channel defense.
 This is really only a down payment on LASS support. There are still some details to work out in its interaction with EFI calls and vsyscall emulation. For now, LASS is disabled if either of those features is compiled in (which is almost always the case).
 There's also one straggler commit in here which converts an under-utilized AMD CPU feature leaf into a generic Linux-defined leaf so more features can be packed in there.
 Summary:
  - Enable Linear Address Space Separation (LASS)
  - Change X86_FEATURE leaf 17 from an AMD leaf to Linux-defined"
* tag 'x86_cpu_for_6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpu: Enable LASS during CPU initialization
  selftests/x86: Update the negative vsyscall tests to expect a #GP
  x86/traps: Communicate a LASS violation in #GP message
  x86/kexec: Disable LASS during relocate kernel
  x86/alternatives: Disable LASS when patching kernel code
  x86/asm: Introduce inline memcpy and memset
  x86/cpu: Add an LASS dependency on SMAP
  x86/cpufeatures: Enumerate the LASS feature bits
  x86/cpufeatures: Make X86_FEATURE leaf 17 Linux-specific
2025-11-26  Merge tag 'kvm-x86-svm-6.19' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)
KVM SVM changes for 6.19:
 - Fix a few missing "VMCB dirty" bugs.
 - Fix the worst of KVM's lack of EFER.LMSLE emulation.
 - Add AVIC support for addressing 4k vCPUs in x2AVIC mode.
 - Fix incorrect handling of selective CR0 writes when checking intercepts during emulation of L2 instructions.
 - Fix a currently-benign bug where KVM would clobber SPEC_CTRL[63:32] on VMRUN and #VMEXIT.
 - Fix a bug where KVM would corrupt the guest code stream when re-injecting a soft interrupt if the guest patched the underlying code after the VM-Exit, e.g. when Linux patches code with a temporary INT3.
 - Add KVM_X86_SNP_POLICY_BITS to advertise supported SNP policy bits to userspace, and extend KVM "support" to all policy bits that don't require any actual support from KVM.
2025-11-26  Merge tag 'kvm-x86-vmx-6.19' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)
KVM VMX changes for 6.19:
 - Use the root role from kvm_mmu_page to construct EPTPs instead of the current vCPU state, partly as worthwhile cleanup, but mostly to pave the way for tracking per-root TLB flushes so that KVM can elide EPT flushes on pCPU migration if KVM has flushed the root at least once.
 - Add a few missing nested consistency checks.
 - Rip out support for doing "early" consistency checks via hardware as the functionality hasn't been used in years and is no longer useful in general, and replace it with an off-by-default module param to detect missed consistency checks (i.e. WARN if hardware finds a check that KVM does not).
 - Fix a currently-benign bug where KVM would drop the guest's SPEC_CTRL[63:32] on VM-Enter.
 - Misc cleanups.
2025-11-26  Merge tag 'kvm-x86-tdx-6.19' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)
KVM TDX changes for 6.19:
 - Overhaul the TDX code to address systemic races where KVM (acting on behalf of userspace) could inadvertently trigger lock contention in the TDX-Module, which KVM was either working around in weird, ugly ways, or was simply oblivious to (as proven by Yan tripping several KVM_BUG_ON()s with clever selftests).
 - Fix a bug where KVM could corrupt a vCPU's cpu_list when freeing a vCPU if creating said vCPU failed partway through.
 - Fix a few sparse warnings (bad annotation, 0 != NULL).
 - Use struct_size() to simplify copying capabilities to userspace.
2025-11-26  Merge tag 'kvm-x86-mmu-6.19' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)
KVM x86 MMU changes for 6.19:
 - Skip the costly "zap all SPTEs" on an MMIO generation wrap if MMIO SPTE caching is disabled, as there can't be any relevant SPTEs to zap.
 - Relocate a misplaced export.
2025-11-26  Merge tag 'kvm-x86-misc-6.19' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)
KVM x86 misc changes for 6.19:
 - Fix an async #PF bug where KVM would clear the completion queue when the guest transitioned in and out of paging mode, e.g. when handling an SMI and then returning to paged mode via RSM.
 - Fix a bug where TDX would effectively corrupt user-return MSR values if the TDX Module rejects VP.ENTER and thus doesn't clobber host MSRs as expected.
 - Leave the user-return notifier used to restore MSRs registered when disabling virtualization, and instead pin kvm.ko. Restoring host MSRs via IPI callback is either pointless (clean reboot) or dangerous (forced reboot) since KVM has no idea what code it's interrupting.
 - Use the checked version of {get,put}_user(), as Linus wants to kill the unchecked variants off, and the checked versions are measurably faster on modern CPUs because the unchecked versions contain an LFENCE.
 - Fix a long-lurking bug where KVM's lack of catch-up logic for periodic APIC timers can result in a hard lockup in the host.
 - Revert the periodic kvmclock sync logic now that KVM doesn't use a clocksource that's subject to NTP corrections.
 - Clean up KVM's handling of MMIO Stale Data and L1TF, and bury the latter behind CONFIG_CPU_MITIGATIONS.
 - Context switch XCR0, XSS, and PKRU outside of the entry/exit fastpath, as the only reason they were handled in the fastpath was to paper over a bug in the core #MC code that has long since been fixed.
 - Add emulator support for AVX MOV instructions to play nice with emulated devices whose PCI BARs guest drivers like to access with large multi-byte instructions.
2025-11-20  KVM: x86: Remove unused declaration kvm_mmu_may_ignore_guest_pat()  (Yue Haibing)
Commit 3fee4837ef40 ("KVM: x86: remove shadow_memtype_mask") removed the functions but left this declaration behind. Signed-off-by: Yue Haibing <yuehaibing@huawei.com> Link: https://patch.msgid.link/20251120120930.1448593-1-yuehaibing@huawei.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-20  KVM: x86: Enable support for emulating AVX MOV instructions  (Paolo Bonzini)
Some users of KVM have emulated devices (typically added to private forks of QEMU) that execute AVX instructions on PCI BARs. Whenever the guest OS tries to do that, an illegal instruction exception or emulation failure is triggered. Add the Avx flag to move instructions:
  - (66) 0f 10    - MOVUPS/MOVUPD from memory
  - (66) 0f 11    - MOVUPS/MOVUPD to memory
  - 66 0f 6f      - MOVDQA from memory
  - 66 0f 7f      - MOVDQA to memory
  - f3 0f 6f      - MOVDQU from memory
  - f3 0f 7f      - MOVDQU to memory
  - (66) 0f 28    - MOVAPS/MOVAPD from memory
  - (66) 0f 29    - MOVAPS/MOVAPD to memory
  - (66) 0f 2b    - MOVNTPS/MOVNTPD to memory
  - 66 0f e7      - MOVNTDQ to memory
  - 66 0f 38 2a   - MOVNTDQA from memory
Co-developed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Keith Busch <kbusch@kernel.org> Link: https://lore.kernel.org/kvm/BD108C42-0382-4B17-B601-434A4BD038E7@fb.com/T/ Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://patch.msgid.link/20251114003633.60689-11-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Add emulator support for decoding VEX prefixes  (Paolo Bonzini)
After all the changes done in the previous patches, the only thing left to support AVX MOV instructions is to expand the VEX prefix into the appropriate REX, 66/F3/F2 and map prefixes. Three-operand instructions are not supported. The Avx bit in this case is not cleared, in fact it is used as the sign that the instruction does support VEX encoding. Until it is added to any instruction, however, the only functional change is to change some not-implemented instructions to #UD if they correspond to a VEX prefix with an invalid map. Co-developed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://patch.msgid.link/20251114003633.60689-10-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
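To make the expansion concrete, here is a self-contained sketch of VEX prefix decoding per the Intel SDM; the struct and function names are illustrative, not KVM's (kernel types u8/bool assumed from linux/types.h, b2 is ignored for the two-byte form):

  struct vex_fields {
          u8 rex_bits;    /* REX-equivalent W/R/X/B, active high */
          u8 pp;          /* implied prefix: 0 = none, 1 = 66, 2 = F3, 3 = F2 */
          u8 map;         /* opcode map: 1 = 0F, 2 = 0F 38, 3 = 0F 3A */
          u8 vvvv;        /* second source register; three-operand forms are rejected */
          bool l;         /* vector length: 0 = 128-bit, 1 = 256-bit */
  };

  static int decode_vex(u8 b0, u8 b1, u8 b2, struct vex_fields *v)
  {
          if (b0 == 0xc5) {                       /* two-byte VEX */
                  v->rex_bits = (~b1 >> 5) & 0x4; /* inverted R; W = X = B = 0 */
                  v->map  = 1;                    /* the 0F map is implied */
                  v->vvvv = (~b1 >> 3) & 0xf;
                  v->l    = b1 & 0x4;
                  v->pp   = b1 & 0x3;
                  return 0;
          }
          /* three-byte VEX (0xc4): inverted R/X/B in byte 1, W in byte 2 */
          v->rex_bits = ((~b1 >> 5) & 0x7) | ((b2 >> 4) & 0x8);
          v->map  = b1 & 0x1f;
          v->vvvv = (~b2 >> 3) & 0xf;
          v->l    = b2 & 0x4;
          v->pp   = b2 & 0x3;
          return (v->map >= 1 && v->map <= 3) ? 0 : -EINVAL; /* invalid map => #UD */
  }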
2025-11-19  KVM: x86: Refactor REX prefix handling in instruction emulation  (Chang S. Bae)
Restructure how to represent and interpret REX fields, preparing for handling of both REX2 and VEX. REX uses the upper four bits of a single byte as a fixed identifier, with the lower four bits containing the data. VEX and REX2 extend this so that the first byte identifies the prefix and the rest encode additional bits; and while VEX only has the same four data bits as REX, eight zero bits are a valid value for the data bits of REX2. So, stop storing the REX byte as-is. Instead, store only the low bits of the REX prefix and track separately whether a REX-like prefix was used. No functional changes intended. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Message-ID: <20251110180131.28264-11-chang.seok.bae@intel.com> [Extracted from APX series; removed bitfields and REX2-specific default. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://patch.msgid.link/20251114003633.60689-9-pbonzini@redhat.com [sean: name REX_{BXRW} enum "rex_bits"] Signed-off-by: Sean Christopherson <seanjc@google.com>
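For reference, the architectural REX layout being unpacked here, as a small sketch (BIT() per the kernel's bits.h; variable names illustrative):

  /* REX = 0100WRXB: fixed 0100 identifier, four data bits. */
  #define REX_B   BIT(0)  /* extends ModRM.rm / SIB.base / opcode reg */
  #define REX_X   BIT(1)  /* extends SIB.index */
  #define REX_R   BIT(2)  /* extends ModRM.reg */
  #define REX_W   BIT(3)  /* 64-bit operand size */

  bool rex_seen = (prefix & 0xf0) == 0x40;     /* "a REX-like prefix was used"... */
  u8 rex_bits = rex_seen ? prefix & 0x0f : 0;  /* ...tracked separately from its data */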
2025-11-19  KVM: x86: Add AVX support to the emulator's register fetch and writeback  (Paolo Bonzini)
Prepare struct operand for hosting AVX registers. Remove the existing, incomplete code that placed the Avx flag in the operand alignment field, and repurpose the name for a separate bit that indicates:
  - after decode, whether an instruction supports the VEX prefix;
  - before writeback, that the instruction did have the VEX prefix and therefore 1) it can have op_bytes == 32; 2) it should clear the high bytes of XMM registers.
Right now the bit will never be set and the patch has no intended functional change. However, this is actually more vexing than the decoder changes themselves, and therefore worth separating. Co-developed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://patch.msgid.link/20251114003633.60689-8-pbonzini@redhat.com [sean: guard ymm[8-15] accesses with #ifdef CONFIG_X86_64] Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Add x86_emulate_ops.get_xcr() callback  (Paolo Bonzini)
This will be necessary in order to check whether AVX is enabled. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chang S. Bae <chang.seok.bae@intel.com> Link: https://patch.msgid.link/20251114003633.60689-7-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Share emulator's common register decoding code  (Paolo Bonzini)
Remove all duplicate handling of register operands, including picking the right register class and fetching it, by extracting a new function that can be used for both REG and MODRM operands. Centralize setting op->orig_val = op->val in fetch_register_operand() as well. No functional change intended. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chang S. Bae <chang.seok.bae@intel.com> Link: https://patch.msgid.link/20251114003633.60689-6-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Move op_prefix to struct x86_emulate_ctxt (from x86_decode_insn())  (Paolo Bonzini)
VEX decode will need to set it based on the "pp" bits, so make it a field in the struct rather than a local variable. No functional change intended. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chang S. Bae <chang.seok.bae@intel.com> Link: https://patch.msgid.link/20251114003633.60689-5-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Improve formatting of the emulator's flags table  (Paolo Bonzini)
Align the comments on the right side a little better and explicitly list the bits used by multi-bit fields. No functional change intended. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chang S. Bae <chang.seok.bae@intel.com> Link: https://patch.msgid.link/20251114003633.60689-4-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Move Src2Shift up one bit (use bits 36:32 for Src2 in the emulator)  (Paolo Bonzini)
An irresistible microoptimization (changing accesses to Src2 to just an AND :)) that also frees a bit for AVX in the low flags word. This makes it closer to SSE since both of them can access XMM registers, pointlessly shaving another clock cycle or two (maybe). No functional change intended. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chang S. Bae <chang.seok.bae@intel.com> Link: https://patch.msgid.link/20251114003633.60689-3-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Add support for emulating MOVNTDQA  (Paolo Bonzini)
MOVNTDQA is a simple MOV instruction; in fact, it has the same characteristics as 0F E7 (MOVNTDQ) other than the aligned-address requirement. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://patch.msgid.link/20251114003633.60689-2-pbonzini@redhat.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Add a helper to dedup loading guest/host XCR0 and XSS  (Binbin Wu)
Add and use a helper, kvm_load_xfeatures(), to dedup the code that loads guest/host xfeatures. Opportunistically return early if X86_CR4_OSXSAVE is not set to reduce indentation. No functional change intended. Suggested-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Chao Gao <chao.gao@intel.com> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://patch.msgid.link/20251110050539.3398759-1-binbin.wu@linux.intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Load guest/host PKRU outside of the fastpath run loop  (Sean Christopherson)
Move KVM's swapping of PKRU outside of the fastpath loop, as there is no KVM code anywhere in the fastpath that accesses guest/userspace memory, i.e. that can consume protection keys. As documented by commit 1be0e61c1f25 ("KVM, pkeys: save/restore PKRU when guest/host switches"), KVM just needs to ensure the host's PKRU is loaded when KVM (or the kernel at-large) may access userspace memory. And at the time of commit 1be0e61c1f25, KVM didn't have a fastpath, and PKU was strictly contained to VMX, i.e. there was no reason to swap PKRU outside of vmx_vcpu_run(). Over time, the "need" to swap PKRU close to VM-Enter was likely falsely solidified by the association with XFEATUREs in commit 37486135d3a7 ("KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move it to x86.c"), and XFEATURE swapping was in turn moved close to VM-Enter/VM-Exit as a KVM hack-a-fix for an #MC handler bug by commit 1811d979c716 ("x86/kvm: move kvm_load/put_guest_xcr0 into atomic context"). Deferring the PKRU loads shaves ~40 cycles off the fastpath for Intel, and ~60 cycles for AMD. E.g. using INVD in KVM-Unit-Test's vmexit.c, with extra hacks to enable CR4.PKE and PKRU=(-1u & ~0x3), latency numbers for AMD Turin go from ~1560 => ~1500, and for Intel Emerald Rapids, go from ~810 => ~770. Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Reviewed-by: Jon Kohler <jon@nutanix.com> Link: https://patch.msgid.link/20251118222328.2265758-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-19  KVM: x86: Load guest/host XCR0 and XSS outside of the fastpath run loop  (Sean Christopherson)
Move KVM's swapping of XFEATURE masks, i.e. XCR0 and XSS, out of the fastpath loop now that the guts of the #MC handler runs in task context, i.e. won't invoke schedule() with preemption disabled and clobber state (or crash the kernel) due to trying to context switch XSTATE with a mix of host and guest state. For all intents and purposes, this reverts commit 1811d979c716 ("x86/kvm: move kvm_load/put_guest_xcr0 into atomic context"), which papered over an egregious bug/flaw in the #MC handler where it would do schedule() even though IRQs are disabled. E.g. the call stack from the commit:
  kvm_load_guest_xcr0
  ...
  kvm_x86_ops->run(vcpu)
    vmx_vcpu_run
      vmx_complete_atomic_exit
        kvm_machine_check
          do_machine_check
            do_memory_failure
              memory_failure
                lock_page
Commit 1811d979c716 "fixed" the immediate issue of XRSTORS exploding, but completely ignored that scheduling out a vCPU task while IRQs and preemption are disabled is wildly broken. Thankfully, commit 5567d11c21a1 ("x86/mce: Send #MC singal from task work") (somewhat incidentally?) fixed that flaw by pushing the meat of the work to the user-return path, i.e. to task context. KVM has also hardened itself against #MC goofs by moving #MC forwarding to kvm_x86_ops.handle_exit_irqoff(), i.e. out of the fastpath. While that's by no means a robust fix, restoring as much state as possible before handling the #MC will hopefully provide some measure of protection in the event that #MC handling goes off the rails again. Note, KVM always intercepts XCR0 writes for vCPUs without protected state, e.g. there's no risk of consuming a stale XCR0 when determining if a PKRU update is needed; kvm_load_host_xfeatures() only reads, and never writes, vcpu->arch.xcr0. Deferring the XCR0 and XSS loads shaves ~300 cycles off the fastpath for Intel, and ~500 cycles for AMD. E.g. using INVD in KVM-Unit-Test's vmexit.c, with an extra hack to enable CR4.OSXSAVE, latency numbers for AMD Turin go from ~2000 => ~1500, and for Intel Emerald Rapids, go from ~1300 => ~1000. Cc: Jon Kohler <jon@nutanix.com> Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Reviewed-by: Jon Kohler <jon@nutanix.com> Link: https://patch.msgid.link/20251118222328.2265758-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
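A schematic of the resulting ordering in the vCPU run path, per this commit and the PKRU commit above (helper names are condensed/hypothetical; EXIT_FASTPATH_REENTER_GUEST is KVM's existing signal for fastpath re-entry):

  kvm_load_guest_xfeatures(vcpu);         /* XCR0 and XSS, once per entry */
  kvm_load_guest_pkru(vcpu);
  for (;;) {
          exit_fastpath = vcpu_run(vcpu); /* vendor VM-Enter/VM-Exit */
          if (exit_fastpath != EXIT_FASTPATH_REENTER_GUEST)
                  break;                  /* full exit: leave the inner loop */
  }
  kvm_load_host_xfeatures(vcpu);          /* host state back before #MC handling */
  kvm_load_host_pkru(vcpu);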
2025-11-19  KVM: VMX: Handle #MCs on VM-Enter/TD-Enter outside of the fastpath  (Sean Christopherson)
Handle Machine Checks (#MC) that happen on VM-Enter (VMX or TDX) outside of KVM's fastpath so that as much host state as possible is re-loaded before invoking the kernel's #MC handler. The only requirement is that KVM invokes the #MC handler before enabling IRQs (and even that could _probably_ be relaxed to handling #MCs before enabling preemption). Waiting to handle #MCs until "more" host state is loaded hardens KVM against flaws in the #MC handler, which has historically been quite brittle. E.g. prior to commit 5567d11c21a1 ("x86/mce: Send #MC singal from task work"), the #MC code could trigger a schedule() with IRQs and preemption disabled. That led to a KVM hack-a-fix in commit 1811d979c716 ("x86/kvm: move kvm_load/put_guest_xcr0 into atomic context"). Note, vmx_handle_exit_irqoff() is common to VMX and TDX guests. Cc: Tony Lindgren <tony.lindgren@linux.intel.com> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Jon Kohler <jon@nutanix.com> Reviewed-by: Tony Lindgren <tony.lindgren@linux.intel.com> Link: https://patch.msgid.link/20251118222328.2265758-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  KVM: SVM: Handle #MCs in guest outside of fastpath  (Sean Christopherson)
Handle Machine Checks (#MC) that happen in the guest (by forwarding them to the host) outside of KVM's fastpath so that as much host state as possible is re-loaded before invoking the kernel's #MC handler. The only requirement is that KVM invokes the #MC handler before enabling IRQs (and even that could _probably_ be relaxed to handling #MCs before enabling preemption). Waiting to handle #MCs until "more" host state is loaded hardens KVM against flaws in the #MC handler, which has historically been quite brittle. E.g. prior to commit 5567d11c21a1 ("x86/mce: Send #MC singal from task work"), the #MC code could trigger a schedule() with IRQs and preemption disabled. That led to a KVM hack-a-fix in commit 1811d979c716 ("x86/kvm: move kvm_load/put_guest_xcr0 into atomic context"). Note, except for #MCs on VM-Enter, VMX already handles #MCs outside of the fastpath. Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Reviewed-by: Jon Kohler <jon@nutanix.com> Link: https://patch.msgid.link/20251118222328.2265758-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  KVM: x86: Unify L1TF flushing under per-CPU variable  (Brendan Jackman)
Currently the need to flush L1D for L1TF is tracked by two bits: one per-CPU and one per-vCPU. The per-vCPU bit is always set when the vCPU shows up on a core, so there is no interesting state that's truly per-vCPU. Indeed, this is a requirement, since L1D is a part of the physical CPU. So simplify this by combining the two bits. The vCPU bit was being written from preemption-enabled regions. To play nice with those cases, wrap all calls from KVM and use a raw write so that requesting a flush with preemption enabled doesn't trigger what would effectively be DEBUG_PREEMPT false positives. Preemption doesn't need to be disabled, as kvm_arch_vcpu_load() will mark the new CPU as needing a flush if the vCPU task is migrated, or if userspace runs the vCPU on a different task. Signed-off-by: Brendan Jackman <jackmanb@google.com> [sean: put raw write in KVM instead of in a hardirq.h variant] Link: https://patch.msgid.link/20251113233746.1703361-10-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
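A minimal sketch of the KVM-side wrapper (function name hypothetical); raw_cpu_write() skips the preemption-safety check that __this_cpu_write() would perform, and irq_stat.kvm_cpu_l1tf_flush_l1d is the kernel's existing per-CPU flag:

  static inline void kvm_request_l1tf_flush_l1d(void)
  {
          /*
           * Safe to call with preemption enabled: kvm_arch_vcpu_load()
           * re-marks the destination CPU if the vCPU task migrates.
           */
          raw_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
  }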
2025-11-18  KVM: VMX: Disable L1TF L1 data cache flush if CONFIG_CPU_MITIGATIONS=n  (Sean Christopherson)
Disable support for flushing the L1 data cache to mitigate L1TF if CPU mitigations are disabled for the entire kernel. KVM's mitigation of L1TF is in no way special enough to justify ignoring CONFIG_CPU_MITIGATIONS=n. Deliberately use CPU_MITIGATIONS instead of the more precise MITIGATION_L1TF, as MITIGATION_L1TF only controls the default behavior, i.e. CONFIG_MITIGATION_L1TF=n doesn't completely disable L1TF mitigations in the kernel. Keep the vmentry_l1d_flush module param to avoid breaking existing setups, and leverage the .set path to alert the user to the fact that vmentry_l1d_flush will be ignored. Don't bother validating the incoming value; if an admin misconfigures vmentry_l1d_flush, the fact that the bad configuration won't be detected when running with CONFIG_CPU_MITIGATIONS=n is likely the least of their worries. Reviewed-by: Brendan Jackman <jackmanb@google.com> Link: https://patch.msgid.link/20251113233746.1703361-9-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
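A sketch of what the .set path could look like under CONFIG_CPU_MITIGATIONS=n (handler body illustrative; the vmentry_l1d_flush module param and its kernel_param_ops already exist in vmx.c):

  static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
  {
          /*
           * Keep the param so existing setups don't break, but alert the
           * admin that it is ignored; don't bother validating the value.
           */
          pr_warn_once("vmentry_l1d_flush ignored: CONFIG_CPU_MITIGATIONS=n\n");
          return 0;
  }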
2025-11-18  KVM: VMX: Bundle all L1 data cache flush mitigation code together  (Sean Christopherson)
Move vmx_l1d_flush(), vmx_cleanup_l1d_flush(), and the vmentry_l1d_flush param code up in vmx.c so that all of the L1 data cache flushing code is bundled together. This will allow conditioning the mitigation code on CONFIG_CPU_MITIGATIONS=y with minimal #ifdefs. No functional change intended. Reviewed-by: Brendan Jackman <jackmanb@google.com> Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Link: https://patch.msgid.link/20251113233746.1703361-8-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  x86/bugs: KVM: Move VM_CLEAR_CPU_BUFFERS into SVM as SVM_CLEAR_CPU_BUFFERS  (Sean Christopherson)
Now that VMX encodes its own sequence for clearing CPU buffers, move VM_CLEAR_CPU_BUFFERS into SVM to minimize the chances of KVM botching a mitigation in the future, e.g. using VM_CLEAR_CPU_BUFFERS instead of checking multiple mitigation flags. No functional change intended. Reviewed-by: Brendan Jackman <jackmanb@google.com> Acked-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://patch.msgid.link/20251113233746.1703361-7-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  KVM: VMX: Handle MMIO Stale Data in VM-Enter assembly via ALTERNATIVES_2  (Sean Christopherson)
Rework the handling of the MMIO Stale Data mitigation to clear CPU buffers immediately prior to VM-Enter, i.e. in the same location that KVM emits a VERW for unconditional (at runtime) clearing. Co-locating the code and using a single ALTERNATIVES_2 makes it more obvious how VMX mitigates the various vulnerabilities. Deliberately order the alternatives as:
  0. Do nothing
  1. Clear if vCPU can access MMIO
  2. Clear always
since the last alternative wins in ALTERNATIVES_2(), i.e. so that KVM will honor the strictest mitigation (always clear CPU buffers) if multiple mitigations are selected. E.g. even if the kernel chooses to mitigate MMIO Stale Data via X86_FEATURE_CLEAR_CPU_BUF_VM_MMIO, another mitigation may enable X86_FEATURE_CLEAR_CPU_BUF_VM, and that other thing needs to win. Note, decoupling the MMIO mitigation from the L1TF mitigation also fixes a mostly-benign flaw where KVM wouldn't do any clearing/flushing if the L1TF mitigation is configured to conditionally flush the L1D, and the MMIO mitigation but not any other "clear CPU buffers" mitigation is enabled. For that specific scenario, KVM would skip clearing CPU buffers for the MMIO mitigation even though the kernel requested a clear on every VM-Enter. Note #2, the flaw goes back to the introduction of the MDS mitigation. The MDS mitigation was inadvertently fixed by commit 43fb862de8f6 ("KVM/VMX: Move VERW closer to VMentry for MDS mitigation"), but previous kernels that flush CPU buffers in vmx_vcpu_enter_exit() are affected (though it's unlikely the flaw is meaningfully exploitable even on older kernels). Fixes: 650b68a0622f ("x86/kvm/vmx: Add MDS protection when L1D Flush is not active") Suggested-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Reviewed-by: Brendan Jackman <jackmanb@google.com> Link: https://patch.msgid.link/20251113233746.1703361-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  x86/bugs: Use an x86 feature to track the MMIO Stale Data mitigation  (Sean Christopherson)
Convert the MMIO Stale Data mitigation tracking from a static branch into an x86 feature flag so that it can be used via ALTERNATIVE_2 in KVM. No functional change intended. Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Reviewed-by: Brendan Jackman <jackmanb@google.com> Link: https://patch.msgid.link/20251113233746.1703361-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well  (Pawan Gupta)
The TSA mitigation, d8010d4ba43e ("x86/bugs: Add a Transient Scheduler Attacks mitigation"), introduced VM_CLEAR_CPU_BUFFERS for guests on AMD CPUs. Currently, Intel uses CLEAR_CPU_BUFFERS for guests, which has a much broader scope (kernel->user also). Make mitigations on Intel consistent with TSA. This will help handle guest-only mitigations better in the future. Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> [sean: make CLEAR_CPU_BUF_VM mutually exclusive with the MMIO mitigation] Acked-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Brendan Jackman <jackmanb@google.com> Link: https://patch.msgid.link/20251113233746.1703361-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  KVM: VMX: Use on-stack copy of @flags in __vmx_vcpu_run()  (Sean Christopherson)
When testing for VMLAUNCH vs. VMRESUME, use the copy of @flags from the stack instead of first moving it to EBX, and then propagating VMX_RUN_VMRESUME to RFLAGS.CF (because RBX is clobbered with the guest value prior to the conditional branch to VMLAUNCH). Stashing information in RFLAGS is gross, especially with the writer and reader being bifurcated by yet more gnarly assembly code. Opportunistically drop the SHIFT macros as they existed purely to allow the VM-Enter flow to use Bit Test. Suggested-by: Borislav Petkov <bp@alien8.de> Acked-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Brendan Jackman <jackmanb@google.com> Link: https://patch.msgid.link/20251113233746.1703361-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  KVM: x86: Allocate/free user_return_msrs at kvm.ko (un)loading time  (Chao Gao)
Move user_return_msrs allocation/freeing from vendor module (kvm-intel.ko and kvm-amd.ko) (un)load time to kvm.ko (un)load time, to make it less risky to access user_return_msrs in kvm.ko. Tying the lifetime of user_return_msrs to vendor modules makes every access to user_return_msrs prone to use-after-free issues as vendor modules may be unloaded at any time. Opportunistically turn the per-CPU variable into full structs, as there's no practical difference between statically allocating the memory and allocating it unconditionally during module_init(). Zero out kvm_nr_uret_msrs on vendor module exit to further minimize the chances of consuming stale data, and WARN on vendor module load if KVM thinks there are existing user-return MSRs. Note! The user-return MSRs also need to be "destroyed" if ops->hardware_setup() fails, as both SVM and VMX expect common KVM to clean up (because common code, not vendor code, is responsible for kvm_nr_uret_msrs). Signed-off-by: Chao Gao <chao.gao@intel.com> Co-developed-by: Sean Christopherson <seanjc@google.com> Link: https://patch.msgid.link/20251108013601.902918-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-18  KVM: SVM: Fix redundant updates of LBR MSR intercepts  (Yosry Ahmed)
Don't update the LBR MSR intercept bitmaps if they're already up-to-date, as unconditionally updating the intercepts forces KVM to recalculate the MSR bitmaps for vmcb02 on every nested VMRUN. The redundant updates are functionally okay; however, they neuter an optimization in Hyper-V nested virtualization enlightenments, and this manifests as a selftest failure. In particular, Hyper-V lets L1 mark "nested enlightenments" as clean, i.e. tell KVM that no changes were made to the MSR bitmap since the last VMRUN. The hyperv_svm_test KVM selftest intentionally changes the MSR bitmap "without telling KVM about it" to verify that KVM honors the clean hint; the test fails because KVM notices the changed bitmap anyway:
  ==== Test Assertion Failure ====
  x86/hyperv_svm_test.c:120: vmcb->control.exit_code == 0x081
  pid=193558 tid=193558 errno=4 - Interrupted system call
     1 0x0000000000411361: assert_on_unhandled_exception at processor.c:659
     2 0x0000000000406186: _vcpu_run at kvm_util.c:1699
     3  (inlined by) vcpu_run at kvm_util.c:1710
     4 0x0000000000401f2a: main at hyperv_svm_test.c:175
     5 0x000000000041d0d3: __libc_start_call_main at libc-start.o:?
     6 0x000000000041f27c: __libc_start_main_impl at ??:?
     7 0x00000000004021a0: _start at ??:?
  vmcb->control.exit_code == SVM_EXIT_VMMCALL
Do *not* fix this by skipping svm_hv_vmcb_dirty_nested_enlightenments() when svm_set_intercept_for_msr() performs a no-op change: changes to the L0 MSR interception bitmap are only triggered by full CPUID updates and MSR filter updates, both of which should be rare. Changing svm_set_intercept_for_msr() risks hiding unintended pessimizations like this one, and is actually more complex than this change. Fixes: fbe5e5f030c2 ("KVM: nSVM: Always recalculate LBR MSR intercepts in svm_update_lbrv()") Cc: stable@vger.kernel.org Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev> Link: https://patch.msgid.link/20251112013017.1836863-1-yosry.ahmed@linux.dev [Rewritten commit message based on mailing list discussion. - Paolo] Reviewed-by: Sean Christopherson <seanjc@google.com> Tested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-11-17  KVM: x86: remove comment about ntp correction sync for  (Lei Chen)
Since the vCPU's local clock is no longer affected by NTP, remove the comment about NTP correction sync from kvm_gen_kvmclock_update(). Signed-off-by: Lei Chen <lei.chen@smartx.com> Link: https://patch.msgid.link/20250819152027.1687487-4-lei.chen@smartx.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-17Revert "x86: kvm: rate-limit global clock updates"Lei Chen
This reverts commit 7e44e4495a398eb553ce561f29f9148f40a3448f. Commit 7e44e4495a39 ("x86: kvm: rate-limit global clock updates") intended to use kvmclock_update_work to sync NTP correction across all vCPUs' kvmclocks, building on commit 0061d53daf26f ("KVM: x86: limit difference between kvmclock updates"). Since kvmclock has been switched to the monotonic raw clock, this commit can be reverted. Signed-off-by: Lei Chen <lei.chen@smartx.com> Link: https://patch.msgid.link/20250819152027.1687487-3-lei.chen@smartx.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-17Revert "x86: kvm: introduce periodic global clock updates"Lei Chen
This reverts commit 332967a3eac06f6379283cf155c84fe7cd0537c2. Commit 332967a3eac0 ("x86: kvm: introduce periodic global clock updates") introduced a 300s-interval work item to sync NTP corrections across all vCPUs. Since commit 53fafdbb8b21 ("KVM: x86: switch KVMCLOCK base to monotonic raw clock") switched kvmclock to the monotonic raw clock, NTP no longer needs to be taken into consideration. Signed-off-by: Lei Chen <lei.chen@smartx.com> Link: https://patch.msgid.link/20250819152027.1687487-2-lei.chen@smartx.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-17  KVM: x86: Grab lapic_timer in a local variable to cleanup periodic code  (Sean Christopherson)
Stash apic->lapic_timer in a local "ktimer" variable in advance_periodic_target_expiration() to eliminate a few unaligned wraps, and to make the code easier to read overall. No functional change intended. Link: https://patch.msgid.link/20251113205114.1647493-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-17  KVM: x86: Fix VM hard lockup after prolonged inactivity with periodic HV timer  (fuqiang wang)
When advancing the target expiration for the guest's APIC timer in periodic mode, set the expiration to "now" if the target expiration is in the past (similar to what is done in update_target_expiration()). Blindly adding the period to the previous target expiration can result in KVM generating a practically unbounded number of hrtimer IRQs due to programming an expired timer over and over. In extreme scenarios, e.g. if userspace pauses/suspends a VM for an extended duration, this can even cause hard lockups in the host.

Currently, the bug only affects Intel CPUs when using the hypervisor timer (HV timer), a.k.a. the VMX preemption timer. Unlike the software timer, a.k.a. hrtimer, which KVM keeps running even on exits to userspace, the HV timer only runs while the guest is active. As a result, if the vCPU does not run for an extended duration, there will be a huge gap between the target expiration and the current time the vCPU resumes running. Because the target expiration is incremented by only one period on each timer expiration, this leads to a series of timer expirations occurring rapidly after the vCPU/VM resumes.

More critically, when the vCPU first triggers a periodic HV timer expiration after resuming, advancing the expiration by only one period will result in a target expiration in the past. As a result, the delta may be calculated as a negative value. When the delta is converted into an absolute value (tscdeadline is an unsigned u64), the resulting value can overflow what the HV timer is capable of programming. I.e. the large value will exceed the VMX Preemption Timer's maximum bit width of cpu_preemption_timer_multi + 32, and thus cause KVM to switch from the HV timer to the software timer (hrtimers).

After switching to the software timer, periodic timer expiration callbacks may be executed consecutively within a single clock interrupt handler, because hrtimers honors KVM's request for an expiration in the past and immediately re-invokes KVM's callback after reprogramming. And because the interrupt handler runs with IRQs disabled, restarting KVM's hrtimer over and over until the target expiration is advanced to "now" can result in a hard lockup. E.g. the following hard lockup was triggered in the host when running a Windows VM (only relevant because it used the APIC timer in periodic mode) after resuming the VM from a long suspend (in the host):

  NMI watchdog: Watchdog detected hard LOCKUP on cpu 45
  ...
  RIP: 0010:advance_periodic_target_expiration+0x4d/0x80 [kvm]
  ...
  RSP: 0018:ff4f88f5d98d8ef0 EFLAGS: 00000046
  RAX: fff0103f91be678e RBX: fff0103f91be678e RCX: 00843a7d9e127bcc
  RDX: 0000000000000002 RSI: 0052ca4003697505 RDI: ff440d5bfbdbd500
  RBP: ff440d5956f99200 R08: ff2ff2a42deb6a84 R09: 000000000002a6c0
  R10: 0122d794016332b3 R11: 0000000000000000 R12: ff440db1af39cfc0
  R13: ff440db1af39cfc0 R14: ffffffffc0d4a560 R15: ff440db1af39d0f8
  FS:  00007f04a6ffd700(0000) GS:ff440db1af380000(0000) knlGS:000000e38a3b8000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 000000d5651feff8 CR3: 000000684e038002 CR4: 0000000000773ee0
  PKRU: 55555554
  Call Trace:
   <IRQ>
   apic_timer_fn+0x31/0x50 [kvm]
   __hrtimer_run_queues+0x100/0x280
   hrtimer_interrupt+0x100/0x210
   ? ttwu_do_wakeup+0x19/0x160
   smp_apic_timer_interrupt+0x6a/0x130
   apic_timer_interrupt+0xf/0x20
   </IRQ>

Moreover, if the suspend duration of the virtual machine is not long enough to trigger a hard lockup in this scenario, since commit 98c25ead5eda ("KVM: VMX: Move preemption timer <=> hrtimer dance to common x86"), KVM will continue using the software timer until the guest reprograms the APIC timer in some way. Since the periodic timer does not require frequent APIC timer register programming, the guest may continue to use the software timer in perpetuity. Fixes: d8f2f498d9ed ("x86/kvm: fix LAPIC timer drift when guest uses periodic mode") Cc: stable@vger.kernel.org Signed-off-by: fuqiang wang <fuqiang.wng@gmail.com> [sean: massage comments and changelog] Link: https://patch.msgid.link/20251113205114.1647493-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
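A minimal sketch of the clamping described above (the real helper takes the APIC, not the timer; struct kvm_timer's period/target_expiration fields follow lapic.h, and the ktime_* helpers are the kernel's):

  static void advance_periodic_target_expiration(struct kvm_timer *ktimer)
  {
          ktime_t now = ktime_get();

          ktimer->target_expiration =
                  ktime_add_ns(ktimer->target_expiration, ktimer->period);

          /* Don't replay every period missed while the VM was paused. */
          if (ktime_before(ktimer->target_expiration, now))
                  ktimer->target_expiration = now;
  }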
2025-11-17  KVM: x86: Explicitly set new periodic hrtimer expiration in apic_timer_fn()  (fuqiang wang)
When restarting an hrtimer to emulate the guest's APIC timer in periodic mode, explicitly set the expiration using the target expiration computed by advance_periodic_target_expiration() instead of adding the period to the existing timer. This will allow making adjustments to the expiration, e.g. to deal with expirations far in the past, without having to implement the same logic in both advance_periodic_target_expiration() and apic_timer_fn(). Cc: stable@vger.kernel.org Signed-off-by: fuqiang wang <fuqiang.wng@gmail.com> [sean: split to separate patch, write changelog] Link: https://patch.msgid.link/20251113205114.1647493-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-17  KVM: x86: WARN if hrtimer callback for periodic APIC timer fires with period=0  (Sean Christopherson)
WARN and don't restart the hrtimer if KVM's callback runs with the guest's APIC timer in periodic mode but with a period of '0', as not advancing the hrtimer's deadline would put the CPU into an infinite loop of hrtimer events. Observing a period of '0' should be impossible, even when the hrtimer is running on a different CPU than the vCPU, as KVM is supposed to cancel the hrtimer before changing (or zeroing) the period, e.g. when switching from periodic to one-shot. Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20251113205114.1647493-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
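Combined with the two related commits above (explicit expiration, clamping), the callback ends up shaped roughly like this sketch (error paths and tracing omitted; lapic_is_periodic() and apic_timer_expired() per lapic.c):

  static enum hrtimer_restart apic_timer_fn(struct hrtimer *data)
  {
          struct kvm_timer *ktimer = container_of(data, struct kvm_timer, timer);
          struct kvm_lapic *apic = container_of(ktimer, struct kvm_lapic, lapic_timer);

          apic_timer_expired(apic, true);

          if (lapic_is_periodic(apic)) {
                  /* A zero period would re-arm an expired timer forever. */
                  if (WARN_ON_ONCE(!ktimer->period))
                          return HRTIMER_NORESTART;

                  advance_periodic_target_expiration(apic);
                  /* Explicit expiration, not "previous expiry + period". */
                  hrtimer_set_expires(&ktimer->timer, ktimer->target_expiration);
                  return HRTIMER_RESTART;
          }
          return HRTIMER_NORESTART;
  }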
2025-11-17KVM: x86: Use "checked" versions of get_user() and put_user()Sean Christopherson
Use the normal, checked versions of get_user() and put_user() instead of the double-underscore versions that omit range checks, as the checked versions are actually measurably faster on modern CPUs (12%+ on Intel, 25%+ on AMD). The performance hit on the unchecked versions is almost entirely due to the added LFENCE on CPUs where LFENCE is serializing (which is effectively all modern CPUs), which was added by commit 304ec1b05031 ("x86/uaccess: Use __uaccess_begin_nospec() and uaccess_try_nospec"). The small optimizations done by commit b19b74bc99b1 ("x86/mm: Rework address range check in get_user() and put_user()") likely shave a few cycles off, but the bulk of the extra latency comes from the LFENCE. Don't bother trying to open-code an equivalent for performance reasons, as the loss of inlining (e.g. see commit ea6f043fc984 ("x86: Make __get_user() generate an out-of-line call")) is largely a non-factor (ignoring setups where RET is something entirely different). As measured across tens of millions of calls of guest PTE reads in FNAME(walk_addr_generic):

                __get_user()  get_user()  open-coded  open-coded, no LFENCE
  Intel (EMR)       75.1         67.6        75.3            65.5
  AMD (Turin)       68.1         51.1        67.5            49.3

Note, Hyper-V MSR emulation is not a remotely hot path, but convert it anyways for consistency, and because there is a general desire to remove __{get,put}_user() entirely. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Closes: https://lore.kernel.org/all/CAHk-=wimh_3jM9Xe8Zx0rpuf8CPDu6DkRCGb44azk0Sz5yqSnw@mail.gmail.com Cc: Borislav Petkov <bp@alien8.de> Link: https://patch.msgid.link/20251106210206.221558-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
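An illustrative before/after from the guest PTE walk mentioned above (context abbreviated; ptep_user follows FNAME(walk_addr_generic)'s naming):

  -	if (unlikely(__get_user(pte, ptep_user)))
  +	if (unlikely(get_user(pte, ptep_user)))
  		goto error;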
2025-11-14  KVM: SEV: Add known supported SEV-SNP policy bits  (Tom Lendacky)
Add to the known supported SEV-SNP policy bits the bits that don't require any implementation support from KVM in order to be used successfully. At this time, this includes:
  - CXL_ALLOW
  - MEM_AES_256_XTS
  - RAPL_DIS
  - CIPHERTEXT_HIDING_DRAM
  - PAGE_SWAP_DISABLE
Arguably, RAPL_DIS and CIPHERTEXT_HIDING_DRAM require KVM and the CCP driver to enable these features in order for the setting of the policy bits to be successfully handled. But, a guest owner may not wish their guest to run on a system that doesn't provide support for those features, so allowing the specification of these bits accomplishes that. Whether or not the bit is supported by SEV firmware, a system that doesn't support these features will either fail during the KVM validation of supported policy bits before issuing the LAUNCH_START or fail during the LAUNCH_START. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://patch.msgid.link/ec040de9864099cf592a97c201dc4cc110b2b0cf.1761593632.git.thomas.lendacky@amd.com Signed-off-by: Sean Christopherson <seanjc@google.com>
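A sketch of how the published mask might be composed (macro names are assumptions for illustration; the real definitions live in include/linux/psp-sev.h per the consolidation commit below):

  #define SNP_SUPPORTED_POLICY_BITS                                  \
          (SNP_POLICY_CXL_ALLOW | SNP_POLICY_MEM_AES_256_XTS |       \
           SNP_POLICY_RAPL_DIS | SNP_POLICY_CIPHERTEXT_HIDING_DRAM | \
           SNP_POLICY_PAGE_SWAP_DISABLE)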
2025-11-14  KVM: SEV: Publish supported SEV-SNP policy bits  (Tom Lendacky)
Define the set of policy bits that KVM currently knows as not requiring any implementation support within KVM. Provide this value to userspace via the KVM_GET_DEVICE_ATTR ioctl. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://patch.msgid.link/c596f7529518f3f826a57970029451d9385949e5.1761593632.git.thomas.lendacky@amd.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-14  KVM: SEV: Consolidate the SEV policy bits in a single header file  (Tom Lendacky)
Consolidate SEV policy bit definitions into a single file. Use include/linux/psp-sev.h to hold the definitions and remove the current definitions from the arch/x86/kvm/svm/sev.c and arch/x86/include/asm/svm.h files. No functional change intended. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://patch.msgid.link/d9639f88a0b521a1a67aeac77cc609fdea1f90bd.1761593632.git.thomas.lendacky@amd.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-13  KVM: SVM: Don't skip unrelated instruction if INT3/INTO is replaced  (Omar Sandoval)
When re-injecting a soft interrupt from an INT3, INTO, or (select) INTn instruction, discard the exception and retry the instruction if the code stream is changed (e.g. by a different vCPU) between when the CPU executes the instruction and when KVM decodes the instruction to get the next RIP. As effectively predicted by commit 6ef88d6e36c2 ("KVM: SVM: Re-inject INT3/INTO instead of retrying the instruction"), failure to verify that the correct INTn instruction was decoded can effectively clobber guest state due to decoding the wrong instruction and thus specifying the wrong next RIP. The bug most often manifests as "Oops: int3" panics on static branch checks in Linux guests. Enabling or disabling a static branch in Linux uses the kernel's "text poke" code patching mechanism. To modify code while other CPUs may be executing that code, Linux (temporarily) replaces the first byte of the original instruction with an int3 (opcode 0xcc), then patches in the new code stream except for the first byte, and finally replaces the int3 with the first byte of the new code stream. If a CPU hits the int3, i.e. executes the code while it's being modified, then the guest kernel must look up the RIP to determine how to handle the #BP, e.g. by emulating the new instruction. If the RIP is incorrect, then this lookup fails and the guest kernel panics. The bug reproduces almost instantly by hacking the guest kernel to repeatedly check a static branch[1] while running a drgn script[2] on the host to constantly swap out the memory containing the guest's TSS. [1]: https://gist.github.com/osandov/44d17c51c28c0ac998ea0334edf90b5a [2]: https://gist.github.com/osandov/10e45e45afa29b11e0c7209247afc00b Fixes: 6ef88d6e36c2 ("KVM: SVM: Re-inject INT3/INTO instead of retrying the instruction") Cc: stable@vger.kernel.org Co-developed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Link: https://patch.msgid.link/1cc6dcdf36e3add7ee7c8d90ad58414eeb6c3d34.1762278762.git.osandov@fb.com Signed-off-by: Sean Christopherson <seanjc@google.com>
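For context, a heavily simplified schematic of the guest-side patching sequence described above (Linux's real code uses the text_poke_bp() machinery, with IPIs/serialization between the steps):

  const u8 int3 = 0xcc;

  text_poke(addr, &int3, 1);              /* 1: INT3 traps concurrent execution */
  text_poke(addr + 1, new + 1, len - 1);  /* 2: patch everything but the first byte */
  text_poke(addr, new, 1);                /* 3: restore the first byte */

Between steps 1 and 3, the code stream KVM would (re)decode differs from what the vCPU actually executed, which is why the decoded next RIP can't be trusted without re-checking the instruction bytes.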
2025-11-13  KVM: TDX: Use struct_size to simplify tdx_get_capabilities()  (Sean Christopherson)
Use struct_size() instead of manually calculating the number of bytes to allocate for 'caps', including the nested flexible array, and copy all of 'caps' to user space with a single copy_to_user() call (thanks to the full size being provided by struct_size()). Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Tested-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Link: https://patch.msgid.link/20251017213914.167301-1-thorsten.blum@linux.dev [sean: separate from swap of get_user() vs. kzalloc() ordering] Signed-off-by: Sean Christopherson <seanjc@google.com>
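The struct_size() pattern being applied, sketched with assumed member names for the trailing flexible array:

  caps = kzalloc(struct_size(caps, cpuid.entries, nr_entries), GFP_KERNEL);
  if (!caps)
          return -ENOMEM;
  /* ... fill in caps, including caps->cpuid.entries[] ... */
  if (copy_to_user(user_caps, caps, struct_size(caps, cpuid.entries, nr_entries)))
          ret = -EFAULT;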
2025-11-13  KVM: TDX: Check size of user's kvm_tdx_capabilities array before allocating  (Thorsten Blum)
When userspace is getting TDX capabilities, retrieve and check the number of user entries before allocating kernel scratch space to avoid having to unwind the allocation if get_user() fails or if 'user_caps' is too small to fit 'caps'. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Tested-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Link: https://patch.msgid.link/20251017213914.167301-1-thorsten.blum@linux.dev [sean: split to separate patch] Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-13  KVM: TDX: Fix sparse warnings from using 0 for NULL  (Dave Hansen)
Stop using 0 for NULL. sparse moans: ... arch/x86/kvm/vmx/tdx.c:859:38: warning: Using plain integer as NULL pointer for several TDX pointer initializations. While I love a good ptr=0 now and then, it's good to have quiet sparse builds. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Fixes: a50f673f25e0 ("KVM: TDX: Do TDX specific vcpu initialization") Fixes: 8d032b683c29 ("KVM: TDX: create/destroy VM structure") Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Xiaoyao Li <xiaoyao.li@intel.com> Cc: Sean Christopherson <seanjc@google.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: x86@kernel.org Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Kirill A. Shutemov" <kas@kernel.org> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Acked-by: Kiryl Shutsemau <kas@kernel.org> Link: https://patch.msgid.link/20251103234439.DC8227E4@davehans-spike.ostc.intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2025-11-13  KVM: TDX: Remove __user annotation from kernel pointer  (Dave Hansen)
Separate __user pointer variable declaration from kernel one. There are two 'kvm_cpuid2' pointers involved here. There's an "input" side: 'td_cpuid' which is a normal kernel pointer and an 'output' side. The output here is userspace and there is an attempt at properly annotating the variable with __user: struct kvm_cpuid2 __user *output, *td_cpuid; But, alas, this is wrong. The __user in the definition applies to both 'output' and 'td_cpuid'. Sparse notices the address space mismatch and will complain about it. Fix it up by completely separating the two definitions so that it is obviously correct without even having to know what the C syntax rules even are. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Fixes: 488808e682e7 ("KVM: x86: Introduce KVM_TDX_GET_CPUID") Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Xiaoyao Li <xiaoyao.li@intel.com> Cc: Sean Christopherson <seanjc@google.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: x86@kernel.org Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Kirill A. Shutemov" <kas@kernel.org> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Acked-by: Kiryl Shutsemau <kas@kernel.org> Link: https://patch.msgid.link/20251103234437.A0532420@davehans-spike.ostc.intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
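The shape of the fix, straight from the description above:

  /* Before: __user binds to the type, i.e. to both declarators. */
  struct kvm_cpuid2 __user *output, *td_cpuid;

  /* After: one declaration per address space. */
  struct kvm_cpuid2 __user *output;
  struct kvm_cpuid2 *td_cpuid;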
2025-11-13  KVM: TDX: Take MMU lock around tdh_vp_init()  (Rick Edgecombe)
Take MMU lock around tdh_vp_init() in KVM_TDX_INIT_VCPU to prevent meeting contention during retries in some no-fail MMU paths. The TDX module takes various try-locks internally, which can cause SEAMCALLs to return an error code when contention is met. Dealing with an error in some of the MMU paths that make SEAMCALLs is not straightforward, so KVM takes steps to ensure that these will meet no contention during a single BUSY error retry. The whole scheme relies on KVM to take appropriate steps to avoid making any SEAMCALLs that could contend while the retry is happening. Unfortunately, there is a case where contention could be met if userspace does something unusual. Specifically, hole punching a gmem fd while initializing the TD vCPU. The impact would be triggering a KVM_BUG_ON(). The resource being contended is called the "TDR resource" in TDX docs parlance. tdh_vp_init() can take this resource as exclusive if the 'version' passed is 1, which happens to be the version the kernel passes. The various MMU operations (tdh_mem_range_block(), tdh_mem_track() and tdh_mem_page_remove()) take it as shared. There isn't a KVM lock that maps conceptually and in a lock-order-friendly way to the TDR lock. So to minimize infrastructure, just take MMU lock around tdh_vp_init(). This makes the operations we care about mutually exclusive. Since the other operations are under a write mmu_lock, the code could just take the lock for read, however this is weirdly inverted from the actual underlying resource being contended. Since this is covering an edge case that shouldn't be hit in normal usage, be a little less weird and take the mmu_lock for write around the call. Fixes: 02ab57707bdb ("KVM: TDX: Implement hooks to propagate changes of TDP MMU mirror page table") Reported-by: Yan Zhao <yan.y.zhao@intel.com> Suggested-by: Yan Zhao <yan.y.zhao@intel.com> Link: https://patch.msgid.link/20251028002824.1470939-1-rick.p.edgecombe@intel.com Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> [sean: tweak comment and capture PUNCH_HOLE interaction] Signed-off-by: Sean Christopherson <seanjc@google.com>
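A sketch of the resulting serialization (struct field names assumed):

  write_lock(&vcpu->kvm->mmu_lock);
  err = tdh_vp_init(&tdx->vp, vcpu_rcx, vcpu->vcpu_id);
  write_unlock(&vcpu->kvm->mmu_lock);

Taking mmu_lock for write makes KVM_TDX_INIT_VCPU mutually exclusive with the MMU paths whose SEAMCALLs take the TDR resource as shared.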