| author | Mark Brown <broonie@kernel.org> | 2020-09-09 16:27:47 +0100 |
|---|---|---|
| committer | Mark Brown <broonie@kernel.org> | 2020-09-09 16:27:47 +0100 |
| commit | 6c557d24fa2622bca5b950dae791d3f0d18ff2d3 | |
| tree | 5ff1ca8c699b0485a6a1c3e3b7cffc0874e7ad56 /arch/x86/mm/fault.c | |
| parent | 4ebf8816e35d63db723d95f8e49d8455be926c36 | |
| parent | 062cf7fc927d2546b58ed128383e5c52f26a00a5 | |
Merge series "opp: Unconditionally call dev_pm_opp_of_remove_table()" from Viresh Kumar <viresh.kumar@linaro.org>:
Hello,
This cleans up some of the user code around calls to
dev_pm_opp_of_remove_table().
All the patches can be picked by respective maintainers directly except
for the last patch, which needs the previous two to get merged first.
These are based on 5.9-rc1.
Rajendra, since most of these changes are related to qcom stuff, it
would be great if you could give them a try. I wasn't able to test them
myself due to lack of hardware.
Ulf, I had to revise the sdhci patch, sorry about that. Please pick this
one.
Diff between V1 and V2 is mentioned in each of the patches separately.
Viresh Kumar (8):
cpufreq: imx6q: Unconditionally call dev_pm_opp_of_remove_table()
drm/lima: Unconditionally call dev_pm_opp_of_remove_table()
drm/msm: Unconditionally call dev_pm_opp_of_remove_table()
mmc: sdhci-msm: Unconditionally call dev_pm_opp_of_remove_table()
spi: spi-geni-qcom: Unconditionally call dev_pm_opp_of_remove_table()
spi: spi-qcom-qspi: Unconditionally call dev_pm_opp_of_remove_table()
tty: serial: qcom_geni_serial: Unconditionally call dev_pm_opp_of_remove_table()
qcom-geni-se: remove has_opp_table
drivers/cpufreq/imx6q-cpufreq.c | 10 ++--------
drivers/gpu/drm/lima/lima_devfreq.c | 6 +-----
drivers/gpu/drm/lima/lima_devfreq.h | 1 -
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 14 +++++---------
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h | 1 -
drivers/gpu/drm/msm/dsi/dsi_host.c | 8 ++------
drivers/mmc/host/sdhci-msm.c | 14 +++++---------
drivers/spi/spi-geni-qcom.c | 13 +++++--------
drivers/spi/spi-qcom-qspi.c | 15 ++++++---------
drivers/tty/serial/qcom_geni_serial.c | 13 +++++--------
include/linux/qcom-geni-se.h | 2 --
11 files changed, 31 insertions(+), 66 deletions(-)
base-commit: f4d51dffc6c01a9e94650d95ce0104964f8ae822
--
2.25.0.rc1.19.g042ed3e048af
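
The cleanup has the same shape in every driver listed above: the series' premise is that dev_pm_opp_of_remove_table() may be called even when the matching dev_pm_opp_of_add_table() call never succeeded, so drivers no longer need to remember whether the table was added (the kind of has_opp_table flag the last patch removes). The sketch below illustrates that before/after pattern under this assumption; the foo_* driver, its struct, and the init/exit helpers are hypothetical names for illustration only, not code from the series.

```c
#include <linux/device.h>
#include <linux/pm_opp.h>

/* Hypothetical driver state, used only to illustrate the pattern. */
struct foo_dev {
	struct device *dev;
	bool has_opp_table;	/* the kind of flag this series removes */
};

/* Before: remember whether the OPP table was actually added ... */
static int foo_init_opp_old(struct foo_dev *foo)
{
	int ret = dev_pm_opp_of_add_table(foo->dev);

	if (!ret)
		foo->has_opp_table = true;

	return 0;
}

/* ... and only remove it when that flag says it was. */
static void foo_exit_opp_old(struct foo_dev *foo)
{
	if (foo->has_opp_table)
		dev_pm_opp_of_remove_table(foo->dev);
}

/*
 * After: dev_pm_opp_of_remove_table() is called unconditionally, so the
 * flag and the conditional disappear from the driver.
 */
static void foo_exit_opp_new(struct foo_dev *foo)
{
	dev_pm_opp_of_remove_table(foo->dev);
}
```

Each patch in the series applies this same transformation to one driver's OPP setup and teardown paths.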
Diffstat (limited to 'arch/x86/mm/fault.c')
| -rw-r--r-- | arch/x86/mm/fault.c | 78 |
1 file changed, 78 insertions(+), 0 deletions(-)
```diff
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 35f1498e9832..6e3e8a124903 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -190,6 +190,53 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 	return pmd_k;
 }
 
+/*
+ * Handle a fault on the vmalloc or module mapping area
+ *
+ * This is needed because there is a race condition between the time
+ * when the vmalloc mapping code updates the PMD to the point in time
+ * where it synchronizes this update with the other page-tables in the
+ * system.
+ *
+ * In this race window another thread/CPU can map an area on the same
+ * PMD, finds it already present and does not synchronize it with the
+ * rest of the system yet. As a result v[mz]alloc might return areas
+ * which are not mapped in every page-table in the system, causing an
+ * unhandled page-fault when they are accessed.
+ */
+static noinline int vmalloc_fault(unsigned long address)
+{
+	unsigned long pgd_paddr;
+	pmd_t *pmd_k;
+	pte_t *pte_k;
+
+	/* Make sure we are in vmalloc area: */
+	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+		return -1;
+
+	/*
+	 * Synchronize this task's top level page-table
+	 * with the 'reference' page table.
+	 *
+	 * Do _not_ use "current" here. We might be inside
+	 * an interrupt in the middle of a task switch..
+	 */
+	pgd_paddr = read_cr3_pa();
+	pmd_k = vmalloc_sync_one(__va(pgd_paddr), address);
+	if (!pmd_k)
+		return -1;
+
+	if (pmd_large(*pmd_k))
+		return 0;
+
+	pte_k = pte_offset_kernel(pmd_k, address);
+	if (!pte_present(*pte_k))
+		return -1;
+
+	return 0;
+}
+NOKPROBE_SYMBOL(vmalloc_fault);
+
 void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
@@ -1110,6 +1157,37 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 	 */
 	WARN_ON_ONCE(hw_error_code & X86_PF_PK);
 
+#ifdef CONFIG_X86_32
+	/*
+	 * We can fault-in kernel-space virtual memory on-demand. The
+	 * 'reference' page table is init_mm.pgd.
+	 *
+	 * NOTE! We MUST NOT take any locks for this case. We may
+	 * be in an interrupt or a critical region, and should
+	 * only copy the information from the master page table,
+	 * nothing more.
+	 *
+	 * Before doing this on-demand faulting, ensure that the
+	 * fault is not any of the following:
+	 * 1. A fault on a PTE with a reserved bit set.
+	 * 2. A fault caused by a user-mode access.  (Do not demand-
+	 *    fault kernel memory due to user-mode accesses).
+	 * 3. A fault caused by a page-level protection violation.
+	 *    (A demand fault would be on a non-present page which
+	 *     would have X86_PF_PROT==0).
+	 *
+	 * This is only needed to close a race condition on x86-32 in
+	 * the vmalloc mapping/unmapping code. See the comment above
+	 * vmalloc_fault() for details. On x86-64 the race does not
+	 * exist as the vmalloc mappings don't need to be synchronized
+	 * there.
+	 */
+	if (!(hw_error_code & (X86_PF_RSVD | X86_PF_USER | X86_PF_PROT))) {
+		if (vmalloc_fault(address) >= 0)
+			return;
+	}
+#endif
+
 	/* Was the fault spurious, caused by lazy TLB invalidation? */
 	if (spurious_kernel_fault(hw_error_code, address))
 		return;
```