| author | Uladzislau Rezki (Sony) <urezki@gmail.com> | 2025-09-17 20:59:06 +0200 |
|---|---|---|
| committer | Andrew Morton <akpm@linux-foundation.org> | 2025-09-23 14:14:16 -0700 |
| commit | 7ef5268a907534c4e6373b0d3fe45e0b3d95bfe2 | |
| tree | aafee40aec9cf1b3c14e32f3098629b349179ae8 /mm/vmalloc.c | |
| parent | 1b00ab48892fe6115618e2c81f9c1891ad0c0a5a | |
mm/vmalloc: move resched point into alloc_vmap_area()
Currently vm_area_alloc_pages() contains two cond_resched() points.
However, the page allocator already has its own reschedule point in the
slow path, so the extra resched calls are not optimal: they only delay
the allocation loops.
The place where CPU time can really be consumed is the VA-space search in
alloc_vmap_area(), especially when the space is heavily fragmented (as with
synthetic stress tests) and the fast path falls back to the slow one.
Move a single cond_resched() there, after dropping free_vmap_area_lock in
the slow path. This keeps fairness where it matters while removing
redundant yields from the page-allocation path.
[akpm@linux-foundation.org: tweak comment grammar]
Link: https://lkml.kernel.org/r/20250917185906.1595454-1-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
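
For orientation, here is a condensed sketch of the slow-path section of
alloc_vmap_area() with this change applied. It is paraphrased from the hunk
below; the surrounding retry loop, the lock acquisition, and the error
handling are omitted, and the first comment is an illustrative summary of
the commit message rather than code from the patch.

```c
	/*
	 * Slow path: the fast lookup has failed, so search the global
	 * free VA-space tree under free_vmap_area_lock.
	 */
	addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
			size, align, vstart, vend);
	spin_unlock(&free_vmap_area_lock);

	/*
	 * This is not a fast path. Check if yielding is needed. This
	 * is the only reschedule point in the vmalloc() path.
	 */
	cond_resched();
```

The two cond_resched() calls in vm_area_alloc_pages() are removed because
the page allocator already yields in its own slow path, so no reschedule
opportunity is lost on the page-allocation side.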
Diffstat (limited to 'mm/vmalloc.c')

| file | mode | lines changed |
|---|---|---|
| mm/vmalloc.c | -rw-r--r-- | 8 |

1 file changed, 6 insertions(+), 2 deletions(-)
```diff
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4249e1e01947..798b2ed21e46 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2057,6 +2057,12 @@ retry:
 		addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
 			size, align, vstart, vend);
 		spin_unlock(&free_vmap_area_lock);
+
+		/*
+		 * This is not a fast path. Check if yielding is needed. This
+		 * is the only reschedule point in the vmalloc() path.
+		 */
+		cond_resched();
 	}
 
 	trace_alloc_vmap_area(addr, size, align, vstart, vend, IS_ERR_VALUE(addr));
@@ -3622,7 +3628,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 						pages + nr_allocated);
 
 				nr_allocated += nr;
-				cond_resched();
 
 				/*
 				 * If zero or pages were obtained partly,
@@ -3664,7 +3669,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		for (i = 0; i < (1U << order); i++)
 			pages[nr_allocated + i] = page + i;
 
-		cond_resched();
 		nr_allocated += 1U << order;
 	}
 
```