| author | Thomas Gleixner <tglx@linutronix.de> | 2025-08-13 16:57:55 +0200 |
|---|---|---|
| committer | Andrew Morton <akpm@linux-foundation.org> | 2025-09-13 16:55:01 -0700 |
| commit | ec45783fce52f358c9e8680d2837bc0d477f16ad (patch) | |
| tree | 0ddaac40adf4f46dd899a5b645c81b5b4970fa8a /mm/memcontrol.c | |
| parent | 868ade323e9deff67b8be3e93876596e4d2c71d3 (diff) | |
memcg: optimize exit to user space
memcg uses TIF_NOTIFY_RESUME to handle reclaiming on exit to user space.
TIF_NOTIFY_RESUME is a multiplexing TIF bit, which is utilized by other
entities as well.
This results in an unconditional mem_cgroup_handle_over_high() call for
every invocation of resume_user_mode_work(), which is a pointless exercise
as most of the time there is no reclaim work to do.
Since glibc uses RSEQ, TIF_NOTIFY_RESUME is raised quite frequently, and
the empty calls show up in exit-path profiling.
Optimize this by doing a quick check of the reclaim condition before
invoking mem_cgroup_handle_over_high().
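The diff below is limited to mm/memcontrol.c and only shows the slow path being renamed, so the cheap fast-path check itself is not visible here. A minimal sketch of what such an inline wrapper could look like, assuming the check is keyed on current->memcg_nr_pages_over_high as in the removed hunk:

```c
/*
 * Sketch only: the real wrapper lives outside mm/memcontrol.c and is not
 * part of this diffstat-limited view. It illustrates the idea of a cheap
 * inline check before calling the out-of-line slow path.
 */
void __mem_cgroup_handle_over_high(gfp_t gfp_mask);

static inline void mem_cgroup_handle_over_high(gfp_t gfp_mask)
{
	/* Fast path: nothing charged over the high limit, no reclaim needed. */
	if (likely(!current->memcg_nr_pages_over_high))
		return;
	__mem_cgroup_handle_over_high(gfp_mask);
}
```

With a wrapper of this shape, the common case on exit to user space reduces to a single inlined check of a task-local counter, and only tasks that actually exceeded the high limit pay for the out-of-line reclaim call.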
[akpm@linux-foundation.org: remove now-unneeded test of memcg_nr_pages_over_high==0, per Shakeel]
Link: https://lkml.kernel.org/r/87tt2b6zgs.ffs@tglx
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/memcontrol.c')
| -rw-r--r-- | mm/memcontrol.c | 7 |
|---|---|---|

1 file changed, 2 insertions, 5 deletions
```diff
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8dd7fbed5a94..9712a751690f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2203,7 +2203,7 @@ static unsigned long calculate_high_delay(struct mem_cgroup *memcg,
  * try_charge() (context permitting), as well as from the userland
  * return path where reclaim is always able to block.
  */
-void mem_cgroup_handle_over_high(gfp_t gfp_mask)
+void __mem_cgroup_handle_over_high(gfp_t gfp_mask)
 {
 	unsigned long penalty_jiffies;
 	unsigned long pflags;
@@ -2213,9 +2213,6 @@ void mem_cgroup_handle_over_high(gfp_t gfp_mask)
 	struct mem_cgroup *memcg;
 	bool in_retry = false;
 
-	if (likely(!nr_pages))
-		return;
-
 	memcg = get_mem_cgroup_from_mm(current->mm);
 	current->memcg_nr_pages_over_high = 0;
 
@@ -2486,7 +2483,7 @@ done_restock:
 	if (current->memcg_nr_pages_over_high > MEMCG_CHARGE_BATCH &&
 	    !(current->flags & PF_MEMALLOC) &&
 	    gfpflags_allow_blocking(gfp_mask))
-		mem_cgroup_handle_over_high(gfp_mask);
+		__mem_cgroup_handle_over_high(gfp_mask);
 
 	return 0;
 }
```