| author | Thomas Gleixner <tglx@linutronix.de> | 2025-11-19 18:27:09 +0100 |
|---|---|---|
| committer | Thomas Gleixner <tglx@linutronix.de> | 2025-11-25 19:45:40 +0100 |
| commit | b0c3d51b54f8a4f4c809432d210c0c983d5cd97e | |
| tree | 7c76e8d9f0631969fa9a2833d950b7394fbb4787 /kernel/sched/sched.h | |
| parent | bf070520e398679cd582b3c3e44107bf22c143ba | |
sched/mmcid: Provide precomputed maximal value
Reading mm::mm_users and mm::mm_cid::nr_cpus_allowed every time to compute
the maximal CID value is wasteful, as that value only changes on fork(),
exit() and, potentially, when the affinity changes.
So it can be easily precomputed at those points and provided in mm::mm_cid
for consumption in the hot path.
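For illustration, a minimal sketch of the resulting difference in the hot
path (the helper names below are made up for this example; the field names
match those visible in the diff further down):

```c
/* Old hot path: recompute the bound on every CID allocation */
static inline unsigned int max_cids_recomputed(struct mm_struct *mm)
{
	return min_t(int, READ_ONCE(mm->mm_cid.nr_cpus_allowed),
		     atomic_read(&mm->mm_users));
}

/* New hot path: the bound is precomputed at fork()/exit()/affinity
 * changes and only read here */
static inline unsigned int max_cids_precomputed(struct mm_struct *mm)
{
	return READ_ONCE(mm->mm_cid.max_cids);
}
```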
But there is an issue with using mm::mm_users for accounting: it does not
necessarily reflect the number of user space tasks, because other kernel
code can take temporary references on the MM, which skews the picture.
Solve that by adding a users counter to struct mm_mm_cid, which is modified
by fork() and exit() and used for precomputing under mm_mm_cid::lock.
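A hedged sketch of that accounting scheme (field layout, lock type and
helper names are assumptions for illustration, not the actual patch):

```c
/* Hypothetical layout: a users count plus the cached maximum */
struct mm_mm_cid {
	raw_spinlock_t	lock;		/* assumed lock type */
	unsigned int	users;		/* user space tasks using this MM */
	unsigned int	nr_cpus_allowed;
	unsigned int	max_cids;	/* precomputed bound for the hot path */
};

/* Recompute the cached bound; callers hold mm_mm_cid::lock */
static void mm_cid_recompute_max(struct mm_mm_cid *mc)
{
	WRITE_ONCE(mc->max_cids, min(mc->users, mc->nr_cpus_allowed));
}

/* fork(): one more user space task */
static void mm_cid_add_user(struct mm_mm_cid *mc)
{
	raw_spin_lock(&mc->lock);
	mc->users++;
	mm_cid_recompute_max(mc);
	raw_spin_unlock(&mc->lock);
}

/* exit(): one user space task gone */
static void mm_cid_del_user(struct mm_mm_cid *mc)
{
	raw_spin_lock(&mc->lock);
	mc->users--;
	mm_cid_recompute_max(mc);
	raw_spin_unlock(&mc->lock);
}
```

Temporary kernel references on the MM never touch this users count, so the
precomputed bound tracks only real user space tasks.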
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20251119172549.832764634@linutronix.de
Diffstat (limited to 'kernel/sched/sched.h')
| -rw-r--r-- | kernel/sched/sched.h | 3 |
1 file changed, 1 insertion, 2 deletions
```diff
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 31f2e431db5e..d539fb269957 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3571,7 +3571,7 @@ static inline bool mm_cid_get(struct task_struct *t)
 	struct mm_struct *mm = t->mm;
 	unsigned int max_cids;
 
-	max_cids = min_t(int, READ_ONCE(mm->mm_cid.nr_cpus_allowed), atomic_read(&mm->mm_users));
+	max_cids = READ_ONCE(mm->mm_cid.max_cids);
 
 	/* Try to reuse the last CID of this task */
 	if (__mm_cid_get(t, t->mm_cid.last_cid, max_cids))
@@ -3614,7 +3614,6 @@ static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *n
 }
 
 #else /* !CONFIG_SCHED_MM_CID: */
-static inline void init_sched_mm_cid(struct task_struct *t) { }
 static inline void mm_cid_select(struct task_struct *t) { }
 static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *next) { }
 #endif /* !CONFIG_SCHED_MM_CID */
```