| author | Marco Crivellari <marco.crivellari@suse.com> | 2025-09-18 16:24:26 +0200 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2025-09-22 17:40:30 -0700 |
| commit | 5fd8bb982e10f29e856ef71072609af5ce55d281 | |
| tree | 3fe1eced876cffb4938404f42c5e4b6f997889f9 /net/devlink/core.c | |
| parent | 9870d350e45a5724ee25f77aa0b6d053c9b766db | |
net: replace use of system_wq with system_percpu_wq
Currently, when a user enqueues a work item with schedule_delayed_work(), the
workqueue used is "system_wq" (the per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again uses
WORK_CPU_UNBOUND.
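For context, the wrappers involved expand roughly as follows (a simplified
sketch paraphrasing include/linux/workqueue.h, not a verbatim copy of the
kernel headers):

```c
/* Simplified sketch of the pre-existing wrappers (cf. include/linux/workqueue.h). */
static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	/* WORK_CPU_UNBOUND: no specific CPU requested, the wq decides. */
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}

static inline bool schedule_work(struct work_struct *work)
{
	/* Always lands on the per-cpu system_wq. */
	return queue_work(system_wq, work);
}

static inline bool schedule_delayed_work(struct delayed_work *dwork,
					 unsigned long delay)
{
	/* Same: per-cpu system_wq, even when locality is irrelevant. */
	return queue_delayed_work(system_wq, dwork, delay);
}
```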
This lack of consistency cannot be addressed without refactoring the API.
system_unbound_wq should be the default workqueue so as not to enforce
locality constraints on random work whenever they are not required.
system_dfl_wq is added to encourage its use wherever unbound work is the
right choice. The old system_unbound_wq will be kept for a few release cycles.
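As an illustration of the intended caller-side split (a minimal sketch;
my_work, my_handler and the enqueue helpers are hypothetical, and the
system_dfl_wq / system_percpu_wq names come from the earlier patches in this
series):

```c
#include <linux/workqueue.h>

/* Hypothetical work item used only for illustration. */
static void my_handler(struct work_struct *work)
{
	/* ... do the actual work ... */
}

static DECLARE_WORK(my_work, my_handler);

/* Most callers: no locality requirement, use the unbound default wq. */
static void my_enqueue_default(void)
{
	queue_work(system_dfl_wq, &my_work);
}

/* Callers that genuinely depend on running on the queueing CPU. */
static void my_enqueue_percpu(void)
{
	queue_work(system_percpu_wq, &my_work);
}
```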
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Link: https://patch.msgid.link/20250918142427.309519-3-marco.crivellari@suse.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/devlink/core.c')
| -rw-r--r-- | net/devlink/core.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/net/devlink/core.c b/net/devlink/core.c
index 7203c39532fc..58093f49c090 100644
--- a/net/devlink/core.c
+++ b/net/devlink/core.c
@@ -320,7 +320,7 @@ static void devlink_release(struct work_struct *work)
 void devlink_put(struct devlink *devlink)
 {
 	if (refcount_dec_and_test(&devlink->refcount))
-		queue_rcu_work(system_wq, &devlink->rwork);
+		queue_rcu_work(system_percpu_wq, &devlink->rwork);
 }
 
 struct devlink *devlinks_xa_find_get(struct net *net, unsigned long *indexp)
```