| author | Caleb Sander Mateos <csander@purestorage.com> | 2025-10-31 14:34:28 -0600 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2025-11-03 08:31:26 -0700 |
| commit | 4531d165ee39edb315b42a4a43e29339fa068e51 (patch) | |
| tree | c506839e5616faf9dd2ec5e944bf52ad14ea8a17 | |
| parent | 8cd5a59e4d512c6e1df47bf8ce60f7d16e4b3c18 (diff) | |
io_uring: only call io_should_terminate_tw() once for ctx
io_fallback_req_func() calls io_should_terminate_tw() on each req's ctx.
But since the reqs all come from the ctx's fallback_llist, req->ctx will
be ctx for all of the reqs. Therefore, compute ts.cancel as
io_should_terminate_tw(ctx) just once, outside the loop.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
| -rw-r--r-- | io_uring/io_uring.c | 5 |
1 file changed, 2 insertions(+), 3 deletions(-)
```diff
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 93a1cc2bf383..4e6676ac4662 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -289,10 +289,9 @@ static __cold void io_fallback_req_func(struct work_struct *work)
 	percpu_ref_get(&ctx->refs);
 	mutex_lock(&ctx->uring_lock);
-	llist_for_each_entry_safe(req, tmp, node, io_task_work.node) {
-		ts.cancel = io_should_terminate_tw(req->ctx);
+	ts.cancel = io_should_terminate_tw(ctx);
+	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
 		req->io_task_work.func(req, ts);
-	}
 	io_submit_flush_completions(ctx);
 	mutex_unlock(&ctx->uring_lock);
 	percpu_ref_put(&ctx->refs);
 }
```